Dynamic Assets API
This documentation provides a complete guide for integrating with the Dynamic Assets API of Hyperfox. The API allows you to manage dynamic data structures through a RESTful interface with OAuth 2.0 authentication.
Concepts
Data model: Master Data vs. Reference data
Hyperfox is built on a flexible architecture that uses dynamic assets which can vary between configurations.
Every configuration includes a standard set of assets and fields that are automatically deployed and cannot be modified by users. These are referred to as Master Data. Master Data represents the core, immutable data structure that forms the foundation of every Hyperfox configuration.
Each configuration can define custom assets to meet specific business requirements. These are referred to as Reference Data. The structure of Reference Data can be altered in order to serve specific needs.
| Aspect | Master Data | Reference Data |
|---|---|---|
| Customizable | No | Yes |
| Flag | supporting_data: false | supporting_data: true |
| Examples | dyn_stock_order, dyn_order_line | dyn_item, dyn_party |
| Deployment | Automatic, standard | Automatic and configurable |
Configuration Models
Each model has its own Master Data structure tailored to its specific use case. The structure of the assets differs depending on which configuration model your implementation follows:
Stock Model
Optimized for inventory and warehouse management scenarios.
| Asset | Purpose | Supporting Data |
|---|---|---|
| dyn_stock_order | Stores all order information. | No |
| dyn_order_line | Stores all order line information. | No |
| dyn_item | Needs to be filled with product information. | Yes |
| dyn_party | Needs to be filled with customer information. | Yes |
| dyn_unit_code | Needs to be filled with the unit codes in which a product can be ordered (e.g. piece, box). | Yes |
| dyn_item_units_of_measure | Can be filled with palletization information. | Yes |
| dyn_party_items | Can be used to determine which products a customer is allowed to order. | Yes |
| dyn_order_history | Can be used to store the order history per client, so that Hyperfox can consult past orders to predict new ones more accurately. | Yes |
Transport Model
Optimized for logistics and transportation workflows.
| Asset | Purpose | Supporting Data |
|---|---|---|
| dyn_transport_order | Stores all transport information, e.g. overall notes, transport reference. | No |
| dyn_consignment | Stores information regarding the specific consignment(s) related to a transport, e.g. loading and unloading information. | No |
| dyn_goods | Stores information about the goods being transported, e.g. the specific product, measurements, weight, quantity. | No |
| dyn_allowance_charge | Stores information about charges related to a transport. | No |
| dyn_party | Needs to be filled with customer information. | Yes |
| dyn_location | Needs to be filled with address data. | Yes |
| dyn_packing_type | Needs to be filled with the different packing types which can be transported, e.g. pallets, big bags. | Yes |
| dyn_allowance_charge_reason | Needs to be filled with the different allowance charges, e.g. fuel charges, Maut. | Yes |
Order Status Update
When an order is validated and approved within Hyperfox, the data is sent over the connected integration of your configuration. The order is then shown with the status submitted within the archive. Any connected integration can update the processing_status and set it to any of the following:
Valid States
- pending_validation — A new order created by the platform.
- saved — An order that has had some data altered and has been saved.
- validated — All required fields have been filled out and the order can be exported.
- rejected — The order does not need to be exported.
- submitted — The order has been sent to the external integration.
- submit_failed — The external integration errored or denied the order. The integration can return an error message, which is shown in the Hyperfox interface.
- completed — The order has been successfully received by the external integration.
Content Type
All requests should use Content-Type: application/json.
Response Format
All successful responses return data in the following format:
{
"data": {
"id": "uuid",
"field1": "value1",
"field2": "value2",
"created_at": "2024-01-01T00:00:00.000000Z",
"updated_at": "2024-01-01T00:00:00.000000Z"
}
}
For collections (list endpoints), the response includes pagination:
{
"data": [
{
"id": "uuid",
"field1": "value1"
}
],
"links": {
"first": "url",
"last": "url",
"prev": null,
"next": "url"
},
"meta": {
"current_page": 1,
"from": 1,
"last_page": 5,
"per_page": 25,
"to": 25,
"total": 100
}
}
Authentication
The API uses OAuth 2.0 Client Credentials flow. First, obtain an access token using the OAuth Token Request, then use it in the Authorization header for all subsequent requests.
Creating credentials
Credentials can be generated within the Hyperfox tenant by a user with the admin role. Users can create, remove and regenerate credentials by navigating to Settings → API Clients.
Obtaining an access token
To interact with the API, you need a valid access token in JWT (JSON Web Token) format. This token authenticates your requests and authorizes access to protected endpoints. The process involves using Basic Authentication to request the access token (JWT) via the /oauth/token endpoint.
This token will need to be refreshed after expiration; we provide the expires_in field in the token response to check validity.
Scopes
The Dynamic Assets API uses OAuth 2.0 Client Credentials flow with scope-based access control. Scopes define the level of access your application has to different types of data and operations.
When requesting an access token, you specify which scopes your application needs in the scope parameter. The access token will only grant permissions for the requested scopes, and any API calls requiring additional scopes will be rejected.
Available scopes
- read:reference_data — Read access to reference data (supporting_data = true)
- write:reference_data — Write access to reference data (supporting_data = true)
- read:order_data — Read access to master data of type orders (supporting_data = false)
- write:order_data — Write access to master data of type orders (supporting_data = false)
You can request multiple scopes in a single token by separating them with spaces:
{
"scope": "read:reference_data write:reference_data read:order_data"
}
OAuth Generate Token
Example request
POST /oauth/token
Content-Type: application/json
{
"grant_type": "client_credentials",
"client_id": "your_client_id",
"client_secret": "your_client_secret",
"scope": "read:reference_data write:reference_data"
}
Response
{
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9...",
"token_type": "Bearer",
"expires_in": 3600
}
Using the Token
Include the access token in the Authorization header for all API requests:
Authorization: Bearer your_access_token
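As a sketch, the token response above can be turned into a reusable header, with a freshness check driven by expires_in. The helper names are illustrative, not part of the API:

```python
import time

def auth_header(token_response):
    """Build the Authorization header from an /oauth/token response."""
    return {"Authorization": f"{token_response['token_type']} {token_response['access_token']}"}

def token_is_fresh(token_response, obtained_at, margin=60):
    """True while the token obtained at `obtained_at` (Unix time) is still
    usable, keeping a safety margin before the expires_in deadline."""
    return time.time() < obtained_at + token_response["expires_in"] - margin
```

When token_is_fresh returns False, request a new token via /oauth/token before making the next call.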
Environments
The Dynamic Assets API uses a structured URL format that includes your tenant name, environment, and API version. Understanding this structure is essential for correctly configuring your API integration. All API endpoints follow this base URL pattern.
Components
- tenant — Your organization's unique tenant identifier.
- environment — The environment you're connecting to (staging or production).
- table — The dynamic table name you're working with (e.g. dyn_party, dyn_item, dyn_order).
- endpoint — The specific endpoint path (e.g. assets, info).
Base URLs
- Production: https://{tenant}.api.hyperfox.cloud
- Staging: https://{tenant}.staging.testerfox.eu
Example API Endpoint
https://{tenant}.api.hyperfox.cloud/api/v1/dynamic-assets/dyn/{table}/assets
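A small helper can assemble these URLs from the components above. It assumes the staging host uses the same /api/v1 path structure as production:

```python
def asset_url(tenant, table, endpoint="assets", environment="production"):
    """Build a Dynamic Assets API URL from tenant, table, and endpoint."""
    if environment == "production":
        base = f"https://{tenant}.api.hyperfox.cloud"
    else:  # staging (assumed to share the same path layout)
        base = f"https://{tenant}.staging.testerfox.eu"
    return f"{base}/api/v1/dynamic-assets/dyn/{table}/{endpoint}"
```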
Postman Collection (with examples)
We have a Postman collection available with interactive examples. Download it to a local folder and import it into Postman for usage.
Instructions
- Make sure you have a Postman client available — either a locally installed version or the web-based version.
- Download and import the collection.
- Set the collection variables.
- Execute the Request Access Token request in the Authentication folder. You now have a valid access token for 1 hour. If you receive an error that the token is no longer valid, refresh it by invoking this endpoint again.
- You can now explore the different order endpoints.
Field Types & Constraints
Field Types
Dynamic assets support various field types:
- String — Text values
- LongText — Extended text content
- Integer — Whole numbers
- Decimal — Decimal numbers
- Date — Date values (YYYY-MM-DD)
- DateTime — Date and time values (ISO 8601)
- Boolean — true/false values
Field Constraints
Fields can be:
- Required — Must be provided in create/update requests
- Nullable — Can be null or omitted
API Endpoints
GET List All Assets
Not yet available: an endpoint to fetch all asset types (and a way to see all assets in the interface) is planned.
GET Get Asset Info
Lists the full field schema of the asset {table}.
GET /api/v1/dynamic-assets/dyn/{table}/info
Parameters
- table — The table name of the asset type.
Response: Data object with all fields and relationships.
Required Scopes
- read:reference_data for reference data
- read:order_data for order data
GET List Assets Details
List all records of an asset {table}.
GET /api/v1/dynamic-assets/dyn/{table}/assets
Parameters
- table — The table name of the asset type.
Response: Paginated list of assets (25 per page). See Pagination on how to handle pagination.
Required Scopes
- read:reference_data for reference data
- read:order_data for order data
GET Get Single Asset Detail
Return a single record {asset_id} located in {table}.
GET /api/v1/dynamic-assets/dyn/{table}/assets/{asset_id}
Parameters
- table — The table name of the asset type.
- asset_id — The UUID of the asset.
Required Scopes
- read:reference_data for reference data
- read:order_data for order data
POST Create Asset
Adds a new item to asset {table}.
POST /api/v1/dynamic-assets/dyn/{table}/assets
Parameters
- table — The table name of the asset type.
Request Body: JSON object with asset fields.
Response: 201 Created with the created asset.
Required Scopes
- write:reference_data for reference data
- write:order_data for order data
PUT Update Asset
Updates the item with id {asset_id} in asset {table}.
PUT /api/v1/dynamic-assets/dyn/{table}/assets/{asset_id}
Parameters
- table — The table name of the asset type.
- asset_id — The UUID of the asset.
Request Body: JSON object with asset fields to update.
Response: 200 OK with the updated asset.
Required Scopes
- write:reference_data for reference data
- write:order_data for order data
POST Batch Upsert Assets
Update existing and insert new records within the asset {table}.
POST /api/v1/dynamic-assets/dyn/{table}/assets/batch
Parameters
- table — The table name of the asset type.
Request Body: JSON object containing an array of assets to upsert.
{
"items": [
{
"field1": "value1",
"field2": "value2"
},
{
"field1": "value3",
"field2": "value4"
}
]
}
Batch Rules
- Minimum 1 item, maximum 1000 items per batch.
- Each item is validated against the asset type's field requirements.
- Uses the asset type's unique index field for upsert operations (create if not exists, update if exists).
- All operations are performed within a database transaction.
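Given the 1000-item cap, a large synchronization has to be split across multiple batch requests. A minimal Python sketch that produces request bodies in the documented {"items": [...]} shape (the helper name is illustrative):

```python
def chunk_items(items, size=1000):
    """Split a list of asset dicts into batch request bodies of at most
    `size` items each, matching the documented batch rules."""
    if not items:
        raise ValueError("batch requires at least 1 item")
    return [{"items": items[i:i + size]} for i in range(0, len(items), size)]
```

Each returned body is then sent as a separate POST to the batch endpoint.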
Response: 200 OK with success message.
{
"data": {
"message": "Batch upsert completed successfully"
}
}
Required Scopes
- write:reference_data for reference data
- Order data cannot be written via batch operations.
Practical Example
This example demonstrates a mixed batch operation that both creates new records and updates existing ones in a single request. Assume we have a dyn_party table with a unique index on external_id:
POST /api/v1/dynamic-assets/dyn/dyn_party/assets/batch
Authorization: Bearer your_access_token
Content-Type: application/json
{
"items": [
{
"name": "New Party",
"street": "New Street",
"city": "New City",
"external_id": "party-1"
},
{
"name": "Newer Party",
"street": "Newer Street",
"city": "Newer City",
"external_id": "party-2"
}
]
}
In this example:
- If party-1 already exists, it will be updated with the new values.
- If party-2 doesn't exist, it will be created as a new record.
- The external_id field serves as the unique identifier for the upsert operation.
- Both operations succeed or fail together (atomic transaction).
- The id can be omitted because the asset's unique index is used.
Error Handling
If validation fails for any item in the batch, the entire operation is rejected:
{
"message": "Validation failed",
"errors": {
"items": ["The items field is required."],
"items.0.name": ["This field is required"],
"items.1.email": ["The email field must be a valid email address"]
}
}
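Because one invalid item rejects the entire batch, it can be worth pre-validating items client-side before sending. The sketch below mirrors the error shape shown above; the required field name is illustrative, and the real requirements come from the /info endpoint:

```python
def validate_items(items, required=("name",)):
    """Client-side pre-check that mimics the documented batch validation
    errors. Returns an errors dict in the same keyed format, empty if valid."""
    errors = {}
    if not items:
        errors["items"] = ["The items field is required."]
    for i, item in enumerate(items):
        for field in required:
            if not item.get(field):
                errors[f"items.{i}.{field}"] = ["This field is required"]
    return errors
```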
Batch Tracking with _batch_id
Reference data asset types (supporting_data: true) automatically include a system-managed _batch_id field on their database table. This field tracks which batch synchronization last wrote each record, enabling stale-data cleanup after a full synchronization.
When performing batch upserts on reference data, include a _batch_id value in each item to tag all records belonging to the same synchronization run:
{
"items": [
{
"name": "Party A",
"external_id": "party-1",
"_batch_id": "sync-2026-03-03-001"
},
{
"name": "Party B",
"external_id": "party-2",
"_batch_id": "sync-2026-03-03-001"
}
]
}
Key characteristics of _batch_id:
- Automatically added to all reference data tables — no manual configuration needed.
- Nullable — not required, but must be provided when you intend to use the Clean Up endpoint.
- Hidden from standard API responses (system field prefixed with _).
After completing a full data sync via batch upsert, use the Clean Up endpoint to remove records that were not included in the latest batch.
POST Clean Up Assets
Remove stale records from a reference data asset {table} that were not part of the most recent batch synchronization. This endpoint is only available for reference data (supporting_data: true).
POST /api/v1/dynamic-assets/dyn/{table}/assets/clean-up
Parameters
- table — The table name of the asset type.
Request Body
{
"batch_id": "sync-2026-03-03-001"
}
| Field | Type | Required | Description |
|---|---|---|---|
| batch_id | string | Yes | The batch identifier that was used in the preceding batch upsert. |
How it works
The endpoint deletes all records in the table whose _batch_id does not match the provided batch_id. This effectively removes any data that was not included in the most recent synchronization.
Response: 200 OK
{
"data": {
"message": "Clean up completed successfully",
"deleted": 42
}
}
Required Scopes
write:reference_data
Restrictions
Only available for reference data asset types (supporting_data: true). Attempting to clean up master data will return an error:
{
"message": "This asset type cannot be cleaned up",
"error": "This operation can only be executed safely on reference data"
}
Typical Data Synchronization Workflow
The _batch_id and clean up mechanism work together to keep reference data in sync with an external system:
- Batch Upsert — Send all current records with a unique _batch_id (e.g. a timestamp or sync run identifier).
- Clean Up — Call the clean up endpoint with the same batch_id to remove records that were not part of the sync.
- Result — The table now contains exactly the records from the latest synchronization.
Example
Step 1: Batch upsert with _batch_id = "sync-001"
→ Records A, B, C are upserted with _batch_id = "sync-001"
→ Record D (from previous sync) still has _batch_id = "sync-000"
Step 2: Clean up with batch_id = "sync-001"
→ Record D is deleted (its _batch_id ≠ "sync-001")
→ Records A, B, C remain
Result: Table contains only the current dataset
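The two steps above can be modeled locally. This is a simulation of the documented semantics for illustration, not an API client; a dict keyed by the unique index stands in for the asset table:

```python
def batch_upsert(table, items, batch_id, key="external_id"):
    """Local model of batch upsert: create-or-update each item by its
    unique key, stamping every written record with the batch id."""
    for item in items:
        record = dict(item, _batch_id=batch_id)
        table[item[key]] = {**table.get(item[key], {}), **record}

def clean_up(table, batch_id):
    """Local model of clean up: delete every record whose _batch_id does
    not match, and return how many were removed."""
    stale = [k for k, rec in table.items() if rec.get("_batch_id") != batch_id]
    for k in stale:
        del table[k]
    return len(stale)
```

Running the documented example through this model: record D from sync-000 survives the upsert but is removed by clean_up with batch_id "sync-001", leaving only A, B, and C.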
Error Handling
HTTP Status Codes
- 200 — Success
- 201 — Created (for POST requests)
- 401 — Unauthorized (invalid or missing token)
- 403 — Forbidden (insufficient scopes)
- 404 — Not Found (asset or table doesn't exist)
- 422 — Validation Error (invalid data)
- 500 — Server Error
Error Response Format
Validation Errors (422):
{
"message": "Validation failed",
"errors": {
"field_name": [
"This field is required"
],
"another_field": [
"This field must be a valid email"
]
}
}
Other Errors:
{
"message": "Error description",
"error": "Detailed error information"
}
Pagination
List endpoints in the Dynamic Assets API return paginated results to ensure optimal performance when working with large datasets. The API uses page-based pagination with a default page size of 25 items.
To retrieve paginated data, add the page query parameter to your request. If omitted, the API returns the first page by default.
Example request
GET /api/v1/dynamic-assets/dyn/dyn_party/assets?page=1
Response structure
Paginated responses contain three main sections:
Data Array
Contains the actual items for the current page.
Links Object
Provides pre-built URLs for easy navigation:
- first — URL to the first page.
- last — URL to the last page.
- prev — URL to the previous page; null if on the first page.
- next — URL to the next page; null if on the last page.
Meta Object
Contains pagination metadata:
- current_page — The current page number.
- from — Index of the first item on this page.
- to — Index of the last item on this page.
- per_page — Number of items per page (always 25).
- last_page — Total number of pages.
- total — Total number of items across all pages.
Important Notes
- The page parameter accepts positive integers starting from 1.
- Requesting a page number beyond last_page returns an empty data array.
- The per_page value is fixed at 25 items and cannot be changed.
- All list endpoints follow this pagination structure consistently.
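Putting the pagination rules together, a generator can walk every page of a list endpoint. Here fetch_page is an injected stand-in for whatever HTTP call you use; it takes a page number and must return the documented {"data": [...], "meta": {...}} shape:

```python
def iter_assets(fetch_page):
    """Yield every asset across all pages of a paginated list endpoint,
    stopping once the meta.last_page page has been consumed."""
    page = 1
    while True:
        body = fetch_page(page)
        yield from body["data"]
        if page >= body["meta"]["last_page"]:
            break
        page += 1
```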
Webhooks
Users with the admin role can configure webhooks that subscribe to event data in Hyperfox (e.g. order creation). Webhooks are created and managed via Settings → Webhook.
All webhooks contain 2 properties in the root:
- validated_order {} — holds all data related to the order.
- custom_data {} — holds all custom properties which can be added to the configuration.
Secure data transfer
Webhooks can only be sent to HTTPS encrypted endpoints.
Webhook validation
The content of the webhook can be verified against the Signature value embedded in the header data. To verify it, compute an HMAC-SHA256 (RFC 4868) hash of the raw request body using the secret that was generated during the webhook setup in Hyperfox; the resulting hash should match the value of the Signature header.
Example header information
{
"Signature": "bb463fed379eeffa4c0828d7ee6b04f08cccc0842a6276f1f2e5073d365cd54b",
"Content-Type": "application/json"
}
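A verification sketch in Python, assuming the Signature header is a lowercase hex HMAC-SHA256 digest of the raw request body, as in the example header above:

```python
import hashlib
import hmac

def verify_signature(secret, raw_body, signature_header):
    """Recompute the HMAC-SHA256 digest of the raw webhook body and compare
    it to the Signature header in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always verify against the raw bytes of the request body, before any JSON decoding, since re-serialized JSON may not match byte for byte.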
Webhook Error Handling
Our webhook mechanism attempts to deliver the webhook data up to 5 times, with an exponential backoff between retries. If delivery still fails after the 5th attempt, the webhook hard-fails.
The corresponding order then bounces back to the To Validate screen with the status failed. A failure reason is visible within the details of the corresponding order.
Important Notes
- Field Requirements — Each asset type has its own field schema. Required fields must be included in create/update requests.
- UUIDs — All asset IDs are UUIDs, not sequential integers.
- Scope Requirements — Always ensure your OAuth token has the appropriate scopes for the operations you need to perform.
- Rate Limiting — Be mindful of rate limits (specific limits depend on server configuration).
- Batch Operations — Batch upsert operations use unique indexes to determine whether to create or update records. Ensure your asset type has appropriate unique indexes configured.
- Data Synchronization — When using batch upsert with _batch_id and the clean up endpoint, always ensure you use the same batch_id value across both requests. Calling clean up with a wrong batch_id will delete records unintentionally.
This documentation is based on API version 1 and is subject to updates.