# MASTER STRUCTURED INSTRUCTIONS
# * VERSION 1
# * CONTAINS INSTRUCTIONS FOR A DESIRED STRUCTURED OUTPUT THAT MUST BE STRICTLY ENFORCED
# Elton Boehnen Email: boehnenelton2024@gmail.com GitHub: github.com/boehnenelton Portfolio: boehnenelton2024.pages.dev
* No Markdown
* Every field is positionally aligned with its value: each record must contain exactly as many values as there are fields.
* Think of it as a spreadsheet: every set of values is a record, and you create one record every time you send a message. I want file1, file2, file3, etc. Each sent file has, at minimum, a file name and its content; the content field is where you post each file you send me.
* If you want to comment or speak, speak through the Gemini_Response value.
* If you are not using a certain aspect of the schema, leave it null.
* Projects should be assigned a name and a version.
* The version should increment by 1 each time there's a change to any files in the project.
* You should generate a project name if I don't give you one.
* If you're posting files that need to be aligned properly within a directory structure, use project_tree. Example:
/project_name/example_file1.extension
/project_name/example_config/config1.extension
* REQUIRED SCHEMA:
{
"Format": "BEJSON",
"Format_Version": "104a",
"Format_Creator": "Standard Template",
"Parent_Hierarchy": "",
"Records_Type": [
"Project"
],
"Fields": [
{
"name": "zip_file_name",
"type": "string"
},
{
"name": "project_tree",
"type": "string"
},
{
"name": "project_version",
"type": "string"
},
{
"name": "file1_name",
"type": "string"
},
{
"name": "file1_description",
"type": "string"
},
{
"name": "file1_content",
"type": "string"
},
{
"name": "file2_name",
"type": "string"
},
{
"name": "file2_description",
"type": "string"
},
{
"name": "file2_content",
"type": "string"
},
{
"name": "file3_name",
"type": "string"
},
{
"name": "file3_description",
"type": "string"
},
{
"name": "file3_content",
"type": "string"
},
{
"name": "file4_name",
"type": "string"
},
{
"name": "file4_description",
"type": "string"
},
{
"name": "file4_content",
"type": "string"
},
{
"name": "file5_name",
"type": "string"
},
{
"name": "file5_description",
"type": "string"
},
{
"name": "file5_content",
"type": "string"
}
],
"Values": [
[
"",
"",
"100",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
]
]
}
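A minimal sketch of the positional-alignment rule above, assuming the document has already been parsed into a Python dict. The function name `check_alignment` is illustrative, not part of any BEJSON library.

```python
import json

def check_alignment(doc: dict) -> bool:
    """Return True if every record in Values has one value per Field."""
    field_count = len(doc["Fields"])
    return all(len(record) == field_count for record in doc["Values"])

# Small 104a-style document with 2 fields and one 2-value record.
doc = json.loads("""
{
  "Format": "BEJSON",
  "Format_Version": "104a",
  "Fields": [{"name": "file1_name", "type": "string"},
             {"name": "file1_content", "type": "string"}],
  "Values": [["main.py", "print('hi')"]]
}
""")
print(check_alignment(doc))  # True: 2 fields, 2 values per record
```

Running this check before emitting a record catches the most common schema violation: a mismatched field/value count.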
#### ENVIRONMENT STRUCTURE ####
# - - - - - - - #
* Source BEJSON_LIB1.py instead of BEJSON_Standard_Lib
# LIBRARY: BEJSON_LIB1.py
# FORMATS: 104, 104a, 104db (Unified Core)
# VERSION: 2.1.0 (Merged / Extended)
# DATE: 2025-12-25
# DESCRIPTION: Single engine for creating, loading, and manipulating all BEJSON formats.
# - Preserves original
# - Adds functions: parsing fields, extracting column values, exporting CSV, simple XML export, markdown-to-fields import, sorting, validation, subset/export helpers, and other utilities that operate on the currently loaded BEJSON document in memory.
* Added PYDROID_LIBRARIES folder and env var. New home for fallback sourcing library files (second to the script subdirectory).
* Scripts are now required to list, in their header, any library dependencies they have along with each library's version. Libraries must be versioned at the top of their header, and the version must be incremented on every change.
* LIBRARIES ARE TO BE LOOKED FOR IN THE SCRIPT'S OWN PATH FIRST, THEN IN THE LIBRARIES PATH AS A FALLBACK.
* SCRIPTS SHOULD PRIORITIZE PORTABILITY ABOVE ALL.
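The lookup order above can be sketched as a small resolver. This is a hedged example: the function name `find_library` and its parameters are my own, not part of BEJSON_LIB1.py.

```python
import os

def find_library(lib_name: str, script_dir: str, libraries_dir: str):
    """Return the first existing path for lib_name, or None if absent."""
    # Check the script's own directory first, then the Libraries path.
    for base in (script_dir, libraries_dir):
        candidate = os.path.join(base, lib_name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

A script would typically call this with its own directory and TERMUX_LIBRARIES (or PYDROID_LIBRARIES), keeping the script portable when the Libraries folder is missing.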
! REQUIRED: !
- Source with this line at the top of every script:
- TERMUX/BASH (or in ~/.bashrc):
source "/storage/emulated/0/Home/Termux/Global/env_paths.sh"
- PYDROID/PYTHON:
import sys; sys.path.insert(0, "/storage/emulated/0/Home/Pydroid/Global")
from env_paths import *
AVAILABLE PATH VARIABLES (after sourcing the correct file)
SDHOME = /storage/emulated/0/Home
SERVER_HOME = ${SDHOME}/Server
SERVER_APPS = ${SDHOME}/Server/Apps
SERVER_TEMPLATES = ${SDHOME}/Server/Templates
SERVER_DB = ${SDHOME}/Server/Databases
SERVER_SESSIONS = ${SDHOME}/Server/Sessions
TERMUX_HOME = ${SDHOME}/Termux
TERMUX_SCRIPTS = ${SDHOME}/Termux/Scripts
TERMUX_GLOBAL = ${SDHOME}/Termux/Global
TERMUX_LIBRARIES = ${SDHOME}/Termux/Libraries
TERMUX_APPDATA = ${SDHOME}/Termux/AppData
TERMUX_PROCESSING_IN = ${SDHOME}/Termux/Processing/In
TERMUX_PROCESSING_OUT = ${SDHOME}/Termux/Processing/Out
TERMUX_PROCESSING_COMPLETED = ${SDHOME}/Termux/Processing/Completed
TERMUX_LINK_REACT_APPS = ${SDHOME}/Termux/Link/React/Apps
TERMUX_LINK_GIT_REPOSITORIES = ${SDHOME}/Termux/Link/Git/Repositories
TERMUX_LINK_FFMPEG_MEDIA = ${SDHOME}/Termux/Link/FFMpeg/Media
PYDROID_HOME = ${SDHOME}/Pydroid
PYDROID_SCRIPTS = ${SDHOME}/Pydroid/Scripts
PYDROID_APPDATA = ${SDHOME}/Pydroid/AppData
PYDROID_GLOBAL = ${SDHOME}/Pydroid/Global
PYDROID_TEMPLATES = ${SDHOME}/Pydroid/Templates
PYDROID_LEGACY = ${SDHOME}/Pydroid/Legacy
PYDROID_BACKUPS = ${SDHOME}/Pydroid/Backups
DATA_HOME = ${SDHOME}/Data
DATA_PROMPTS = ${SDHOME}/Data/Prompts
DATA_STANDARDS = ${SDHOME}/Data/Standards
DATA_CONTEXT = ${SDHOME}/Data/Context
DATA_SCHEMAS = ${SDHOME}/Data/Schemas
#### MIGRATION STEPS ####
# - - - - - #
1. Add the correct sourcing line to every script (Python or Bash as shown above).
2. Replace any previously hard-coded paths with the variables listed here.
3. Update imports to use the new BEJSON_LIB1.py (BEJSON_Extended_Lib).
If currently sourcing BEJSON_Standard_Lib, replace it with BEJSON_LIB1.py, unless the script is sourcing the library from its own path, in which case it may use whatever library it wants. Source from the Libraries folder only as a fallback; scripts should look in their own folder for their libraries first.
## APPROVED MODEL LIST ##
# - - - - - #
The following model IDs must be selectable via combo box if dealing with Gemini.
* All AI-integrated applications must provide a ComboBox for manual model selection.
* REQUIRED:
- gemini-2.5-flash (DEFAULT)
- gemini-flash-latest
- gemini-2.5-pro
- gemini-flash-lite-latest
- gemini-2.0-flash
* OPTIONAL MODELS: (BY MY REQUEST, REMOVE OTHERWISE)
- gemini-3-flash-preview
- gemini-3-pro-preview
* PASSIVE INITIALIZATION
Libraries must NOT "test" keys on startup. This burns quota
unnecessarily. Keys are only tried at the moment of generation.
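A hedged sketch of passive initialization: keys are stored untested and only exercised at generation time. The `KeyRing` class and the `generate_fn` callback are stand-ins, not a real Gemini client API.

```python
class KeyRing:
    def __init__(self, keys):
        self.keys = list(keys)   # no validation ping here -- saves quota

    def generate(self, prompt, generate_fn):
        """Try each key only at the moment of generation; return the
        first successful result, or raise with the last raw error."""
        last_error = None
        for key in self.keys:
            try:
                return generate_fn(key, prompt)
            except Exception as exc:   # key failed; keep the raw error
                last_error = exc
        raise RuntimeError(f"All keys failed: {last_error}")
```

Because the constructor does nothing but store the keys, startup costs no quota at all.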
#### MISC STANDARDS ####
* THE "BLACK BOX" PROHIBITION
Applications are strictly prohibited from "hanging" without feedback.
Users must know exactly what the engine is doing at all times.
* MANDATORY STATUS BAR
A high-visibility Status Bar or Indicator Label is required.
It must reflect the following states:
- IDLE: System ready.
- SENDING: Query has left the device.
- PROCESSING: Google has received the query and is generating.
- SUCCESS: Data received and parsed.
- ERROR: Specific failure reason (e.g., "429 QUOTA EXHAUSTED").
* BUTTON LOCKOUT
During an active query, the "GENERATE" button must be disabled and
renamed (e.g., "WORKING...") to prevent duplicate pings and quota waste.
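The status states and button lockout above can be modeled without a real UI toolkit. This is a hedged sketch: `QueryGuard` and its attribute names are illustrative stand-ins for whatever status bar and button widgets the application uses.

```python
STATES = ("IDLE", "SENDING", "PROCESSING", "SUCCESS", "ERROR")

class QueryGuard:
    def __init__(self):
        self.state = "IDLE"            # System ready.
        self.button_label = "GENERATE"
        self.button_enabled = True

    def begin(self):
        self.state = "SENDING"         # Query has left the device.
        self.button_enabled = False    # Lockout: prevent duplicate pings.
        self.button_label = "WORKING..."

    def finish(self, ok: bool, detail: str = ""):
        # Mirror the specific failure reason into the status text.
        self.state = "SUCCESS" if ok else f"ERROR: {detail}"
        self.button_enabled = True
        self.button_label = "GENERATE"
```

A real application would bind `state` to the status bar label and `button_enabled`/`button_label` to the GENERATE button.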
* FILE PROCESSING
* Before processing a potentially large batch of files, first compute the total count of files to be processed and report it, then give live feedback during processing showing the totals remaining and completed.
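A minimal sketch of the count-first, live-feedback pattern; the function name and report strings are illustrative.

```python
def process_with_progress(files, handler, report=print):
    """Count first, announce the total, then report progress per file."""
    total = len(files)                 # pre-count before any work starts
    report(f"TOTAL: {total}")
    for done, path in enumerate(files, start=1):
        handler(path)
        report(f"COMPLETED: {done}  REMAINING: {total - done}")
```

Passing a `report` callback lets the same loop drive either console output or a UI status bar.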
* NON-BLOCKING UI
AI queries must NEVER run on the main UI thread. This causes the
application to appear "Crashed" or "Frozen" on Android.
* PERSISTENT INDEXING
The library must save the index of the "Last Working Key" to a
state file (gemini_state.json). On the next run, the engine starts
at that index, skipping known-failed or lower-priority keys.
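A hedged sketch of persisting the last working key index to gemini_state.json as described above; the file name comes from the text, the function names are my own.

```python
import json, os

STATE_FILE = "gemini_state.json"

def save_last_working_index(index: int, path: str = STATE_FILE) -> None:
    """Record the index of the key that last succeeded."""
    with open(path, "w") as fh:
        json.dump({"last_working_key_index": index}, fh)

def load_last_working_index(path: str = STATE_FILE) -> int:
    """Start at the saved index; default to the first key if no state."""
    if not os.path.isfile(path):
        return 0
    with open(path) as fh:
        return json.load(fh).get("last_working_key_index", 0)
```

On startup the engine loads this index and begins its key rotation there, skipping known-failed or lower-priority keys.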
* RAW ERROR MIRRORING
If a key fails, the library must pass the RAW error string from
Google back to the UI Status Bar. This allows the user to distinguish
between a "Safety Block," a "Quota Error," or a "Connection Issue."
##### SCRIPT HEADERS #####
# - - - - - #
* Script Identity:
Define SCRIPT_NAME (underscores, no spaces) and SCRIPT_VERSION (starts at 100).
* Crediting: Header comment must include:
SCRIPT_NAME,
SCRIPT_VERSION
Elton Boehnen boehnenelton2024@gmail.com
#### PATHING FALLBACK ####
# - - - - - #
* BEST PRACTICES EXAMPLE:
if 'SERVER_SESSIONS' not in globals(): SERVER_SESSIONS = os.path.join(SCRIPT_DIR, "Server_Sessions")
if 'PYDROID_SCRIPTS' not in globals(): PYDROID_SCRIPTS = SCRIPT_DIR
if 'PYDROID_GLOBAL' not in globals(): PYDROID_GLOBAL = os.path.join(SCRIPT_DIR, "Global")
if 'DATA_HOME' not in globals(): DATA_HOME = os.path.join(SCRIPT_DIR, "Data")
if 'DATA_CONTEXT' not in globals(): DATA_CONTEXT = os.path.join(DATA_HOME, "Context")
###### BEJSON CONTEXT #####
# - - - - - #
Chapter 1: Introduction to BEJSON 104
BEJSON Format Version 104 serves as the foundational, strict standard for structured data interchange. Its core purpose is to enforce a single-record-type standard, meaning every BEJSON 104 file must define and contain records of only one entity type (e.g., only 'Book' records or only 'Chapter' records). This strict policy ensures maximum predictability, consistency, and efficiency in parsing, making it ideal for applications requiring fixed, table-like data structures.
Mandatory Top-Level Keys and Strict Header Policy
To maintain the high integrity and predictability of the format, BEJSON 104 enforces a strict header policy. Only the following six top-level keys are permitted in a Version 104 document. Custom or application-specific keys are explicitly forbidden, ensuring that external systems can always rely on a fixed metadata structure:
Format: Must contain the exact string value "BEJSON".
Format_Version: Must contain the exact string value "104".
Format_Creator: Must contain the exact string value "Elton Boehnen".
Records_Type: An array containing precisely one string, defining the sole entity type contained in the document.
Fields: An array of objects that meticulously defines the name and data type (string, integer, number, boolean, array, object) for every attribute in the record.
Values: An array of arrays, where each inner array represents a single data record. The values within each record must correspond positionally and strictly adhere to the type defined in the corresponding entry in the Fields array.
This strict adherence to a fixed set of mandatory headers and a single record type is the defining characteristic of BEJSON 104.
Basic Schema Example
The following schema illustrates the fundamental structure by defining a simple Book record. It demonstrates how the mandatory keys are used to define the schema and populate the data.
SCHEMA:
{
"Format": "BEJSON",
"Format_Version": "104",
"Format_Creator": "Elton Boehnen",
"Records_Type": [
"Book"
],
"Fields": [
{
"name": "book_id",
"type": "string"
},
{
"name": "title",
"type": "string"
},
{
"name": "author",
"type": "string"
},
{
"name": "publication_year",
"type": "integer"
}
],
"Values": [
[
"B001",
"The Martian Chronicles",
"Ray Bradbury",
1950
],
[
"B002",
"Dune",
"Frank Herbert",
1965
]
]
}
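The strict six-key header policy above can be checked mechanically. This is a minimal sketch; `validate_104_headers` is illustrative and not part of the BEJSON libraries named elsewhere in this document.

```python
ALLOWED_104_KEYS = {"Format", "Format_Version", "Format_Creator",
                    "Records_Type", "Fields", "Values"}

def validate_104_headers(doc: dict):
    """Return a list of policy violations (an empty list means valid)."""
    errors = []
    extra = set(doc) - ALLOWED_104_KEYS
    if extra:
        errors.append(f"Forbidden top-level keys: {sorted(extra)}")
    missing = ALLOWED_104_KEYS - set(doc)
    if missing:
        errors.append(f"Missing mandatory keys: {sorted(missing)}")
    if doc.get("Format") != "BEJSON":
        errors.append('Format must be "BEJSON"')
    if doc.get("Format_Version") != "104":
        errors.append('Format_Version must be "104"')
    return errors
```

Any custom top-level key is rejected outright, which is exactly the property that distinguishes 104 from the more permissive 104a variant.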
Chapter 2: Core Concepts and Complex Types in BEJSON 104
While BEJSON 104 enforces a strict, single-record-type structure, it provides robust support for modeling complex, hierarchical data within those records through the inclusion of two complex data types: array and object. These types are defined in the Fields array just like primitive types, allowing data consumers to anticipate and process nested structures efficiently.
Support for Complex Types: 'array' and 'object'
Array (array): This type is used when a field needs to hold multiple values. It allows for multi-valued attributes within a single record, such as a list of tags, contributors, or historical dates. When an array is defined in Fields, the corresponding value in the Values array must be a JSON array. Crucially, BEJSON 104 does not require defining the types of elements within the array, offering flexibility for lists of mixed or unspecified types.
Object (object): This type enables the representation of nested, structured data. An object field allows a record to contain a sub-structure of named key-value pairs (a JSON object). This is ideal for grouping related attributes logically, such as embedding detailed location information, dimensional specifications, or version control metadata directly within the parent record.
The use of array and object types ensures that BEJSON 104 documents remain highly expressive, capable of representing rich, nested data models while adhering to the strict top-level schema policy.
The Parent_Hierarchy Convention
To enhance data organization and discoverability, BEJSON 104 introduces the conventional field Parent_Hierarchy. Although this field is not a mandatory system header (as custom headers are forbidden in 104), it is highly recommended to include it as a standard string field within the Fields array.
The Parent_Hierarchy field serves as a logical path, similar to a file system directory structure (e.g., "Documents/Reports/2024"). This convention provides essential context, answering the question of where the data logically resides within a larger data ecosystem. Its consistent application across BEJSON 104 files allows external systems to build a navigable, file-system-like view of the data collection, significantly improving data governance and integration.
Schema Example: Complex Types and Parent_Hierarchy
SCHEMA:
{
"Format": "BEJSON",
"Format_Version": "104",
"Format_Creator": "Elton Boehnen",
"Records_Type": [
"Document"
],
"Fields": [
{
"name": "document_id",
"type": "string"
},
{
"name": "document_title",
"type": "string"
},
{
"name": "parent_hierarchy",
"type": "string"
},
{
"name": "keywords",
"type": "array"
},
{
"name": "document_details",
"type": "object"
}
],
"Values": [
[
"DOC001",
"BEJSON 104 Specification",
"Documentation/Standards/BEJSON",
[
"format",
"schema",
"104"
],
{
"version": "1.0",
"author": "Elton Boehnen",
"status": "Final"
}
],
[
"DOC002",
"Deployment Guide Q3",
"Operations/Guides/2024",
[
"deployment",
"guide",
"ops"
],
{
"version": "1.1",
"author": "Jane Doe",
"status": "Draft"
}
]
]
}
Chapter 3: Advanced Organization and Null Handling in BEJSON 104
BEJSON Format Version 104 achieves advanced data organization and integration through the strategic use of specific fields and the strict handling of missing data. While restricted to a single record type, the format leverages conventions and structural rules to ensure data is both contextually rich and structurally sound.
Leveraging Parent_Hierarchy for Provenance
The Parent_Hierarchy field, introduced in the previous chapter, is crucial for establishing data provenance and facilitating integration into larger data ecosystems. By embedding a logical path (e.g., "Projects/Q4/Marketing") directly within the record, BEJSON 104 files gain immediate contextual awareness. This path allows external systems—such as data lakes, documentation portals, or indexing services—to automatically categorize and relate the data without requiring external metadata lookups. The hierarchy acts as a universal key for discovery, transforming a flat list of records into a navigable, organized structure that clearly defines where the data originated and where it logically belongs.
Handling Optional Data with Null Values
One of the most critical structural requirements of BEJSON 104 is maintaining fixed-position structural integrity. Every record array within the Values array must contain the exact same number of elements as there are fields defined in the Fields array. This positional mapping is non-negotiable, as it allows parsers to efficiently map a value at index N to the field defined at index N in the schema.
When data for a specific field is optional or simply absent for a given record, the value must be explicitly set to null. Using null as a placeholder ensures that the record length remains consistent, thereby preserving the positional integrity of the entire dataset. This rule applies universally to all data types, including primitive types (string, integer) and complex types (array, object). If an optional array field is missing, its value must be null, not an empty array [], unless the empty array is semantically meaningful data.
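The fixed-position rule above is easy to verify in code. A minimal sketch, with Python's `None` standing in for JSON `null`; the function name is illustrative.

```python
def record_integrity_errors(fields, values):
    """Return a description of each record whose length breaks the
    positional mapping between Fields and Values."""
    expected = len(fields)
    errors = []
    for i, record in enumerate(values):
        if len(record) != expected:
            errors.append(
                f"record {i}: has {len(record)} values, expected {expected}")
    return errors
```

Note that a record padded with explicit `None` placeholders passes, while a short record is flagged, mirroring the rule that missing data must be `null` rather than omitted.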
Conclusion of BEJSON 104
BEJSON Format Version 104 stands as the benchmark for strict, predictable data representation. Its adherence to a single record type, strict header policy, and fixed-position data model makes it ideal for high-throughput data pipelines, archival systems, and applications where schema enforcement and parsing efficiency are paramount. The conventions of Parent_Hierarchy and the strict use of null for missing data conclude the core structural definition of this foundational format, paving the way for its specialized variants.
SCHEMA:
{
"Format": "BEJSON",
"Format_Version": "104",
"Format_Creator": "Elton Boehnen",
"Records_Type": [
"ProjectTask"
],
"Fields": [
{
"name": "task_id",
"type": "string"
},
{
"name": "task_name",
"type": "string"
},
{
"name": "parent_hierarchy",
"type": "string"
},
{
"name": "assigned_user",
"type": "string"
},
{
"name": "due_date",
"type": "string"
},
{
"name": "completion_date",
"type": "string"
},
{
"name": "dependencies",
"type": "array"
}
],
"Values": [
[
"T001",
"Draft Initial Proposal",
"Projects/Q4/Marketing",
"Alice Smith",
"2024-11-01",
null,
[
"T002",
"T003"
]
],
[
"T002",
"Review Legal Compliance",
"Projects/Q4/Marketing",
"Bob Johnson",
"2024-10-25",
"2024-10-24",
null
],
[
"T003",
"Finalize Budget",
"Projects/Q4/Finance",
"Charlie Brown",
null,
null,
null
]
]
}
Chapter 4: Introduction to BEJSON 104a - Flexibility for Automation
BEJSON Format Version 104a is a specialized variant of the foundational 104 specification, meticulously engineered for automation environments, configuration management, and scenarios where data simplicity and rich, file-level metadata are critical. While 104 emphasized strict uniformity across all documents, 104a introduces a strategic trade-off: increased flexibility in metadata coupled with a stringent restriction on data complexity.
Version 104a is defined by two key differences from its predecessor, 104, making it uniquely suited for automated systems:
Allowance of Custom Top-Level Headers (Metadata): Unlike BEJSON 104, which forbids any top-level keys beyond the six mandatory system keys, 104a explicitly permits the inclusion of custom top-level keys. These keys allow creators to embed application-specific or domain-specific metadata directly into the schema structure (e.g., Server_ID, Deployment_Environment, or Log_Retention_Policy). This feature transforms the BEJSON file into a highly self-describing configuration or data package, providing essential context without cluttering the record data itself.
Restriction to Primitive Types Only: To ensure maximum parsing speed and simplicity for automated systems, BEJSON 104a strictly limits the field definitions in the Fields array to primitive types: string, integer, number, and boolean. Complex types such as array and object are explicitly disallowed. This constraint forces the data into a flat, easily consumable structure, eliminating the overhead and complexity associated with parsing nested hierarchies.
BEJSON 104a in Automation Scenarios
This combination of primitive-only data and custom metadata is perfectly tailored for automation tasks, such as scheduled processes, configuration file management, and system health logging. Automation scripts often require data that is simple, predictable, and fast to process. For instance, a scheduled task runner needs to quickly read hundreds of configuration entries or log outputs. The flat structure of 104a ensures that data can be read and interpreted in a single pass without recursive parsing logic. Meanwhile, the custom headers provide immediate, machine-readable context (e.g., identifying the source server or the specific automation workflow that generated the file), which is crucial for monitoring, auditing, and routing the data within a larger system architecture.
The following schema illustrates a basic health status report, demonstrating the use of a custom top-level header and strict adherence to primitive field types.
SCHEMA:
{
"Format": "BEJSON",
"Format_Version": "104a",
"Format_Creator": "Elton Boehnen",
"Automation_Focus": "System Health Checks",
"Records_Type": [
"HealthStatus"
],
"Fields": [
{
"name": "check_id",
"type": "string"
},
{
"name": "system_name",
"type": "string"
},
{
"name": "status_code",
"type": "integer"
},
{
"name": "status_message",
"type": "string"
},
{
"name": "last_check_time",
"type": "string"
},
{
"name": "is_critical",
"type": "boolean"
}
],
"Values": [
[
"HC001",
"Database Server Prod",
200,
"Operational",
"2024-11-10T12:00:00Z",
false
],
[
"HC002",
"API Gateway 1",
503,
"Service Unavailable",
"2024-11-10T11:58:00Z",
true
],
[
"HC003",
"Log Processor",
200,
"Processing queue nominal",
"2024-11-10T12:01:00Z",
false
]
]
}
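The primitive-only restriction of 104a can be enforced with a one-line filter over the Fields array. A hedged sketch; `invalid_104a_fields` is my own name, not a library function.

```python
PRIMITIVES_104A = {"string", "integer", "number", "boolean"}

def invalid_104a_fields(fields):
    """Return the names of fields whose type is not allowed in 104a."""
    return [f["name"] for f in fields if f["type"] not in PRIMITIVES_104A]
```

A document declaring any `array` or `object` field would be rejected before parsing a single record, which keeps the automation fast path flat and single-pass.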
Chapter 5: Practical Applications of BEJSON 104a
BEJSON Format Version 104a is strategically designed to maximize efficiency and contextual awareness in systems that rely on simple, flat data structures. By enforcing primitive-only data types, 104a ensures rapid parsing, making it ideal for high-volume, automated processes. Simultaneously, its unique allowance for custom top-level headers transforms the file into a highly self-describing entity, embedding critical context directly where it is needed.
This chapter explores two primary use cases where 104a excels: managing application configuration files and standardizing automation script execution logs. In both scenarios, the ability to separate file-level metadata from record-level data is a significant advantage.
Use Case 1: Configuration Files
Configuration files often consist of simple lists of key-value pairs (e.g., database connection strings, feature flags, timeout values). These settings rarely require complex nested arrays or objects. BEJSON 104a provides a structured, schema-validated alternative to traditional INI or YAML files, ensuring that all configuration parameters adhere to defined types (string, integer, boolean).
Embedding Rich Context with Custom Headers:
For configuration, custom headers are invaluable for defining the environment and application scope. Instead of relying on the file path or external system variables, headers like Application_Name, Deployment_Region, or Environment_Status provide immediate, machine-readable context about the configuration file itself. A parser can instantly determine if the file applies to the 'Staging' environment or the 'US-East' region before processing a single configuration record.
Use Case 2: Automation Script Execution Logs
Automation script logs require speed and consistency. When thousands of scripts run daily, the log format must be simple enough for quick aggregation and analysis. Since log entries are fundamentally flat records (timestamp, status, duration, message), the primitive-only constraint of 104a is perfectly suited.
The Role of Custom Headers in Logging:
In logging, custom headers are essential for traceability and auditing. By embedding headers such as Source_System, Batch_ID, or Chapter_Title (for documentation purposes), the log file becomes self-aware. This context is vital for log processing pipelines, enabling them to route, filter, and prioritize logs based on high-level metadata without needing to inspect every individual record. For example, a log aggregator can filter all records originating from the Automation_Engine_A system running in the Production environment simply by reading the top-level headers.
Schema Demonstration: Contextual Execution Logs
The following schema demonstrates the practical application of BEJSON 104a for execution logs. Note how the custom headers (Chapter_Title, Source_System, Processing_Environment) provide rich context about the entire file, while the Fields array strictly defines the simple, primitive data structure of each log record.
SCHEMA:
{
"Format": "BEJSON",
"Format_Version": "104a",
"Format_Creator": "Elton Boehnen",
"Chapter_Title": "Practical Applications of BEJSON 104a",
"Source_System": "Automation_Engine_A",
"Processing_Environment": "Production",
"Records_Type": [
"ExecutionLogEntry"
],
"Fields": [
{
"name": "log_id",
"type": "string"
},
{
"name": "task_id",
"type": "string"
},
{
"name": "execution_timestamp",
"type": "string"
},
{
"name": "status",
"type": "string"
},
{
"name": "duration_seconds",
"type": "number"
},
{
"name": "triggered_by",
"type": "string"
},
{
"name": "is_error",
"type": "boolean"
}
],
"Values": [
[
"EL006",
"ST005",
"2024-11-10T03:00:15Z",
"SUCCESS",
15.7,
"Scheduler",
false
],
[
"EL007",
"ST006",
"2024-11-10T04:30:00Z",
"FAILURE",
0.5,
"Manual",
true
],
[
"EL008",
"ST005",
"2024-11-11T03:00:18Z",
"SUCCESS",
16.2,
"Scheduler",
false
]
]
}
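The header-based routing described in this chapter can be sketched as a simple top-level filter. This is a hedged example: `matches_headers` is illustrative, and the header names mirror the sample schema above.

```python
def matches_headers(doc: dict, **required) -> bool:
    """True if every required top-level header has the given value;
    records are never inspected, only file-level metadata."""
    return all(doc.get(key) == value for key, value in required.items())
```

A log aggregator could use this to keep only Production files from Automation_Engine_A without reading any individual log record.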
Chapter 6: Designing Custom Headers and Metrics in BEJSON 104a
BEJSON Format Version 104a distinguishes itself primarily through its support for custom top-level keys, often referred to as custom headers. This feature provides an unparalleled degree of flexibility for embedding application-specific metadata directly into the BEJSON document, without altering the core data structure or requiring external configuration. This capability is invaluable for creating self-contained data packages that carry their own contextual information, significantly improving data governance, traceability, and ease of integration into diverse systems.
Best Practices for Custom Header Design
When utilizing the custom header feature in 104a, adhering to naming conventions is crucial for maintaining readability and avoiding system conflicts:
Use PascalCase with Underscores for Naming: Custom headers should employ PascalCase words separated by underscores (e.g., Client_ID, Data_Source_Timestamp). This convention aligns with the naming style of the mandatory BEJSON system keys (e.g., Format_Version, Records_Type), ensuring visual and programmatic consistency across the document.
Avoid Reserved Key Collisions: Custom headers must not reuse the names of the six mandatory BEJSON top-level keys (Format, Format_Version, Format_Creator, Records_Type, Fields, Values). Using a clear namespace or prefix (like X- or an underscore, though PascalCase is preferred) can further mitigate the risk of collision with future BEJSON standard keys.
Scope to File-Level Metadata: Custom headers should strictly contain metadata about the document itself (e.g., the server that generated the data, the batch ID, the collection interval). Data that changes per record must be defined as fields within the Fields array.
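The reserved-key rule above reduces to a simple membership test. A minimal sketch; the function name is illustrative.

```python
RESERVED_KEYS = {"Format", "Format_Version", "Format_Creator",
                 "Records_Type", "Fields", "Values"}

def is_valid_custom_header(name: str) -> bool:
    """Reject any proposed custom header that would shadow one of the
    six mandatory BEJSON top-level keys."""
    return name not in RESERVED_KEYS
```

Running this check when building a 104a document guarantees custom metadata can never collide with the system header structure.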
Modeling Resource Utilization Metrics with Primitive Constraints
Since BEJSON 104a strictly prohibits complex types (array and object), modeling complex data like time-series resource utilization metrics (CPU, Memory) requires a flattened approach. Instead of grouping related metrics into an object or capturing multiple readings in an array, each individual metric reading must be represented as a separa