AI prompts define the instructions that drive automatic description generation and diagram creation in Dawiso. Each prompt is a template with access to object metadata, relationships, and hierarchy — resolved at runtime against live data.
Schema
The aiPrompts schema is minimal. Each prompt has exactly two properties:
| Property | Type | Required | Purpose |
|---|---|---|---|
| key | string | Yes | Unique identifier for the prompt |
| promptTemplate | string | Yes | Complete AI instruction template with expression placeholders |
No additional properties are allowed. The schema enforces additionalProperties: false.
```json
{
  "aiPrompts": [
    {
      "key": "core_snowflake_table_core_description",
      "promptTemplate": "You are a data governance expert creating concise, business-focused descriptions..."
    }
  ]
}
```
Key naming convention
The standard library uses a four-part naming pattern:
core_{technology}_{objectType}_{attributeType}
| Segment | Purpose | Examples |
|---|---|---|
| core | Package prefix | Always core for standard prompts |
| {technology} | Platform identifier | snowflake, power_bi, google_bigquery, dbt, keboola |
| {objectType} | Target object type | table, view, dataset, workspace, report |
| {attributeType} | Target attribute | core_description, core_summary |
Examples from the standard library:
- core_snowflake_table_core_description
- core_power_bi_dataset_core_description
- core_google_bigquery_view_core_description
- core_dbt_model_core_description
Custom packages can use any key format as long as it is unique.
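The four-part pattern can be checked mechanically. The following Python sketch is a hypothetical helper, not part of the Dawiso API; the regex encodes the assumption that the technology segment may contain underscores (power_bi, google_bigquery) while the object type is a single word and the attribute segment starts with core_.

```python
import re

# Hypothetical parser for the four-part standard-library key pattern.
# Regex and function names are illustrative, not part of any Dawiso API.
KEY_PATTERN = re.compile(
    r"^core_"                            # package prefix
    r"(?P<technology>[a-z0-9_]+?)_"      # platform; may contain underscores
    r"(?P<objectType>[a-z]+)_"           # target object type (single word)
    r"(?P<attributeType>core_[a-z_]+)$"  # target attribute
)

def parse_prompt_key(key: str) -> dict:
    """Split a standard-library prompt key into its segments."""
    m = KEY_PATTERN.match(key)
    if not m:
        raise ValueError(f"not a standard-library key: {key}")
    return m.groupdict()

print(parse_prompt_key("core_power_bi_dataset_core_description"))
# {'technology': 'power_bi', 'objectType': 'dataset', 'attributeType': 'core_description'}
```

The lazy quantifier on the technology segment lets the attribute segment's fixed core_ prefix disambiguate multi-word platform names.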
Template expression language
Prompt templates contain expressions that resolve against the current object’s metadata at runtime. The expression syntax uses double braces with method calls on currentObject.
Core expressions
| Expression | Returns | Purpose |
|---|---|---|
| {{currentObject}.name()} | string | Object display name |
| {{currentObject}.attribute('key')} | string | Value of an attribute on the object |
| {{currentObject}.parent(1).name()} | string | Name of the immediate parent object |
| {{currentObject}.parent(2).name()} | string | Name of the grandparent object |
| {{currentObject}.parent(N).name()} | string | Name of the Nth ancestor in the hierarchy |
Relationship expressions
| Expression | Returns | Purpose |
|---|---|---|
| {{currentObject}.relations('relationTypeKey')} | collection | Objects linked via a relation type |
| {{currentObject}.children('objectTypeKey')} | collection | Child objects of a specific type |
Collection iteration
Collections from relations() and children() support forEach for enumeration:
```
{{currentObject}.relations('core_code_lists_has_area').forEach(x => appendLine('- {0}', x.name()))}
```
The appendLine function takes a format string and substitution values. Inside the loop, x provides the same API as currentObject:
- x.name() — linked object name
- x.attribute('key') — linked object attribute value
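The forEach/appendLine combination can be emulated to understand what the resolved text looks like. The real expression engine is internal to Dawiso; the function and data shapes below are illustrative assumptions only.

```python
# Rough emulation of forEach + appendLine: render one formatted line per
# linked object. The real engine is internal to Dawiso; this is a sketch.
def append_lines(items, fmt: str) -> str:
    """Format each item's name with the given template, one per line."""
    return "".join(fmt.format(item["name"]) + "\n" for item in items)

areas = [{"name": "Finance"}, {"name": "Sales"}]
print(append_lines(areas, "- {0}"), end="")
# - Finance
# - Sales
```

Each linked object contributes one line, so a relation with no members simply produces no text in the resolved prompt.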
Chaining parent and attribute calls
Expressions can chain multiple levels of hierarchy and attribute access:
```
{{currentObject}.parent(1).name()}                       // Schema name
{{currentObject}.parent(2).name()}                       // Database name
{{currentObject}.attribute('core_snowflake_ddl')}        // DDL statement
{{currentObject}.attribute('core_description_scanned')}  // Scanned description from connector
```
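How these chained calls resolve can be sketched with a minimal object model. Everything below is an assumption for illustration; the actual metadata API is internal to Dawiso. Note that a missing attribute resolves to an empty string, matching the runtime behavior described in the Gotchas section.

```python
# Minimal sketch of chained parent/attribute resolution; the Obj class is
# a stand-in for Dawiso's internal metadata model, not a real API.
class Obj:
    def __init__(self, name, attributes=None, parent=None):
        self._name = name
        self._attributes = attributes or {}
        self._parent = parent

    def name(self):
        return self._name

    def attribute(self, key):
        # Missing attribute keys resolve to an empty string at runtime.
        return self._attributes.get(key, "")

    def parent(self, n):
        obj = self
        for _ in range(n):  # walk n levels up the hierarchy
            obj = obj._parent
        return obj

db = Obj("ANALYTICS_DB")
schema = Obj("PUBLIC", parent=db)
table = Obj("ORDERS", {"core_snowflake_ddl": "CREATE TABLE ORDERS ..."}, parent=schema)

print(table.parent(1).name())  # PUBLIC (schema name)
print(table.parent(2).name())  # ANALYTICS_DB (database name)
```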
Linking prompts to object types
AI prompts are not standalone — they must be linked to an object type attribute through the ai_automatic_generated_by feature. This feature tells the AI attribute generation pipeline which prompt to use for a specific attribute on a specific object type.
The feature is set on an attribute type within an object type definition:
```json
{
  "key": "my_object_type",
  "attributeTypes": [
    {
      "key": "core_description",
      "features": [
        {
          "key": "ai_automatic_generated_by",
          "value": "core_snowflake_table_core_description"
        }
      ]
    }
  ]
}
```
The value must match an existing aiPrompts[].key exactly. See Attribute Types & Features for the complete feature catalog.
Every AI-generated attribute needs two things: the prompt defined in aiPrompts[] and the ai_automatic_generated_by feature on the attribute type linking to that prompt key. Missing either one disables AI generation for that attribute.
Diagram generation prompts
The loader processes prompts from two separate sources:
- assets.aiPrompts[] — explicit prompt definitions (the primary source)
- assets.graphMetamodels[].diagramGenerationPrompt — inline prompt on graph metamodels
For metamodel prompts, the loader generates the key automatically using the pattern diagram_generation.{metamodelKey}. These prompts drive the Graph Metamodels diagram generation feature.
```json
{
  "graphMetamodels": [
    {
      "key": "my_lineage",
      "diagramGenerationPrompt": "Generate a data lineage diagram showing..."
    }
  ]
}
```
This creates a prompt with key diagram_generation.my_lineage in the database. The DiagramGenerationService reads it through a foreign key relationship on the metamodel entity.
Loader behavior
The AI_C_PROMPT loader executes at position 85 in the installation pipeline — after search object type mappings (84) and before graph metamodels (86).
| Aspect | Behavior |
|---|---|
| Loader enum | AI_C_PROMPT (position 85) |
| Uniqueness key | EnumKey (the prompt key) |
| Validation | None — the loader performs no validation |
| Upsert logic | Compares by EnumKey, creates or updates PromptTemplate |
| Cache | Case-insensitive dictionary keyed by EnumKey |
| Database | ML.C_Prompt table (Prompt_Id, Enum_Key, Prompt_Template) |
The loader collects prompts from both aiPrompts[] and graphMetamodels[].diagramGenerationPrompt, merges them into a single collection, and processes them as a batch.
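The dual-source collection step can be sketched as follows. This is an assumption-laden illustration of the merge logic, not the loader's actual implementation; in particular, the precedence on a key collision between the two sources is not specified here.

```python
# Sketch of the dual-source prompt collection described above.
# Function name and dict shapes are assumptions, not the real loader code.
def collect_prompts(assets: dict) -> dict:
    prompts = {}
    # Primary source: explicit definitions in aiPrompts[].
    for p in assets.get("aiPrompts", []):
        prompts[p["key"]] = p["promptTemplate"]
    # Secondary source: inline metamodel prompts with auto-generated keys.
    for mm in assets.get("graphMetamodels", []):
        if "diagramGenerationPrompt" in mm:
            prompts[f"diagram_generation.{mm['key']}"] = mm["diagramGenerationPrompt"]
    return prompts

assets = {
    "aiPrompts": [
        {"key": "core_snowflake_table_core_description",
         "promptTemplate": "You are a data governance expert..."}
    ],
    "graphMetamodels": [
        {"key": "my_lineage",
         "diagramGenerationPrompt": "Generate a data lineage diagram..."}
    ],
}
print(sorted(collect_prompts(assets)))
# ['core_snowflake_table_core_description', 'diagram_generation.my_lineage']
```

This also makes the key-collision gotcha concrete: a manual aiPrompts[] entry named diagram_generation.my_lineage would land on the same dictionary key as the auto-generated one.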
Runtime: how prompts execute
AI prompts are consumed by two runtime systems:
1. AI attribute generation
The AiAttributeGenerationOperation worker reads the ai_automatic_generated_by feature from attribute types. For each object instance, it:
- Looks up the prompt by EnumKey from the cache
- Resolves template expressions against the object’s live metadata
- Sends the resolved prompt to the AI model
- Writes the generated text back as the attribute value
This runs as a background worker operation, not during package installation.
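The four steps above can be sketched as a single function. Every class and function in this sketch is a stand-in for illustration; the real worker API, resolver, and model client are internal to Dawiso.

```python
# High-level sketch of the generation steps; all names are stand-ins.
def resolve_expressions(template: str, obj: dict) -> str:
    # Toy resolver: only handles {{currentObject}.name()} for brevity.
    return template.replace("{{currentObject}.name()}", obj["name"])

class FakeModel:
    """Stand-in for the AI model client."""
    def complete(self, prompt: str) -> str:
        return f"Generated description based on: {prompt}"

def generate_attribute(obj, attribute_key, prompt_key, prompt_cache, model):
    template = prompt_cache[prompt_key.lower()]    # 1. look up prompt by EnumKey
    resolved = resolve_expressions(template, obj)  # 2. resolve against live metadata
    text = model.complete(resolved)                # 3. send to the AI model
    obj[attribute_key] = text                      # 4. write the value back

cache = {"core_snowflake_table_core_description": "Describe table {{currentObject}.name()}."}
table = {"name": "ORDERS"}
generate_attribute(table, "core_description", "core_snowflake_table_core_description",
                   cache, FakeModel())
print(table["core_description"])
```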
2. Diagram generation
The DiagramGenerationService reads the prompt template through the metamodel’s foreign key to CPrompt. It replaces {metamodelId} with the actual metamodel ID and sends the prompt to the AI model to generate diagram layouts.
Prompt structure patterns
The 114 prompts in the standard library follow a consistent structure:
Standard structure
1. Role statement
"You are a data governance expert creating concise, business-focused descriptions..."
2. Anti-hallucination guardrails
"You MUST base your description ONLY on the information provided below..."
"Do NOT invent specific systems, tools, teams, or processes..."
3. Metadata section (with template expressions)
```
**Core Information:**
- Name: {{currentObject}.name()}
- Schema: {{currentObject}.parent(1).name()}
- Description: {{currentObject}.attribute('core_description_scanned')}

**Classification:**
Business Areas: {{currentObject}.relations('core_code_lists_has_area')...}
Labels: {{currentObject}.relations('core_code_lists_has_label')...}

**Lineage:**
Data Sources: {{currentObject}.relations('core_dataSource')...}
Consumers: {{currentObject}.relations('core_datasource')...}
```
4. Output format specification
"Generate a SINGLE-PARAGRAPH HTML description. No headings. No lists.
2–4 sentences (max 100 words)."
5. Constraints
- No bullet points in output
- No object enumeration
- No superlatives or speculation
Output format by complexity
| Object complexity | Format | Length | Sentences |
|---|---|---|---|
| Simple (columns, fields) | Single <p> | ~100 words | 2–4 |
| Medium (tables, views, datasets) | Single <p> or <h3> sections | ~150 words | 6–10 |
| Complex (schemas, workspaces, projects) | Multi-section with <h3> | 150–200 words | 8–12 |
Common metadata references
Most prompts reference these core attributes and relations:
| Category | Typical expressions |
|---|---|
| Identity | name(), parent(1).name(), parent(2).name() |
| Description | attribute('core_description_scanned') |
| Classification | relations('core_code_lists_has_area'), relations('core_code_lists_has_label') |
| Lineage | relations('core_dataSource'), relations('core_datasource') |
| Documentation | relations('core_described_by') |
| Data products | relations('core_data_product_contains') |
Gotchas
Zero loader validation. The AI_C_PROMPT loader performs no validation on prompt content. A template with invalid expressions (missing closing braces, nonexistent attribute keys, misspelled relation types) installs without error. The failure surfaces only at runtime when the AI attribute generation worker tries to resolve the expressions. Test prompts against real objects before deploying.
Dual prompt source merging. The loader collects prompts from both assets.aiPrompts[] and assets.graphMetamodels[].diagramGenerationPrompt. If a metamodel defines a diagramGenerationPrompt, the loader auto-generates a key diagram_generation.{metamodelKey}. A manually defined prompt in aiPrompts[] with the same key would conflict. Avoid creating prompt keys that start with diagram_generation..
Cache is case-insensitive, database lookup depends on collation. The prompt cache dictionary uses StringComparer.OrdinalIgnoreCase, so Core_Snowflake_Table and core_snowflake_table resolve to the same cache entry during package loading. The AiCPromptService.GetByEnumKey() method delegates to Entity Framework, where case sensitivity depends on the database collation. Use consistent lowercase keys to avoid ambiguity.
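The cache behavior can be emulated in Python for intuition; the real cache is a .NET dictionary built with StringComparer.OrdinalIgnoreCase, and this class is only an analogy.

```python
# Python analogy for the case-insensitive prompt cache; the real cache is a
# .NET dictionary with StringComparer.OrdinalIgnoreCase.
class CaseInsensitiveCache(dict):
    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

cache = CaseInsensitiveCache()
cache["Core_Snowflake_Table"] = "template A"
cache["core_snowflake_table"] = "template B"  # same entry: overwrites A
print(cache["CORE_SNOWFLAKE_TABLE"])
# template B
```

Two keys differing only by case silently collapse into one cache entry, which is why consistent lowercase keys are recommended.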
Missing ai_automatic_generated_by feature. Defining a prompt in aiPrompts[] without linking it to an attribute type via the ai_automatic_generated_by feature means the prompt exists in the database but is never used. No warning is logged. Verify that every prompt has a corresponding feature on at least one object type attribute.
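Because no warning is logged, an audit script is the practical safeguard. The sketch below is a hypothetical check over the package JSON structure shown in this document; the function name and traversal are assumptions.

```python
# Hypothetical audit: find prompts defined in aiPrompts[] that no attribute
# type references via the ai_automatic_generated_by feature.
def unused_prompts(package: dict) -> set:
    defined = {p["key"] for p in package.get("aiPrompts", [])}
    referenced = set()
    for ot in package.get("objectTypes", []):
        for at in ot.get("attributeTypes", []):
            for feature in at.get("features", []):
                if feature.get("key") == "ai_automatic_generated_by":
                    referenced.add(feature["value"])
    return defined - referenced

package = {
    "aiPrompts": [
        {"key": "core_my_table_desc", "promptTemplate": "..."},
        {"key": "core_orphan_prompt", "promptTemplate": "..."},
    ],
    "objectTypes": [
        {"key": "table", "attributeTypes": [
            {"key": "core_description", "features": [
                {"key": "ai_automatic_generated_by", "value": "core_my_table_desc"}
            ]}
        ]}
    ],
}
print(unused_prompts(package))
# {'core_orphan_prompt'}
```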
Referenced attributes must exist. Template expressions like {{currentObject}.attribute('my_custom_attr')} resolve against the object’s actual attributes at runtime. If the attribute key does not exist on the object type, the expression returns empty. The prompt still executes, but the AI model receives incomplete context, producing lower-quality output. Cross-reference all attribute keys in the template against the object type definition.
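Cross-referencing attribute keys can also be scripted. The sketch below extracts every attribute('...') key from a template with a regex and compares it against a set of defined keys; the regex and function are hypothetical tooling, not part of Dawiso.

```python
import re

# Hypothetical check: flag attribute keys a template references that the
# object type does not define.
ATTR_EXPR = re.compile(r"\.attribute\('([^']+)'\)")

def missing_attribute_keys(template: str, defined_keys: set) -> set:
    return set(ATTR_EXPR.findall(template)) - defined_keys

template = (
    "- DDL: {{currentObject}.attribute('core_snowflake_ddl')}\n"
    "- Notes: {{currentObject}.attribute('my_custom_attr')}"
)
print(missing_attribute_keys(template, {"core_snowflake_ddl"}))
# {'my_custom_attr'}
```

Running such a check in CI against each object type definition catches misspelled or undefined attribute keys before the prompt ships.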
Complete example
A connector package with one AI prompt for table descriptions:
```json
{
  "aiPrompts": [
    {
      "key": "core_my_connector_table_core_description",
      "promptTemplate": "You are a data governance expert creating concise, business-focused descriptions for database tables in an enterprise data catalog.\n\nYou MUST base your description ONLY on the information provided below. Do NOT invent specific systems, tools, teams, or processes that are not clearly implied by this information.\n\n**Core Table Information:**\n- Table name: {{currentObject}.name()}\n- Schema: {{currentObject}.parent(1).name()}\n- Database: {{currentObject}.parent(2).name()}\n- Scanned description: {{currentObject}.attribute('core_description_scanned')}\n- Owner: {{currentObject}.attribute('owner')}\n- Row count: {{currentObject}.attribute('row_count')}\n\n**Classification:**\nBusiness Areas: {{currentObject}.relations('core_code_lists_has_area').forEach(x => appendLine('- {0}', x.name()))}\nLabels: {{currentObject}.relations('core_code_lists_has_label').forEach(x => appendLine('- {0}', x.name()))}\n\n**Lineage:**\nData Sources: {{currentObject}.relations('core_dataSource').forEach(x => appendLine('- {0}', x.name()))}\nConsumers: {{currentObject}.relations('core_datasource').forEach(x => appendLine('- {0}', x.name()))}\n\n**Columns:**\n{{currentObject}.children('column').forEach(x => appendLine('- {0}', x.name()))}\n\n---\n\nGenerate a SINGLE-PARAGRAPH HTML description using only `<p>` tags. No headings, no lists, no tables. 2-4 sentences, max 100 words. Be conservative and factual."
    }
  ]
}
```
To activate this prompt, add the ai_automatic_generated_by feature to the core_description attribute on the table object type:
```json
{
  "objectTypes": [
    {
      "key": "table",
      "attributeTypes": [
        {
          "key": "core_description",
          "features": [
            {
              "key": "ai_automatic_generated_by",
              "value": "core_my_connector_table_core_description"
            }
          ]
        }
      ]
    }
  ]
}
```
The AI attribute generation worker picks up objects of type table, resolves the template expressions against each instance’s live metadata, and writes the generated description into the core_description attribute.