From 38484613d00536b0c05e8d3602130f6db449deb5 Mon Sep 17 00:00:00 2001 From: gowtham raj j Date: Mon, 4 May 2026 21:12:16 +0530 Subject: [PATCH 1/2] v1.3.7: (#33) --- .tagignore | 16 --------- AGENTS.md | 6 ++-- CHANGELOG.md | 7 ++++ CONTRIBUTING.md | 4 +-- DEVELOPMENT_SETUP.md | 6 ++-- NOTICE | 6 ++-- README.md | 24 ++++++------- ROADMAP.md | 2 +- docs/LINEAGE.md | 4 +-- docs/SETTINGS.md | 8 ++--- docs/TUTORIAL.md | 34 +++++++++---------- docs/VISUAL_EDITOR.md | 6 ++-- docs/integrations/dbt-integration.md | 4 +-- docs/integrations/lightdash-integration.md | 2 +- docs/models/README.md | 4 +-- docs/setup/README.md | 2 +- docs/setup/lightdash-configuration.md | 4 +-- docs/setup/lightdash-local-setup.md | 16 ++++----- docs/setup/setup.md | 18 +++++----- docs/setup/trino-local-setup.md | 2 +- package-lock.json | 6 ++-- package.json | 26 +++++++------- .../model.incremental_strategy.schema.json | 34 +++++++++++++------ src/extension.ts | 2 +- src/services/__tests__/agent.test.ts | 2 +- src/services/constants.ts | 2 +- src/services/framework/index.ts | 4 +-- src/services/types/config.ts | 2 +- .../model.incremental_strategy.schema.d.ts | 2 +- .../types/model.materialization.schema.d.ts | 2 +- src/shared/schema/types/model.schema.d.ts | 2 +- .../model.type.int_join_column.schema.d.ts | 2 +- .../model.type.int_join_models.schema.d.ts | 2 +- .../model.type.int_lookback_model.schema.d.ts | 2 +- .../model.type.int_rollup_model.schema.d.ts | 2 +- .../model.type.int_select_model.schema.d.ts | 2 +- .../model.type.int_union_models.schema.d.ts | 2 +- .../model.type.stg_select_model.schema.d.ts | 2 +- .../model.type.stg_select_source.schema.d.ts | 2 +- .../model.type.stg_union_sources.schema.d.ts | 2 +- src/shared/web/constants.ts | 2 +- templates/_AGENTS.md | 16 ++++----- .../skills/dj-create-new-model/_SKILL.md | 2 +- tests/README.md | 8 ++--- web/package.json | 2 +- 45 files changed, 155 insertions(+), 152 deletions(-) delete mode 100644 .tagignore diff --git a/.tagignore 
b/.tagignore deleted file mode 100644 index 8419a51..0000000 --- a/.tagignore +++ /dev/null @@ -1,16 +0,0 @@ -# Files/directories to ignore when determining if a tag should be created -docs/ -.github/ -.prettierignore -.prettierrc -.eslintrc.json -.eslintcache -.gitignore -.vscodeignore -DEVELOPMENT_SETUP.md -CONTRIBUTING.md -CODE_OF_CONDUCT.md -LICENSE.md -NOTICE -Makefile -.tagignore diff --git a/AGENTS.md b/AGENTS.md index 17d0c0b..3a84fed 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -4,7 +4,7 @@ This file provides guidance to AI coding agents working with code in this reposi ## Project Overview -DJ (dbt-json) Framework is a VS Code extension that revolutionizes dbt development through a structured, JSON-first approach. Users define dbt models and sources as validated `.model.json` and `.source.json` files that automatically generate corresponding SQL and YAML configurations. +DJ (Data JSON) Framework is a VS Code extension that revolutionizes dbt development through a structured, JSON-first approach. Users define dbt models and sources as validated `.model.json` and `.source.json` files that automatically generate corresponding SQL and YAML configurations. The extension provides a rich visual UI built with React, including interactive model and column lineage graphs, a visual model creation wizard, query result previews, and a data modeling canvas -- all rendered as VS Code webviews. @@ -280,7 +280,7 @@ All API message types defined in `src/shared/api/types.ts` with full TypeScript ## Project Structure ```text -vscode-dbt-json/ +dj/ ├── src/ │ ├── extension.ts # Entry point - activates Coder service │ ├── admin.ts # Platform utilities (paths, process runners) @@ -671,7 +671,7 @@ LIGHTDASH_TRINO_HOST=host.docker.internal # Trino host override for Docker The extension includes built-in support for AI coding agents to help users create DJ-compliant dbt models. 
-**Project-level AGENTS.md generation:** When installed in a dbt project, the extension generates an `AGENTS.md` file at `.agents/dj/AGENTS.md` in the workspace root. This file contains DJ framework-specific instructions (model types, JSON schema structure, naming conventions, column definitions) that users can reference in their LLM workflows to generate valid `.model.json` and `.source.json` files. The generated file is tailored to the project's configuration and available models/sources. +**Project-level AGENTS.md generation:** When installed in a dbt project, the extension generates an `AGENTS.md` file at `.agents/dj/AGENTS.md` in the workspace root. This file contains DJ (Data JSON) Framework-specific instructions (model types, JSON schema structure, naming conventions, column definitions) that users can reference in their LLM workflows to generate valid `.model.json` and `.source.json` files. The generated file is tailored to the project's configuration and available models/sources. **Skill files (`.agents/skills/`):** The extension writes agent-agnostic skill directories to the workspace root's `.agents/skills/` directory, following the [Agent Skills](https://agentskills.io) open standard. Each skill is a subdirectory containing a `SKILL.md` file with YAML frontmatter (`name` and `description`) and markdown instructions (e.g., `.agents/skills/dj-create-new-model/SKILL.md`). Skill templates are bundled with the extension in `templates/skills/` and copied to the workspace at activation time. 
diff --git a/CHANGELOG.md b/CHANGELOG.md index 169555d..fd9faf2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,12 @@ # Change Log +## 1.3.7 + +### Naming alignment + +- Renamed framework to **DJ (Data JSON) Framework** to better reflect its JSON-first, schema-driven approach +- Updated repository URL from `Workday/vscode-dbt-json` to `Workday/dj` + ## 1.3.6 - **CTE exclude/include flags now mirror their main-model counterparts and inherit from the model** — a CTE accepts `exclude_date_filter`, `exclude_daily_filter`, `exclude_portal_partition_columns`, `exclude_portal_source_count`, and `include_full_month` with the same semantics as the corresponding main-model flags. Resolution is uniform: CTE override > model value > false. Set `exclude_portal_partition_columns: true` on the model to skip partition auto-injection in every CTE without per-CTE repetition; set it on a single CTE to override only that CTE. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 208cff1..f04b3b9 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,6 +1,6 @@ -# Contributing to Workday DJ (dbt-json framework) +# Contributing to Workday DJ (Data JSON) Framework -Thank you for your interest in contributing to the Workday DJ VS Code extension! This document provides guidelines and information for contributors. +Thank you for your interest in contributing to the Workday DJ (Data JSON) Framework VS Code extension! This document provides guidelines and information for contributors. ## Table of Contents diff --git a/DEVELOPMENT_SETUP.md b/DEVELOPMENT_SETUP.md index b8604f6..d39cd0d 100644 --- a/DEVELOPMENT_SETUP.md +++ b/DEVELOPMENT_SETUP.md @@ -1,6 +1,6 @@ # Development Setup -This guide covers setting up your development environment for contributing to the DJ (dbt-json) VS Code extension. +This guide covers setting up your development environment for contributing to the DJ (Data JSON) Framework VS Code extension. 
## Table of Contents @@ -32,8 +32,8 @@ For development, you'll also need: 1. **Clone and install dependencies:** ```bash -git clone https://github.com/Workday/vscode-dbt-json.git -cd vscode-dbt-json +git clone https://github.com/Workday/dj.git +cd dj npm install ``` diff --git a/NOTICE b/NOTICE index 754b44d..21c769e 100644 --- a/NOTICE +++ b/NOTICE @@ -1,6 +1,6 @@ # NOTICE -Workday DJ (dbt-json framework) VS Code Extension +Workday DJ (Data JSON) Framework VS Code Extension Copyright (c) 2025 Workday, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at @@ -16,8 +16,8 @@ Licensed under the Apache License, Version 2.0 (the "License"); you may not use For questions about this project, please use: -- **GitHub Issues**: https://github.com/Workday/vscode-dbt-json/issues -- **GitHub Discussions**: https://github.com/Workday/vscode-dbt-json/discussions +- **GitHub Issues**: https://github.com/Workday/dj/issues +- **GitHub Discussions**: https://github.com/Workday/dj/discussions --- diff --git a/README.md b/README.md index 6345c31..8ce75e0 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,9 @@ -# DJ (dbt-json) Framework +# DJ (Data JSON) Framework [![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) -[![GitHub Release](https://img.shields.io/github/v/release/Workday/vscode-dbt-json)](https://github.com/Workday/vscode-dbt-json/releases) +[![GitHub Release](https://img.shields.io/github/v/release/Workday/dj)](https://github.com/Workday/dj/releases) [![dbt](https://img.shields.io/badge/dbt-Core-orange.svg)](https://www.getdbt.com/) -[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/Workday/vscode-dbt-json/badge)](https://scorecard.dev/viewer/?uri=github.com/Workday/vscode-dbt-json) +[![OpenSSF 
Scorecard](https://api.scorecard.dev/projects/github.com/Workday/dj/badge)](https://scorecard.dev/viewer/?uri=github.com/Workday/dj) DJ is a VS Code extension that revolutionizes dbt development through a structured, JSON-first approach. Define your dbt models and sources as validated `.model.json` and `.source.json` files that automatically generate corresponding SQL and YAML configurations. @@ -15,7 +15,7 @@ DJ is a VS Code extension that revolutionizes dbt development through a structur - **Integrated BI Preview** - One-click Lightdash preview from your models @@ -35,7 +35,7 @@ DJ is a VS Code extension that revolutionizes dbt development through a structur -![DBT Stack](https://github.com/Workday/vscode-dbt-json/blob/main/assets/images/dj_stack.png?raw=true) +![DBT Stack](https://github.com/Workday/dj/blob/main/assets/images/dj_stack.png?raw=true) _DJ integrates seamlessly with your modern data stack - from VS Code to dbt, Trino, and Lightdash._ @@ -115,14 +115,14 @@ dbt parse # this generates the manifest.json file ### 4. Install the Extension -1. Download the latest `.vsix` file from the [GitHub Releases page](https://github.com/Workday/vscode-dbt-json/releases) +1. Download the latest `.vsix` file from the [GitHub Releases page](https://github.com/Workday/dj/releases) 2. Install the extension in VS Code: - Open VS Code - Press `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux) - Type "Extensions: Install from VSIX..." - Select the downloaded `.vsix` file -Note: Want to build DJ extension from source? See the [DEVELOPMENT_SETUP.md](DEVELOPMENT_SETUP.md) for development setup. +Note: Want to build the DJ (Data JSON) Framework extension from source? See the [DEVELOPMENT_SETUP.md](DEVELOPMENT_SETUP.md) for development setup. ### 5. 
Configure VS Code Settings @@ -131,7 +131,7 @@ The extension can be configured in the VS Code settings or by adding to the `.vs To configure the extension via VS Code settings, - Open the VS Code settings (Cmd/Ctrl + ,) -- Under "Extensions", select "DJ (dbt-json) Framework". +- Under "Extensions", select "DJ (Data JSON) Framework". - Configure the extension settings as needed. To configure the extension via `.vscode/settings.json`, add the configuration options as needed to the file: @@ -183,7 +183,7 @@ See the [Settings Reference](docs/SETTINGS.md#when-settings-take-effect) for com ### 6. Create Your First Source -1. Look for the DJ extension panel in the sidebar and click on it. +1. Look for the DJ (Data JSON) Framework extension panel in the sidebar and click on it. 2. Under "Actions", click on "Create Source". 3. Fill the form: - **Select Project**: Choose your dbt project @@ -205,7 +205,7 @@ DJ offers multiple ways to build models - choose what works best for you: ### 8. Create Your First Model -1. Look for the DJ extension panel in the sidebar and click on it. +1. Look for the DJ (Data JSON) Framework extension panel in the sidebar and click on it. 2. Under "Actions", click on "Create Model". 3. Fill the form: - **Select Project**: Choose your dbt project @@ -293,8 +293,8 @@ DJ supports the following model types: ## Support & Community -- **Discussions**: [GitHub Discussions](https://github.com/Workday/vscode-dbt-json/discussions) -- **Issues**: [GitHub Issues](https://github.com/Workday/vscode-dbt-json/issues) +- **Discussions**: [GitHub Discussions](https://github.com/Workday/dj/discussions) +- **Issues**: [GitHub Issues](https://github.com/Workday/dj/issues) - **Documentation**: [Complete Docs](docs/) ## Contributing diff --git a/ROADMAP.md b/ROADMAP.md index 4cd955e..1bf9e36 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -22,7 +22,7 @@ ## Suggestions -Suggestions are welcome! 
Please feel free to suggest features or report issues in the [GitHub Issues](https://github.com/Workday/vscode-dbt-json/issues). +Suggestions are welcome! Please feel free to suggest features or report issues in the [GitHub Issues](https://github.com/Workday/dj/issues). ## Contributing diff --git a/docs/LINEAGE.md b/docs/LINEAGE.md index d94b8f8..602091c 100644 --- a/docs/LINEAGE.md +++ b/docs/LINEAGE.md @@ -27,7 +27,7 @@ The Data Explorer visualizes your dbt models as an interactive lineage graph, sh 2. Select "DJ: Data Explorer" @@ -199,7 +199,7 @@ The Data Explorer includes panels for viewing query results and compilation logs 2. Column Lineage opens for that model diff --git a/docs/SETTINGS.md b/docs/SETTINGS.md index 7df3216..941b126 100644 --- a/docs/SETTINGS.md +++ b/docs/SETTINGS.md @@ -1,6 +1,6 @@ -# DJ Extension Settings Reference +# DJ (Data JSON) Framework Settings Reference -Complete guide to configuring the DJ VS Code extension. +Complete guide to configuring the DJ (Data JSON) Framework VS Code extension. ## Quick Reference @@ -211,7 +211,7 @@ Complete guide to configuring the DJ VS Code extension. ``` - Options: `"debug"` | `"info"` | `"warn"` | `"error"` -- View logs: `View → Output → DJ Extension` +- View logs: `View → Output → DJ` - Use `debug` for troubleshooting --- @@ -251,7 +251,7 @@ Run this command (`Cmd/Ctrl+Shift+P` → `DJ: Refresh Projects`) after changing: 1. Check [When Settings Take Effect](#when-settings-take-effect) section 2. Run appropriate command (Refresh Projects or Sync) -3. Check Output panel (`View → Output → DJ Extension`) for validation errors +3. Check Output panel (`View → Output → DJ`) for validation errors ### Path Validation Errors? diff --git a/docs/TUTORIAL.md b/docs/TUTORIAL.md index 6c758cc..ef30986 100644 --- a/docs/TUTORIAL.md +++ b/docs/TUTORIAL.md @@ -28,7 +28,7 @@ Interactive guided mode that walks you through creating specific model types: 6. 
Tutorial automatically fills in example data and advances through steps @@ -106,7 +106,7 @@ dbt seed dbt parse ``` -This creates `target/manifest.json` that the DJ extension needs to provide IntelliSense and validation. +This creates `target/manifest.json` that the DJ (Data JSON) Framework needs to provide IntelliSense and validation. Now you have real data and the extension is ready to work! @@ -114,9 +114,9 @@ Now you have real data and the extension is ready to work! Before creating models, we need to define sources for our raw data. -1. **Use the DJ extension** to create sources incrementally: +1. **Use the DJ (Data JSON) Framework extension** to create sources incrementally: - - Look for the DJ extension panel in the sidebar and click on it. + - Look for the DJ (Data JSON) Framework extension panel in the sidebar and click on it. - Under "Actions", click on "Create Source". - In the extension UI, fill the form: - **Select Project**: `jaffle_shop` @@ -153,7 +153,7 @@ Now the extension can provide IntelliSense for your sources! ## Step 3: Configure Groups for Model Organization -Before creating models, we need to define groups that will be available in the DJ extension UI. +Before creating models, we need to define groups that will be available in the DJ (Data JSON) Framework extension UI. 1. **Business-focused groups** in the example project: @@ -208,15 +208,15 @@ Before creating models, we need to define groups that will be available in the D dbt parse ``` -> **Important**: Without groups configured and `dbt parse` run, the DJ extension UI won't populate the Group dropdown, and you cannot create models. +> **Important**: Without groups configured and `dbt parse` run, the DJ (Data JSON) Framework extension UI won't populate the Group dropdown, and you cannot create models. ## Step 4: Create Your First Staging Model -Let's clean up the raw customer data using the DJ extension's UI. 
+Let's clean up the raw customer data using the DJ (Data JSON) Framework extension's UI. -1. **Use the DJ extension** to create a new model: +1. **Use the DJ (Data JSON) Framework extension** to create a new model: - - Use the DJ extension UI (available through the extension panel) + - Use the DJ (Data JSON) Framework extension UI (available through the extension panel) - The extension UI will show: - **Select Project**: `jaffle_shop` - **Select Model Type**: `Staging Select Source` @@ -264,7 +264,7 @@ Let's clean up the raw customer data using the DJ extension's UI. dbt run --select stg__customers__profiles__clean ``` -**What happened?** The DJ extension: +**What happened?** The DJ (Data JSON) Framework: - Used the manifest.json to provide IntelliSense for available sources - Showed `development__jaffle_shop_dev_seeds.raw_customers` as an available source option @@ -278,9 +278,9 @@ Let's clean up the raw customer data using the DJ extension's UI. Now let's calculate customer order summaries. -1. **Use the DJ extension** to create an intermediate model: +1. **Use the DJ (Data JSON) Framework extension** to create an intermediate model: - - Look for the DJ extension panel in the sidebar and click on it. + - Look for the DJ (Data JSON) Framework extension panel in the sidebar and click on it. - Under "Actions", click on "Create Model". - In the extension UI, fill the form: - **Select Project**: `jaffle_shop` @@ -349,7 +349,7 @@ Now let's calculate customer order summaries. dbt run --select int__sales__orders__enriched ``` -**What happened?** The DJ extension: +**What happened?** The DJ (Data JSON) Framework: - Used manifest.json to provide IntelliSense for available models - Generated SQL with proper JOIN syntax and column selection @@ -362,9 +362,9 @@ dbt run --select int__sales__orders__enriched Finally, let's create a business-friendly customer analytics table. -1. **Use the DJ extension** to create a mart model: +1. 
**Use the DJ (Data JSON) Framework extension** to create a mart model: - - Look for the DJ extension panel in the sidebar and click on it. + - Look for the DJ (Data JSON) Framework extension panel in the sidebar and click on it. - Under "Actions", click on "Create Model". - In the extension UI, fill the form: - **Select Project**: `jaffle_shop` @@ -426,7 +426,7 @@ Finally, let's create a business-friendly customer analytics table. dbt run --select mart__sales__reporting__revenue ``` -**What happened?** The DJ extension: +**What happened?** The DJ (Data JSON) Framework: - Provided IntelliSense for available intermediate models - Validated your mart configuration against the schema @@ -470,7 +470,7 @@ Browse the **[Model Types Documentation](models/README.md)** for: ### Advanced Features -- **UI-driven model creation**: Use the DJ extension UI for guided setup +- **UI-driven model creation**: Use the DJ (Data JSON) Framework extension UI for guided setup - **IntelliSense**: Get autocomplete for models, sources, and columns from manifest.json - **Schema validation**: Real-time validation against JSON schemas - **Trino integration**: Browse data catalog and get column suggestions diff --git a/docs/VISUAL_EDITOR.md b/docs/VISUAL_EDITOR.md index 5b90f28..f925ab3 100644 --- a/docs/VISUAL_EDITOR.md +++ b/docs/VISUAL_EDITOR.md @@ -5,7 +5,7 @@ The Data Modeling Visual Editor provides an interactive, node-based interface for creating and editing dbt models. It offers a visual alternative to JSON configuration while generating identical outputs. 
@@ -433,7 +433,7 @@ See [Lightdash Integration](integrations/lightdash-integration.md) for complete ### Creating a Join Model @@ -480,7 +480,7 @@ DJ supports two types of time-series analysis models: **Rollup** and **Lookback* #### Example: Rollup Model (Time Bucketing) diff --git a/docs/integrations/dbt-integration.md b/docs/integrations/dbt-integration.md index 15c45de..a8e6d29 100644 --- a/docs/integrations/dbt-integration.md +++ b/docs/integrations/dbt-integration.md @@ -45,7 +45,7 @@ Compiles model to SQL without executing: 2. Select "Compile Model" @@ -73,7 +73,7 @@ Opens an interactive UI for running dbt models with configurable options. 2. Click "Run Model" item diff --git a/docs/integrations/lightdash-integration.md b/docs/integrations/lightdash-integration.md index 21ed44e..8fb2377 100644 --- a/docs/integrations/lightdash-integration.md +++ b/docs/integrations/lightdash-integration.md @@ -41,7 +41,7 @@ Opens an interactive UI for managing Lightdash preview servers. 2. Click "Lightdash Preview" item diff --git a/docs/models/README.md b/docs/models/README.md index e3a2268..de8059f 100644 --- a/docs/models/README.md +++ b/docs/models/README.md @@ -433,7 +433,7 @@ Set `materialization.strategy.type` to one of the following (or rely on the exte | `append` | Inserts new rows with no de-duplication. Fastest. | Upstream must guarantee no duplicates in the new slice. | | `delete+insert` | Partition-safe upsert. Safe default. | `unique_key` auto-derived from partition columns when omitted. Works on Delta Lake, Hive, and Iceberg. | | `merge` | Row-level upsert on `unique_key`. | **dbt-trino requires Iceberg format** on the target table. On Delta Lake / Hive use `delete+insert` instead. | -| `overwrite_existing_partitions` | Drops and rewrites only the partitions present in the new slice. `unique_key` is not applicable, the consumer macro derives the partition list from the new slice itself, so the schema rejects `unique_key` on this strategy. 
| **Requires a custom dbt macro in your project** (e.g. `get_incremental_overwrite_existing_partitions_sql`). The DJ extension does NOT ship this macro and dbt-trino does NOT provide it natively. If your project does not define it, use `delete+insert` with a partition column as `unique_key`, it produces equivalent behavior for daily/monthly partitioned models. | +| `overwrite_existing_partitions` | Drops and rewrites only the partitions present in the new slice. `unique_key` is not applicable; the consumer macro derives the partition list from the new slice itself, so the schema rejects `unique_key` on this strategy. | **Requires a custom dbt macro in your project** (e.g. `get_incremental_overwrite_existing_partitions_sql`). The DJ (Data JSON) Framework does NOT ship this macro and dbt-trino does NOT provide it natively. If your project does not define it, use `delete+insert` with a partition column as `unique_key`; it produces equivalent behavior for daily/monthly partitioned models. | | `dj_iceberg_partition_overwrite` | Drops and rewrites only the partitions present in the new slice on **Iceberg** tables. `unique_key` is not applicable, the macro derives the partition list from the new slice itself by reading `properties.partitioning`, so the schema rejects `unique_key`. | **Shipped by DJ.** The dispatch macro `get_incremental_dj_iceberg_partition_overwrite_sql` lives in `macros/strategies.sql` and is auto-copied to `/macros/_ext_/strategies.sql` on **DJ: Refresh Projects**. **Requires Iceberg format** on the target table, set `materialization.format: "iceberg"` or the project var `storage_type: iceberg`. DJ flags non-Iceberg use in the Problems tab. On Delta Lake / Hive use `delete+insert` instead. | ### Data Quality @@ -449,4 +449,4 @@ Set `materialization.strategy.type` to one of the following (or rely on the exte --- -_This documentation provides comprehensive coverage of all dbt model types supported by the DJ extension.
Each model type includes detailed examples, best practices, and integration guidance to help you build effective data pipelines._ +_This documentation provides comprehensive coverage of all dbt model types supported by the DJ (Data JSON) Framework. Each model type includes detailed examples, best practices, and integration guidance to help you build effective data pipelines._ diff --git a/docs/setup/README.md b/docs/setup/README.md index 7900be4..2d8ee3a 100644 --- a/docs/setup/README.md +++ b/docs/setup/README.md @@ -1,6 +1,6 @@ # Setup Documentation -Complete setup guides for DJ (dbt-json) framework extension. +Complete setup guides for the DJ (Data JSON) Framework extension. ## Getting Started diff --git a/docs/setup/lightdash-configuration.md b/docs/setup/lightdash-configuration.md index 3c2d05c..5afc9fd 100644 --- a/docs/setup/lightdash-configuration.md +++ b/docs/setup/lightdash-configuration.md @@ -1,6 +1,6 @@ # Lightdash Configuration -The DJ extension provides flexible configuration options for Lightdash integration, allowing you to work with different dbt project structures and multiple projects. +The DJ (Data JSON) Framework provides flexible configuration options for Lightdash integration, allowing you to work with different dbt project structures and multiple projects. **Related Documentation:** @@ -184,7 +184,7 @@ If both are running in Docker containers: #### How It Works -The DJ extension reads `LIGHTDASH_TRINO_HOST` and passes it to dbt via the `DBT_HOST` environment variable when creating Lightdash previews. The dbt profiles.yml uses this to override the connection host. +The DJ (Data JSON) Framework reads `LIGHTDASH_TRINO_HOST` and passes it to dbt via the `DBT_HOST` environment variable when creating Lightdash previews. The dbt profiles.yml uses this to override the connection host. 
**Important**: Your normal dbt commands (running directly on your Mac) will continue to use `localhost` because the environment variable override only applies to the Lightdash preview subprocess. diff --git a/docs/setup/lightdash-local-setup.md b/docs/setup/lightdash-local-setup.md index c7ab1e4..38dfcc8 100644 --- a/docs/setup/lightdash-local-setup.md +++ b/docs/setup/lightdash-local-setup.md @@ -1,6 +1,6 @@ # Local Lightdash Setup Guide -Quick setup guide for running Lightdash locally with the DJ extension. +Quick setup guide for running Lightdash locally with the DJ (Data JSON) Framework. Lightdash is an open-source BI tool that creates dashboards directly from your dbt models. @@ -54,7 +54,7 @@ docker compose up -d ### 2. Connect Trino and Lightdash Networks (If both in Docker) -**Note**: This is only required if both Trino and Lightdash are running in Docker containers. If Trino is running on your host machine, skip this step and use `host.docker.internal` in the DJ extension configuration (see below). +**Note**: This is only required if both Trino and Lightdash are running in Docker containers. If Trino is running on your host machine, skip this step and use `host.docker.internal` in the DJ (Data JSON) Framework extension configuration (see below). If both are in Docker, connect the networks so they can communicate: @@ -95,9 +95,9 @@ lightdash deploy --create This will prompt you to enter the project name and once created, you should see URL of the newly created project in the output. This will be something like ` http://localhost:8081/createProject/cli?projectUuid=48c5c6d6-3b63-4661-baa3-da07eb446769`. -### Configure DJ Extension for Docker +### Configure DJ (Data JSON) Framework Extension for Docker -When using the DJ extension to create Lightdash previews, you need to configure how Lightdash (running in Docker) connects to Trino. 
Choose the option that matches your setup: +When using the DJ (Data JSON) Framework extension to create Lightdash previews, you need to configure how Lightdash (running in Docker) connects to Trino. Choose the option that matches your setup: #### Option A: Trino running on host machine (Not in Docker) @@ -139,7 +139,7 @@ This allows Lightdash to connect to Trino using the container name `trino_defaul 3. **Reopen VS Code from terminal**: ```bash - cd /Users/gowthamraj.j/Development/Workday/vscode-dbt-json + cd /Users/gowthamraj.j/Development/Workday/dj code . ``` @@ -158,15 +158,15 @@ Open the URL of the Lightdash project you created from the output in the browser ## Troubleshooting -### Connection Errors with DJ Extension Preview +### Connection Errors with DJ (Data JSON) Framework Extension Preview -If you get connection errors when using the DJ extension's Lightdash preview feature, make sure you've configured the `LIGHTDASH_TRINO_HOST` environment variable as described in the "Configure DJ Extension for Docker" section above. +If you get connection errors when using the DJ (Data JSON) Framework extension's Lightdash preview feature, make sure you've configured the `LIGHTDASH_TRINO_HOST` environment variable as described in the "Configure DJ (Data JSON) Framework Extension for Docker" section above. The environment variable ensures the extension passes the correct Trino hostname to dbt when creating previews. ### Manual Connection Configuration (Lightdash UI) -If you're using `lightdash deploy` directly (not through the DJ extension) and have connection issues, you can manually update the connection settings in the Lightdash UI: +If you're using `lightdash deploy` directly (not through the DJ (Data JSON) Framework extension) and have connection issues, you can manually update the connection settings in the Lightdash UI: 1. Go to `Settings` -> `Project Settings` 2. 
Click on the `Connection Settings` tab diff --git a/docs/setup/setup.md b/docs/setup/setup.md index 171662d..8dcc806 100644 --- a/docs/setup/setup.md +++ b/docs/setup/setup.md @@ -1,13 +1,13 @@ -# DJ Extension Setup Guide +# DJ (Data JSON) Framework Setup Guide -Get the DJ extension up and running in VS Code. This guide covers extension installation and basic configuration. +Get the DJ (Data JSON) Framework extension up and running in VS Code. This guide covers extension installation and basic configuration. Table of Contents 1. [Install Prerequisites](#1-install-prerequisites) 2. [Install Trino CLI](#2-install-trino-cli) 3. [Configure Trino Connection](#3-configure-trino-connection) -4. [Install DJ Extension](#4-install-dj-extension) +4. [Install DJ (Data JSON) Framework Extension](#4-install-dj-data-json-framework-extension) 5. [First Launch](#5-first-launch) 6. [Configure Extension Settings](#6-configure-extension-settings) 7. [Optional: Lightdash Integration](#7-optional-lightdash-integration) @@ -26,7 +26,7 @@ Table of Contents **dbt Project Requirements:** -The DJ extension requires a dbt project with Python dependencies installed: +The DJ (Data JSON) Framework requires a dbt project with Python dependencies installed: - **dbt-core** (minimum version 1.0.0) - **dbt adapter** (e.g., dbt-trino, dbt-postgres, dbt-snowflake) that includes dbt-core @@ -46,7 +46,7 @@ pip install dbt-trino ## 2. Install Trino CLI -The DJ extension uses Trino CLI for data catalog integration and query execution. +The DJ (Data JSON) Framework uses Trino CLI for data catalog integration and query execution. > If you want to use Trino locally, see [Trino Local Setup](trino-local-setup.md) @@ -90,11 +90,11 @@ export TRINO_SCHEMA= - **Local Trino**: `localhost:8080` - **Enterprise**: `trino.company.com:443` -## 4. Install DJ Extension +## 4. Install DJ (Data JSON) Framework Extension 1. **Open VS Code** 2. **Go to Extensions** (Cmd/Ctrl + Shift + X) -3. **Search for "DJ"** or "dbt-json" +3.
**Search for "DJ"** or "Data JSON" 4. **Install the extension** > Note: Want to build from source? See the [DEVELOPMENT_SETUP.md](../DEVELOPMENT_SETUP.md) for development setup. @@ -142,7 +142,7 @@ The extension can be configured in the VS Code settings or by adding to the `.vs To configure the extension via VS Code settings, - Open the VS Code settings (Cmd/Ctrl + ,) -- Under "Extensions", select "DJ (dbt-json) Framework". +- Under "Extensions", select "DJ (Data JSON) Framework". - Configure the extension settings as needed. To configure the extension via `.vscode/settings.json`, add the configuration options as needed to the file: @@ -156,7 +156,7 @@ To configure the extension via `.vscode/settings.json`, add the configuration op ## Extension Settings -The DJ extension offers extensive configuration options. For complete documentation with examples, validation details, and troubleshooting, see the **[Settings Reference Guide](../SETTINGS.md)**. +The DJ (Data JSON) Framework offers extensive configuration options. For complete documentation with examples, validation details, and troubleshooting, see the **[Settings Reference Guide](../SETTINGS.md)**. ### Essential Settings diff --git a/docs/setup/trino-local-setup.md b/docs/setup/trino-local-setup.md index e04005c..8d25b5e 100644 --- a/docs/setup/trino-local-setup.md +++ b/docs/setup/trino-local-setup.md @@ -1,6 +1,6 @@ # Local Trino Setup Guide -This guide covers how to set up Trino locally for development with the DJ extension. If you already have Trino running (cloud or on-premises), return to the [main setup guide](setup.md#2-install-trino-cli) instead. +This guide covers how to set up Trino locally for development with the DJ (Data JSON) Framework. If you already have Trino running (cloud or on-premises), return to the [main setup guide](setup.md#2-install-trino-cli) instead. 
**Related Documentation:** diff --git a/package-lock.json b/package-lock.json index a28fe1d..f9159db 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "dj-framework", - "version": "1.3.6", + "version": "1.3.7", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "dj-framework", - "version": "1.3.6", + "version": "1.3.7", "license": "Apache-2.0", "workspaces": [ "web" @@ -15396,4 +15396,4 @@ } } } -} \ No newline at end of file +} diff --git a/package.json b/package.json index 60f8d75..70d7a43 100644 --- a/package.json +++ b/package.json @@ -1,16 +1,16 @@ { "name": "dj-framework", "author": "Workday", - "displayName": "DJ (dbt-json) Framework", - "description": "An extension for managing dbt projects using Workday's dbt-json (DJ) framework", + "displayName": "DJ (Data JSON) Framework", + "description": "An extension for managing dbt projects using Workday's Data JSON (DJ) Framework", "icon": "assets/images/dj-logo.png", "license": "Apache-2.0", "publisher": "workday", "repository": { "type": "git", - "url": "https://github.com/Workday/vscode-dbt-json.git" + "url": "https://github.com/Workday/dj.git" }, - "version": "1.3.6", + "version": "1.3.7", "workspaces": [ "web" ], @@ -43,8 +43,8 @@ "description": "License: Apache 2.0" }, { - "url": "https://img.shields.io/github/v/release/Workday/vscode-dbt-json", - "href": "https://github.com/Workday/vscode-dbt-json/releases", + "url": "https://img.shields.io/github/v/release/Workday/dj", + "href": "https://github.com/Workday/dj/releases", "description": "GitHub Release" }, { @@ -53,8 +53,8 @@ "description": "dbt Core" }, { - "url": "https://api.scorecard.dev/projects/github.com/Workday/vscode-dbt-json/badge", - "href": "https://scorecard.dev/viewer/?uri=github.com/Workday/vscode-dbt-json", + "url": "https://api.scorecard.dev/projects/github.com/Workday/dj/badge", + "href": "https://scorecard.dev/viewer/?uri=github.com/Workday/dj", "description": "OpenSSF Scorecard" } ], @@ 
-63,9 +63,9 @@ "theme": "dark" }, "pricing": "Free", - "qna": "https://github.com/Workday/vscode-dbt-json/discussions", + "qna": "https://github.com/Workday/dj/discussions", "bugs": { - "url": "https://github.com/Workday/vscode-dbt-json/issues" + "url": "https://github.com/Workday/dj/issues" }, "activationEvents": [ "workspaceContains:**/dbt_project.yml" @@ -243,7 +243,7 @@ } ], "configuration": { - "title": "DJ (dbt-json framework)", + "title": "DJ (Data JSON) Framework", "type": "object", "properties": { "dj.aiHintTag": { @@ -352,7 +352,7 @@ "error" ], "default": "info", - "description": "Logging level for DJ extension diagnostics." + "description": "Logging level for DJ (Data JSON) Framework diagnostics." }, "dj.materialization.defaultIncrementalStrategy": { "type": "string", @@ -705,7 +705,7 @@ "activitybar": [ { "id": "dj", - "title": "DJ (dbt-json) Framework", + "title": "DJ (Data JSON) Framework", "icon": "assets/images/dj-icon-white.svg" } ], diff --git a/schemas/model.incremental_strategy.schema.json b/schemas/model.incremental_strategy.schema.json index efb0f57..b1071d5 100644 --- a/schemas/model.incremental_strategy.schema.json +++ b/schemas/model.incremental_strategy.schema.json @@ -1,8 +1,8 @@ { "$id": "model.incremental_strategy.schema.json", "title": "Incremental Strategy", - "description": "Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. 
When in doubt, use 'delete+insert' with a partition column as unique_key.", - "markdownDescription": "**Incremental Strategy (dbt-trino)**\n\n| Strategy | When to use | Caveat |\n|---|---|---|\n| `append` | Fast insert-only, no dedup | Upstream must guarantee no duplicates |\n| `delete+insert` | Partition-safe upsert, **safe default** | Requires `unique_key` (auto-derived from partitions) |\n| `merge` | Row-level upsert on a primary key | **dbt-trino requires Iceberg format** |\n| `overwrite_existing_partitions` | Re-drop & rewrite only touched partitions | **Requires a custom dbt macro in your project.** The DJ extension does NOT ship this strategy. If your project does not define `get_incremental_overwrite_existing_partitions_sql` (or the equivalent adapter dispatch macro), use `delete+insert` instead. |\n| `dj_iceberg_partition_overwrite` | Re-drop & rewrite only touched partitions on Iceberg tables | **Shipped by DJ** (`macros/strategies.sql` -> `/macros/_ext_/`). **Requires Iceberg format** (`materialization.format: \"iceberg\"` or project var `storage_type: iceberg`). On Delta Lake / Hive use `delete+insert` instead. |", + "description": "Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. 
When in doubt, use 'delete+insert' with a partition column as unique_key.", + "markdownDescription": "**Incremental Strategy (dbt-trino)**\n\n| Strategy | When to use | Caveat |\n|---|---|---|\n| `append` | Fast insert-only, no dedup | Upstream must guarantee no duplicates |\n| `delete+insert` | Partition-safe upsert, **safe default** | Requires `unique_key` (auto-derived from partitions) |\n| `merge` | Row-level upsert on a primary key | **dbt-trino requires Iceberg format** |\n| `overwrite_existing_partitions` | Re-drop & rewrite only touched partitions | **Requires a custom dbt macro in your project.** The DJ (Data JSON) Framework does NOT ship this strategy. If your project does not define `get_incremental_overwrite_existing_partitions_sql` (or the equivalent adapter dispatch macro), use `delete+insert` instead. |\n| `dj_iceberg_partition_overwrite` | Re-drop & rewrite only touched partitions on Iceberg tables | **Shipped by DJ** (`macros/strategies.sql` -> `/macros/_ext_/`). **Requires Iceberg format** (`materialization.format: \"iceberg\"` or project var `storage_type: iceberg`). On Delta Lake / Hive use `delete+insert` instead. |", "anyOf": [ { "description": "Append: insert new rows without de-duplication. Fastest strategy, but upstream must guarantee no duplicates in the new slice. No unique_key needed.", @@ -31,11 +31,15 @@ "unique_key": { "description": "Override the unique key(s) to use for delete+insert. 
Defaults to the model's partition column when omitted.", "anyOf": [ - { "type": "string" }, + { + "type": "string" + }, { "type": "array", "minItems": 1, - "items": { "type": "string" } + "items": { + "type": "string" + } } ] } @@ -55,29 +59,37 @@ "unique_key": { "description": "The unique key(s) to use for merging.", "anyOf": [ - { "type": "string" }, + { + "type": "string" + }, { "type": "array", "minItems": 1, - "items": { "type": "string" } + "items": { + "type": "string" + } } ] }, "merge_exclude_columns": { "description": "The columns to exclude when merging.", "type": "array", - "items": { "type": "string" } + "items": { + "type": "string" + } }, "merge_update_columns": { "description": "The columns to update when merging.", "type": "array", - "items": { "type": "string" } + "items": { + "type": "string" + } } } }, { - "description": "Overwrite existing partitions: drop and rewrite only the partitions present in the new slice. REQUIRES a custom dbt macro in your project (e.g. get_incremental_overwrite_existing_partitions_sql). The DJ extension does NOT ship this macro and dbt-trino does NOT provide it natively. The macro derives the partition list from the new slice itself, so unique_key is not applicable for this strategy and is rejected by the schema. If your project does not define this macro, use 'delete+insert' with a partition column as unique_key — it produces equivalent behavior for daily/monthly partitioned models.", - "markdownDescription": "**`overwrite_existing_partitions`** — drop & rewrite only partitions in the new slice.\n\n> **WARNING — CUSTOM MACRO REQUIRED.** This strategy is NOT shipped by the DJ extension and is NOT part of vanilla dbt-trino. 
Your dbt project must define the dispatch macro (typically `get_incremental_overwrite_existing_partitions_sql`) for it to compile.\n>\n> **`unique_key` is not applicable** — the macro derives the partition list from the new slice itself, so this strategy ignores `unique_key` and the schema rejects it.\n>\n> **Don't have the macro?** Use `{ \"type\": \"delete+insert\" }` instead — if a partition column exists, DJ auto-fills `unique_key` and the behavior is equivalent for partition-aligned daily/monthly incrementals.", + "description": "Overwrite existing partitions: drop and rewrite only the partitions present in the new slice. REQUIRES a custom dbt macro in your project (e.g. get_incremental_overwrite_existing_partitions_sql). The DJ (Data JSON) Framework does NOT ship this macro and dbt-trino does NOT provide it natively. The macro derives the partition list from the new slice itself, so unique_key is not applicable for this strategy and is rejected by the schema. If your project does not define this macro, use 'delete+insert' with a partition column as unique_key — it produces equivalent behavior for daily/monthly partitioned models.", + "markdownDescription": "**`overwrite_existing_partitions`** — drop & rewrite only partitions in the new slice.\n\n> **WARNING — CUSTOM MACRO REQUIRED.** This strategy is NOT shipped by the DJ (Data JSON) Framework and is NOT part of vanilla dbt-trino. 
Your dbt project must define the dispatch macro (typically `get_incremental_overwrite_existing_partitions_sql`) for it to compile.\n>\n> **`unique_key` is not applicable** — the macro derives the partition list from the new slice itself, so this strategy ignores `unique_key` and the schema rejects it.\n>\n> **Don't have the macro?** Use `{ \"type\": \"delete+insert\" }` instead — if a partition column exists, DJ auto-fills `unique_key` and the behavior is equivalent for partition-aligned daily/monthly incrementals.", "type": "object", "required": ["type"], "additionalProperties": false, @@ -89,7 +101,7 @@ } }, { - "description": "DJ Iceberg partition overwrite: drop and rewrite only the partitions present in the new slice on an Iceberg target. SHIPPED by the DJ extension via macros/strategies.sql (auto-copied to /macros/_ext_/strategies.sql on project refresh). REQUIRES Iceberg format on the target table — set materialization.format='iceberg' or the project var storage_type='iceberg'; otherwise the macro silently degrades to a full-table refresh. The macro derives the partition list from the new slice itself by reading properties.partitioning, so unique_key is not applicable and is rejected by the schema. On Delta Lake / Hive use 'delete+insert' instead.", + "description": "DJ Iceberg partition overwrite: drop and rewrite only the partitions present in the new slice on an Iceberg target. SHIPPED by the DJ (Data JSON) Framework via macros/strategies.sql (auto-copied to /macros/_ext_/strategies.sql on project refresh). REQUIRES Iceberg format on the target table — set materialization.format='iceberg' or the project var storage_type='iceberg'; otherwise the macro silently degrades to a full-table refresh. The macro derives the partition list from the new slice itself by reading properties.partitioning, so unique_key is not applicable and is rejected by the schema. 
On Delta Lake / Hive use 'delete+insert' instead.", "markdownDescription": "**`dj_iceberg_partition_overwrite`** — drop & rewrite only partitions in the new slice on Iceberg tables.\n\n> **Shipped by DJ.** No consumer setup required — `macros/strategies.sql` is auto-copied to `/macros/_ext_/strategies.sql` when you run **DJ: Refresh Projects**. The dispatch macro is `get_incremental_dj_iceberg_partition_overwrite_sql`.\n>\n> **WARNING — Iceberg format required.** This strategy reads the Iceberg-only `properties.partitioning` config. Set `materialization.format: \"iceberg\"` or the project var `storage_type: iceberg`. On Delta Lake / Hive the macro silently degrades to a full-table refresh — DJ surfaces this as a Problems-tab error.\n>\n> **`unique_key` is not applicable** — the macro derives partitions from the new slice itself, so this strategy ignores `unique_key` and the schema rejects it.\n>\n> **Not on Iceberg?** Use `{ \"type\": \"delete+insert\" }` instead — DJ auto-fills `unique_key` from the partition column.", "type": "object", "required": ["type"], diff --git a/src/extension.ts b/src/extension.ts index 830b232..75f2b71 100644 --- a/src/extension.ts +++ b/src/extension.ts @@ -13,7 +13,7 @@ export async function activate(context: vscode.ExtensionContext) { } catch (error: unknown) { console.error('[DJ] FATAL ERROR during extension activation:', error); vscode.window.showErrorMessage( - `DJ Extension failed to activate: ${error instanceof Error ? error.message : String(error)}`, + `DJ (Data JSON) Framework extension failed to activate: ${error instanceof Error ? 
error.message : String(error)}`, ); throw error; } diff --git a/src/services/__tests__/agent.test.ts b/src/services/__tests__/agent.test.ts index 0083d46..cf17d3d 100644 --- a/src/services/__tests__/agent.test.ts +++ b/src/services/__tests__/agent.test.ts @@ -42,6 +42,6 @@ describe('Skills', () => { test('_AGENTS.md template exists and contains expected content', () => { const content = fs.readFileSync(AGENTS_TEMPLATE, 'utf-8'); expect(content).toBeTruthy(); - expect(content).toContain('DJ (dbt-json) Framework'); + expect(content).toContain('DJ (Data JSON) Framework'); }); }); diff --git a/src/services/constants.ts b/src/services/constants.ts index 553527d..0711e8d 100644 --- a/src/services/constants.ts +++ b/src/services/constants.ts @@ -1,5 +1,5 @@ /** - * Constants and configuration values for the DJ extension + * Constants and configuration values for the DJ (Data JSON) Framework * Centralized to avoid hardcoded values throughout the codebase */ diff --git a/src/services/framework/index.ts b/src/services/framework/index.ts index d8f3eb2..0836b6d 100644 --- a/src/services/framework/index.ts +++ b/src/services/framework/index.ts @@ -457,7 +457,7 @@ export class Framework implements ApiEnabledService<'framework'> { // 4. Show prompt const answer = await vscode.window.showInformationMessage( - 'The DJ extension uses a .dj folder for local state. Would you like to add it to .gitignore?', + 'The DJ (Data JSON) Framework extension uses a .dj folder for local state. Would you like to add it to .gitignore?', 'Yes', 'No', "Don't ask again", @@ -465,7 +465,7 @@ export class Framework implements ApiEnabledService<'framework'> { if (answer === 'Yes') { const newLine = content.endsWith('\n') ? 
'\n' : '\n\n'; - const newContent = `${content}${newLine}# DJ Extension\n${DJ_IGNORE_ENTRY}\n`; + const newContent = `${content}${newLine}# DJ (Data JSON) Framework\n${DJ_IGNORE_ENTRY}\n`; fs.writeFileSync(gitignorePath, newContent, 'utf8'); vscode.window.showInformationMessage('Added .dj/ to .gitignore'); diff --git a/src/services/types/config.ts b/src/services/types/config.ts index 8dd2b6f..c68cef7 100644 --- a/src/services/types/config.ts +++ b/src/services/types/config.ts @@ -7,7 +7,7 @@ import type { DefaultIncrementalStrategy } from '@shared/framework/types'; import type { LogLevel } from '@shared/types/common'; /** - * Configuration object from VSCode for the DJ extension. + * Configuration object from VSCode for the DJ (Data JSON) Framework. * Matches all settings defined in package.json. * This is extension-only - the web doesn't need the full config. */ diff --git a/src/shared/schema/types/model.incremental_strategy.schema.d.ts b/src/shared/schema/types/model.incremental_strategy.schema.d.ts index 12ff0c0..c2211fe 100644 --- a/src/shared/schema/types/model.incremental_strategy.schema.d.ts +++ b/src/shared/schema/types/model.incremental_strategy.schema.d.ts @@ -6,7 +6,7 @@ */ /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 
'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.materialization.schema.d.ts b/src/shared/schema/types/model.materialization.schema.d.ts index ca1c1ea..9833d4d 100644 --- a/src/shared/schema/types/model.materialization.schema.d.ts +++ b/src/shared/schema/types/model.materialization.schema.d.ts @@ -38,7 +38,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.schema.d.ts b/src/shared/schema/types/model.schema.d.ts index cba2f67..80b4d8d 100644 --- a/src/shared/schema/types/model.schema.d.ts +++ b/src/shared/schema/types/model.schema.d.ts @@ -79,7 +79,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. 
Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.int_join_column.schema.d.ts b/src/shared/schema/types/model.type.int_join_column.schema.d.ts index 79d3e60..7b0d0a2 100644 --- a/src/shared/schema/types/model.type.int_join_column.schema.d.ts +++ b/src/shared/schema/types/model.type.int_join_column.schema.d.ts @@ -147,7 +147,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. 
NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.int_join_models.schema.d.ts b/src/shared/schema/types/model.type.int_join_models.schema.d.ts index 299530b..f2fc076 100644 --- a/src/shared/schema/types/model.type.int_join_models.schema.d.ts +++ b/src/shared/schema/types/model.type.int_join_models.schema.d.ts @@ -147,7 +147,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. 
*/ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.int_lookback_model.schema.d.ts b/src/shared/schema/types/model.type.int_lookback_model.schema.d.ts index e5364d8..a05da70 100644 --- a/src/shared/schema/types/model.type.int_lookback_model.schema.d.ts +++ b/src/shared/schema/types/model.type.int_lookback_model.schema.d.ts @@ -147,7 +147,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.int_rollup_model.schema.d.ts b/src/shared/schema/types/model.type.int_rollup_model.schema.d.ts index 35972fd..7d47ec9 100644 --- a/src/shared/schema/types/model.type.int_rollup_model.schema.d.ts +++ b/src/shared/schema/types/model.type.int_rollup_model.schema.d.ts @@ -147,7 +147,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. 
Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.int_select_model.schema.d.ts b/src/shared/schema/types/model.type.int_select_model.schema.d.ts index adf5f06..aa15cec 100644 --- a/src/shared/schema/types/model.type.int_select_model.schema.d.ts +++ b/src/shared/schema/types/model.type.int_select_model.schema.d.ts @@ -147,7 +147,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. 
NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.int_union_models.schema.d.ts b/src/shared/schema/types/model.type.int_union_models.schema.d.ts index 6d5d0f9..5224bfe 100644 --- a/src/shared/schema/types/model.type.int_union_models.schema.d.ts +++ b/src/shared/schema/types/model.type.int_union_models.schema.d.ts @@ -147,7 +147,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. 
*/ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.stg_select_model.schema.d.ts b/src/shared/schema/types/model.type.stg_select_model.schema.d.ts index 28b0f20..95cf82a 100644 --- a/src/shared/schema/types/model.type.stg_select_model.schema.d.ts +++ b/src/shared/schema/types/model.type.stg_select_model.schema.d.ts @@ -64,7 +64,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.stg_select_source.schema.d.ts b/src/shared/schema/types/model.type.stg_select_source.schema.d.ts index 7a9b8ec..ea432ea 100644 --- a/src/shared/schema/types/model.type.stg_select_source.schema.d.ts +++ b/src/shared/schema/types/model.type.stg_select_source.schema.d.ts @@ -64,7 +64,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. 
Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/schema/types/model.type.stg_union_sources.schema.d.ts b/src/shared/schema/types/model.type.stg_union_sources.schema.d.ts index 83583c5..cf52d0a 100644 --- a/src/shared/schema/types/model.type.stg_union_sources.schema.d.ts +++ b/src/shared/schema/types/model.type.stg_union_sources.schema.d.ts @@ -64,7 +64,7 @@ export type SchemaColumnName = string; */ export type SchemaModelPartitions = SchemaColumnName[]; /** - * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ extension. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. + * Incremental Strategy for dbt-trino. Pick one of: 'append', 'delete+insert', 'merge', 'overwrite_existing_partitions', 'dj_iceberg_partition_overwrite'. 
NOTE: 'overwrite_existing_partitions' requires a custom dbt macro in your project and is not shipped by the DJ (Data JSON) Framework. 'merge' and 'dj_iceberg_partition_overwrite' require the target table to use Iceberg format in dbt-trino. When in doubt, use 'delete+insert' with a partition column as unique_key. */ export type IncrementalStrategy = | { diff --git a/src/shared/web/constants.ts b/src/shared/web/constants.ts index b6932cb..c1fd829 100644 --- a/src/shared/web/constants.ts +++ b/src/shared/web/constants.ts @@ -4,5 +4,5 @@ */ export const EXTERNAL_LINKS = { - documentation: 'https://github.com/Workday/vscode-dbt-json#readme', + documentation: 'https://github.com/Workday/dj#readme', }; diff --git a/templates/_AGENTS.md b/templates/_AGENTS.md index 836c0e9..801522d 100644 --- a/templates/_AGENTS.md +++ b/templates/_AGENTS.md @@ -1,11 +1,11 @@ -# AGENTS.md — DJ (dbt-json) Framework Guide +# AGENTS.md — DJ (Data JSON) Framework Guide -> This file is auto-generated by the Workday DJ VS Code extension. +> This file is auto-generated by the Workday DJ (Data JSON) Framework VS Code extension. > It provides LLMs with the context needed to create and modify `.model.json` and `.source.json` files in this dbt project. ## Overview -This project uses the **DJ (dbt-json) framework** — a JSON-based abstraction layer on top of dbt. Instead of writing raw SQL and YML files by hand, developers author `.model.json` and `.source.json` files. The DJ extension then **auto-generates** the corresponding `.sql` and `.yml` files via a process called "JSON Sync." You should **never** manually edit the generated `.sql` or `.yml` files — only edit the `.model.json` and `.source.json` files. +This project uses the **DJ (Data JSON) Framework** — a JSON-based abstraction layer on top of dbt. Instead of writing raw SQL and YML files by hand, developers author `.model.json` and `.source.json` files. 
The DJ (Data JSON) Framework then **auto-generates** the corresponding `.sql` and `.yml` files via a process called "JSON Sync." You should **never** manually edit the generated `.sql` or `.yml` files — only edit the `.model.json` and `.source.json` files. All JSON files use the **JSONC** format (JSON with Comments). Trailing commas are allowed. Preserve any existing comments when editing files. @@ -699,7 +699,7 @@ Use the `materialization` field instead of the legacy `materialized` + `incremen | `append` | `{ "type": "append" }` | Fast insert-only; no de-dup | Upstream must guarantee no duplicates in the new slice | | `delete+insert` | `{ "type": "delete+insert", "unique_key": "..." }` | Partition-safe upsert (**safe default**) | `unique_key` is auto-derived from partitions when omitted | | `merge` | `{ "type": "merge", "unique_key": "id", "merge_update_columns": [...], "merge_exclude_columns": [...] }` | Row-level upsert on a primary key | **dbt-trino requires Iceberg format.** Set `materialization.format: "iceberg"` or the project var `storage_type: iceberg` | -| `overwrite_existing_partitions` | `{ "type": "overwrite_existing_partitions" }` | Drop & rewrite only partitions present in the new slice | **Requires a custom dbt macro in your project** (e.g. `get_incremental_overwrite_existing_partitions_sql`). The DJ extension does NOT ship this macro and dbt-trino does NOT provide it natively. `unique_key` is **not applicable** for this strategy the macro derives partitions from the new slice itself, and the schema rejects `unique_key`. If your project does not define the macro, use `{ "type": "delete+insert" }` instead, behavior is equivalent for partition-aligned daily/monthly incrementals when `unique_key` is the partition column. | +| `overwrite_existing_partitions` | `{ "type": "overwrite_existing_partitions" }` | Drop & rewrite only partitions present in the new slice | **Requires a custom dbt macro in your project** (e.g. 
`get_incremental_overwrite_existing_partitions_sql`). The DJ (Data JSON) Framework does NOT ship this macro and dbt-trino does NOT provide it natively. `unique_key` is **not applicable** for this strategy: the macro derives partitions from the new slice itself, and the schema rejects `unique_key`. If your project does not define the macro, use `{ "type": "delete+insert" }` instead; the behavior is equivalent for partition-aligned daily/monthly incrementals when `unique_key` is the partition column. | | `dj_iceberg_partition_overwrite` | `{ "type": "dj_iceberg_partition_overwrite" }` | Drop & rewrite only partitions present in the new slice on **Iceberg** tables | **Shipped by DJ.** No consumer macro required; `macros/strategies.sql` is auto-copied to `/macros/_ext_/strategies.sql` on **DJ: Refresh Projects**. The dispatch macro is `get_incremental_dj_iceberg_partition_overwrite_sql`. **Requires Iceberg format**: set `materialization.format: "iceberg"` or project var `storage_type: iceberg`; otherwise DJ flags it in the Problems tab. `unique_key` is **not applicable**; the macro derives partitions from the new slice itself. On Delta Lake / Hive use `{ "type": "delete+insert" }` instead. | ### Legacy Incremental Configuration @@ -720,7 +720,7 @@ Still supported but prefer `materialization` above: ## Portal-Specific Columns -The DJ framework automatically adds these columns: +The DJ (Data JSON) Framework automatically adds these columns: ### `portal_source_count` @@ -1045,7 +1045,7 @@ Subqueries can appear in `where`, `having`, and join `on` conditions via the `sub` ## Scheduling & ETL -The DJ framework uses an ETL scheduler (via Airflow) that determines **which event dates** need to be processed. This is driven by source configurations: +The DJ (Data JSON) Framework uses an ETL scheduler (via Airflow) that determines **which event dates** need to be processed. This is driven by source configurations: 1.
**Sources with `event_count` ETL type**: The scheduler queries source tables to detect which dates have new or changed rows, then runs only those dates through the downstream model DAG. 2. **Sources with `run_schedule` ETL type**: The scheduler triggers downstream models on a fixed cron schedule. @@ -1076,7 +1076,7 @@ When adding a new model to the project: 4. **Place the file** in the correct directory: `models////` 5. **Name the file**: `______.model.json` 6. If reading from a new external table, **create a `.source.json` file** first -7. **Do NOT** create or edit `.sql` or `.yml` files — they are auto-generated by the DJ extension +7. **Do NOT** create or edit `.sql` or `.yml` files — they are auto-generated by the DJ (Data JSON) Framework When adding a new source: @@ -1181,7 +1181,7 @@ You can also look at existing `.model.json` and `.source.json` files in the `mod **Never assume a column exists — always verify it in the upstream definition.** This prevents referencing columns that don't exist. -12. **To rename or move a model, update its `.model.json` fields — not the filename.** Change the `group`, `topic`, and/or `name` fields inside the JSON file. The DJ extension will automatically rename/move the file and regenerate the corresponding `.sql` and `.yml` files to match. Do not manually rename or move model files on disk. +12. **To rename or move a model, update its `.model.json` fields — not the filename.** Change the `group`, `topic`, and/or `name` fields inside the JSON file. The DJ (Data JSON) Framework will automatically rename/move the file and regenerate the corresponding `.sql` and `.yml` files to match. Do not manually rename or move model files on disk. 13. **CTE `group_by` must not use bare string aliases for computed columns.** If a CTE select item has `"name": "month", "expr": "DATE_TRUNC('MONTH', event_date)"`, using `"group_by": ["month"]` will pass schema validation but fail at Trino with `COLUMN_NOT_FOUND`. 
Use `"group_by": "dims"` or `"group_by": [{ "expr": "DATE_TRUNC('MONTH', event_date)" }]` instead. 14. **CTE bulk selects support `exclude`/`include` filters.** `all_from_cte`, `dims_from_cte`, and `fcts_from_cte` accept `exclude` and `include` arrays, matching the behavior of model-level bulk selects. Plain string column selects in CTEs inherit their `dim`/`fct` type from the upstream model or CTE. 15. **Source freshness can be disabled.** Set `"freshness": null` at source level or table level to disable dbt freshness checks. Individual tables can override the source-level `loaded_at_field`. diff --git a/templates/skills/dj-create-new-model/_SKILL.md b/templates/skills/dj-create-new-model/_SKILL.md index 699d3c5..2985130 100644 --- a/templates/skills/dj-create-new-model/_SKILL.md +++ b/templates/skills/dj-create-new-model/_SKILL.md @@ -4,7 +4,7 @@ description: >- Create a DJ .model.json file for a new dbt model. Use when the user wants to create, add, or scaffold a dbt model -- staging, intermediate, or mart -- including joins, CTEs, rollup, subqueries, or aggregations. -compatibility: DJ extension workspace with .dj/schemas/ and .agents/dj/AGENTS.md +compatibility: DJ (Data JSON) Framework extension workspace with .dj/schemas/ and .agents/dj/AGENTS.md metadata: dj-framework-skill: '1.0' --- diff --git a/tests/README.md b/tests/README.md index 6387b5c..ae5162a 100644 --- a/tests/README.md +++ b/tests/README.md @@ -1,10 +1,10 @@ # Tests -This directory contains the test suite for the DJ (dbt-json) extension, which validates that the framework correctly converts JSON model definitions to SQL and YAML files. +This directory contains the test suite for the DJ (Data JSON) Framework extension, which validates that the framework correctly converts JSON model definitions to SQL and YAML files. 
## Overview -The DJ extension allows defining dbt models using JSON schemas, which are then automatically converted to: +The DJ (Data JSON) Framework extension allows defining dbt models using JSON schemas, which are then automatically converted to: - **SQL files**: Executable dbt model code - **YAML files**: dbt model properties and documentation @@ -29,7 +29,7 @@ tests/ **What are fixtures?** Test data files that represent real, working examples from the `docs/examples/jaffle_shop` project. -**Coverage:** Staging, intermediate, and mart models that test all DJ framework capabilities. +**Coverage:** Staging, intermediate, and mart models that test all DJ (Data JSON) Framework capabilities. ## Usage @@ -78,7 +78,7 @@ npm run fixtures:update ## Contributing -When contributing to the DJ framework: +When contributing to the DJ (Data JSON) Framework: 1. **Always update fixtures** after model changes (`npm run fixtures:update`) 2. **Run tests** before submitting PRs diff --git a/web/package.json b/web/package.json index 5a9c594..bfe2cbf 100644 --- a/web/package.json +++ b/web/package.json @@ -6,7 +6,7 @@ "publisher": "workday", "repository": { "type": "git", - "url": "https://github.com/Workday/vscode-dbt-json.git" + "url": "https://github.com/Workday/dj.git" }, "type": "module", "version": "0.1.0", From c978bc4aa2522c4f535b5f66b676c105638509cb Mon Sep 17 00:00:00 2001 From: Gowtham Raj J Date: Mon, 4 May 2026 21:23:45 +0530 Subject: [PATCH 2/2] re add tagignore --- .tagignore | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 .tagignore diff --git a/.tagignore b/.tagignore new file mode 100644 index 0000000..736970d --- /dev/null +++ b/.tagignore @@ -0,0 +1,16 @@ +# Files/directories to ignore when determining if a tag should be created +docs/ +.github/ +.prettierignore +.prettierrc +.eslintrc.json +.eslintcache +.gitignore +.vscodeignore +DEVELOPMENT_SETUP.md +CONTRIBUTING.md +CODE_OF_CONDUCT.md +LICENSE.md +NOTICE +Makefile +.tagignore \ No 
newline at end of file
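The incremental-strategy rules repeated throughout the schema doc comments in this patch (Iceberg format required for `merge` and `dj_iceberg_partition_overwrite`; `unique_key` rejected for the partition-overwrite strategies) can be sketched as a small TypeScript check. The union shape below mirrors the documented strategies, but `validateStrategy` is a hypothetical illustration, not part of the extension's actual API.

```typescript
// Sketch of the documented IncrementalStrategy rules for dbt-trino.
// NOTE: `validateStrategy` is a hypothetical helper for illustration only;
// the DJ extension does not expose this function.
type IncrementalStrategy =
  | { type: 'append' }
  | { type: 'delete+insert'; unique_key?: string }
  | {
      type: 'merge';
      unique_key: string;
      merge_update_columns?: string[];
      merge_exclude_columns?: string[];
    }
  | { type: 'overwrite_existing_partitions' }
  | { type: 'dj_iceberg_partition_overwrite' };

type StorageFormat = 'iceberg' | 'delta' | 'hive';

function validateStrategy(s: IncrementalStrategy, format?: StorageFormat): string[] {
  const problems: string[] = [];
  // 'merge' and 'dj_iceberg_partition_overwrite' require Iceberg in dbt-trino.
  if ((s.type === 'merge' || s.type === 'dj_iceberg_partition_overwrite') && format !== 'iceberg') {
    problems.push(`'${s.type}' requires materialization.format: "iceberg"`);
  }
  // The partition-overwrite strategies derive partitions from the new slice
  // itself, so the schema rejects unique_key for them.
  if (
    (s.type === 'overwrite_existing_partitions' || s.type === 'dj_iceberg_partition_overwrite') &&
    'unique_key' in s
  ) {
    problems.push(`'${s.type}' does not accept unique_key`);
  }
  return problems;
}
```

For example, `validateStrategy({ type: 'merge', unique_key: 'id' }, 'delta')` would flag the missing Iceberg format, while `delete+insert` with a partition column as `unique_key` passes on any format — consistent with the "safe default" guidance in the strategy table.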