
Python Deterministic Token Trimming for Message Truncation#765

Merged
themaherkhalil merged 4 commits into dev from deterministic-token-trimming
May 4, 2025

Conversation

@ryanweiler92
Collaborator

Description

Updating the truncation strategy for messages exceeding the context window to use deterministic token trimming.

Changes Made

Added the _truncate_by_tokens() method to the OpenAIChatCompletionClient class.

The new truncation strategy works as follows:

  1. Encode every message once
  2. If over the safe window, drop oldest messages until only one needs trimming
  3. Clip tokens from the front of that last message
  4. Guaranteed ≤ safe window in a single pass
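
The four steps can be sketched as follows. This is an illustrative, simplified version (whitespace "words" stand in for real token ids, and system-prompt handling is omitted), not the actual client code:

```python
def encode(text):
    # Stand-in tokenizer: whitespace words instead of real token ids
    return text.split()

def decode(tokens):
    return " ".join(tokens)

def truncate_by_tokens(messages, safe_window):
    toks = [encode(m["content"]) for m in messages]   # 1. encode every message once
    total = sum(len(t) for t in toks)
    if total <= safe_window:
        return messages
    to_cut = total - safe_window                      # exact excess
    kept = []
    for m, t in zip(messages, toks):
        if to_cut >= len(t):                          # 2. drop oldest whole messages
            to_cut -= len(t)
            continue
        if to_cut:                                    # 3. clip tokens from the front
            t = t[to_cut:]
            to_cut = 0
        kept.append({**m, "content": decode(t)})
    return kept                                       # 4. total is now <= safe_window

msgs = [{"role": "user", "content": "a b c d e"},
        {"role": "user", "content": "f g h i j"}]
out = truncate_by_tokens(msgs, 7)
# oldest message is clipped from the front: ["d e", "f g h i j"], 7 tokens total
```

Because everything is encoded once up front and the excess is computed exactly, the result fits the safe window in a single pass with no re-tokenization loop.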

How to Test

from ai_server import ModelEngine, ServerClient

server_connection = ServerClient(
    base="http://localhost:9090/Monolith/api",
    access_key="your_access_key",
    secret_key="your_secret_key",
)

is_connected = server_connection.connected
print(f"Am I connected to the server? {is_connected}")


model = ModelEngine(
    engine_id="4acbe913-df40-4ac0-b28a-daa5ad91b172",
    insight_id=server_connection.cur_insight,
)


def generate_large_token_string(target_tokens=130000):
    # Repeat a short sentence enough times to comfortably exceed target_tokens
    base_text = "This is a test sentence to generate many tokens. "

    large_string = base_text * (target_tokens // 6 + 100)

    print(f"Generated string length: {len(large_string)} characters")
    print(f"Estimated tokens: ~{len(large_string) // 4}")  # rough ~4 chars/token heuristic

    return large_string


question = "What weighs more, a pound of feathers or a pound of bricks?"
context = "You are a helpful assistant. " + generate_large_token_string(130000)

try:
    response = model.ask(
        question=question,
        context=context,
        param_dict={"stream": False},
    )
    print(response)
except Exception as e:
    print(f"Error: {e}")

Notes

@ryanweiler92 ryanweiler92 self-assigned this May 1, 2025
@github-actions

github-actions bot commented May 1, 2025

@CodiumAI-Agent /describe

@QodoAI-Agent

Title

Python Deterministic Token Trimming for Message Truncation



PR Type

Enhancement


Description

  • Implement deterministic ChatML token trimming

  • Add safe encode/decode tokenizer methods

  • Extend model limits with new Llama model

  • Pass context_window and max tokens to tokenizer


Changes walkthrough 📝

Relevant files:

Configuration changes

model_limits.py: Add Llama-3.1-8B-Instruct model limits
py/genai_client/model_limits.py
  • Added meta-llama/Llama-3.1-8B-Instruct limits
  • Kept FALLBACK_CONFIG unchanged
  +4/-0

openai_api_inference_server.py: Pass token limit args to tokenizer
py/genai_client/text_generation/openai_clients/openai_api_inference_server.py
  • Forwarded max_completion_tokens to tokenizer
  • Forwarded context_window to tokenizer init
  +2/-0

Enhancement

openai_chat_completion_client.py: Implement deterministic token trimming logic
py/genai_client/text_generation/openai_clients/openai_chat_completion_client.py
  • Replaced _truncate_messages with _truncate_by_tokens
  • Tokenize once, drop oldest messages or trim the start of the remaining one
  • Simplified the check_token_limits truncation call
  +54/-98

huggingface_tokenizer.py: Add safe encode/decode tokenizer helpers
py/genai_client/tokenizers/huggingface_tokenizer.py
  • Added _safe_encode method without special tokens
  • Added _safe_decode fallback for string tokens
  +17/-0

openai_tokenizer.py: Implement safe encode/decode for OpenAI tokenizer
py/genai_client/tokenizers/openai_tokenizer.py
  • Added _safe_encode for text-to-token conversion
  • Added _safe_decode for token-to-text conversion
  +12/-0

Need help?
  • Type /help how to ... in the comments thread for any questions about PR-Agent usage.
  • Check out the documentation for more information.
    @github-actions

    github-actions bot commented May 1, 2025

    @CodiumAI-Agent /review

    @github-actions

    github-actions bot commented May 1, 2025

    @CodiumAI-Agent /improve

    @QodoAI-Agent
    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Truncation Logic

    Verify that _truncate_by_tokens correctly computes and slices tokens so that only the oldest messages are dropped first and then exactly trims the remaining message to fit within safe_window, while preserving the original message order and roles.

    def _truncate_by_tokens(
        self,
        messages: List[dict],
        safe_window: int,
        keep_system: bool = True,
    ) -> List[dict]:
        """
        Returns a ChatML history whose **total** token count
        is ≤ safe_window.
        Oldest non-system messages are dropped first; when only
        one message needs trimming we cut tokens from its *start*.
        """
    
        # --- Tokenise *once* ----------------------------------------
        toks_per_msg = []
        total = 0
        for m in messages:
            toks = self.tokenizer._safe_encode(m["content"])
            toks_per_msg.append(toks)
            total += len(toks)
    
        if total <= safe_window:
            return messages  # nothing to do
    
        to_cut = total - safe_window  # exact excess
        keep_flags = [True] * len(messages)
    
        # --- Build truncation order ---------------------------------
        # oldest->newest, but leave system message till the end
        order = list(range(len(messages)))
        if keep_system and messages and messages[0]["role"] == "system":
            order = order[1:] + [0]
    
        # --- Drop or trim -------------------------------------------
        for idx in order:
            if to_cut == 0:
                break
            toks = toks_per_msg[idx]
            if len(toks) <= to_cut:
                # drop whole message
                keep_flags[idx] = False
                to_cut -= len(toks)
            else:
                # keep tail part of this message
                toks_per_msg[idx] = toks[-(len(toks) - to_cut) :]
                to_cut = 0
    
        # --- Re-build ChatML ----------------------------------------
        new_messages = []
        for keep, m, toks in zip(keep_flags, messages, toks_per_msg):
            if not keep:
                continue
            m = m.copy()
            m["content"] = self.tokenizer._safe_decode(toks)
            new_messages.append(m)
        return new_messages
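
The order-building step above (oldest first, system prompt considered last) can be exercised in isolation. This is an illustrative sketch; the function name and the sample messages are invented for the demo, not part of the repo:

```python
def build_truncation_order(messages, keep_system=True):
    # Mirrors the order construction in _truncate_by_tokens:
    # oldest -> newest, with the leading system message moved to the end
    order = list(range(len(messages)))
    if keep_system and messages and messages[0]["role"] == "system":
        order = order[1:] + [0]
    return order

msgs = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "first question"},
    {"role": "assistant", "content": "first answer"},
    {"role": "user", "content": "second question"},
]
print(build_truncation_order(msgs))      # [1, 2, 3, 0]
print(build_truncation_order(msgs[1:]))  # [0, 1, 2]
```

Moving index 0 to the end means the system prompt is only dropped or trimmed after every other message has been consumed.
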
    Safe encode/decode

    Ensure _safe_encode and _safe_decode handle both HF and fallback WordCountTokenizer cases consistently, especially regarding special tokens and the format of returned token lists.

    def _safe_encode(self, text: str) -> List:
        """
        Encode without special tokens; works for real HF tokenizers
        and for the fallback WordCountTokenizer.
        """
        try:
            return self.tokenizer.encode(text, add_special_tokens=False)
        except TypeError:  # WordCountTokenizer accepts no kwarg
            return self.tokenizer.encode(text)
    
    def _safe_decode(self, tokens: List) -> str:
        # WordCountTokenizer returns str tokens; join them
        if tokens and isinstance(tokens[0], str):
            return " ".join(tokens)
        # HF tokenizer returns ints; use its native decode
        return self.tokenizer.decode(tokens)
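
The duck-typing above can be checked with a tiny stand-in tokenizer. `WordTokenizer`, `safe_encode`, and `safe_decode` here are illustrative free-function versions written for this demo, not the repo's methods:

```python
from typing import List

class WordTokenizer:
    """Stand-in for the fallback WordCountTokenizer (assumption):
    encode() accepts no keyword arguments and returns string tokens."""
    def encode(self, text: str) -> List[str]:
        return text.split()

def safe_encode(tokenizer, text: str) -> List:
    # Mirrors _safe_encode: try to suppress special tokens,
    # fall back when the tokenizer rejects the keyword argument
    try:
        return tokenizer.encode(text, add_special_tokens=False)
    except TypeError:
        return tokenizer.encode(text)

def safe_decode(tokenizer, tokens: List) -> str:
    # Mirrors _safe_decode: string tokens are re-joined;
    # integer ids would use the tokenizer's native decode()
    if tokens and isinstance(tokens[0], str):
        return " ".join(tokens)
    return tokenizer.decode(tokens)

tok = WordTokenizer()
t = safe_encode(tok, "hello safe world")
safe_decode(tok, t)  # → "hello safe world"
```

Note that joining string tokens with single spaces is only an approximate round-trip for text with unusual whitespace; the real HF path decodes integer ids natively.
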

    @QodoAI-Agent

    PR Code Suggestions ✨

    Category: General

    Use constants for pop keys

    Replace the hard-coded string keys with the defined constants to avoid typos and
    keep key usage consistent across the codebase.

    py/genai_client/text_generation/openai_clients/openai_api_inference_server.py [20-21]

    -max_completion_tokens=init_args.pop("max_completion_tokens", None),
    -context_window=init_args.pop("context_window", None),
    +max_completion_tokens=init_args.pop(MAX_COMPLETION_TOKENS, None),
    +context_window=init_args.pop(CONTEXT_WINDOW, None),
    Suggestion importance [1-10]: 5

    Why: Using defined constants (MAX_COMPLETION_TOKENS, CONTEXT_WINDOW) improves consistency and reduces typo risk.

    Impact: Low
    Break after trimming

    Add a break immediately after trimming to exit the loop as soon as the required
    tokens are removed.

    py/genai_client/text_generation/openai_clients/openai_chat_completion_client.py [262-265]

     else:
         # keep tail part of this message
    -    toks_per_msg[idx] = toks[-(len(toks) - to_cut) :]
    +    toks_per_msg[idx] = toks[-(len(toks) - to_cut):]
         to_cut = 0
    +    break
    Suggestion importance [1-10]: 3

    Why: Adding break exits the loop immediately after trimming and avoids an extra iteration, though the existing check already breaks on the next cycle.

    Impact: Low
    Category: Possible issue

    Preserve system prompt

    Ensure the system prompt is never removed when keep_system=True by skipping
    drop/truncate logic for messages with role "system".

    py/genai_client/text_generation/openai_clients/openai_chat_completion_client.py [254-265]

     for idx in order:
         if to_cut == 0:
             break
    +    # preserve system prompt if requested
    +    if keep_system and messages[idx]["role"] == "system":
    +        continue
         toks = toks_per_msg[idx]
         if len(toks) <= to_cut:
             # drop whole message
             keep_flags[idx] = False
             to_cut -= len(toks)
         else:
             # keep tail part of this message
    -        toks_per_msg[idx] = toks[-(len(toks) - to_cut) :]
    +        toks_per_msg[idx] = toks[-(len(toks) - to_cut):]
             to_cut = 0
    Suggestion importance [1-10]: 4

    Why: Skipping truncation for the system message enforces preserving it but may prevent meeting safe_window when all other messages are dropped.

    Impact: Low

    @themaherkhalil themaherkhalil merged commit 943e41c into dev May 4, 2025
    3 checks passed
    @themaherkhalil themaherkhalil deleted the deterministic-token-trimming branch May 4, 2025 22:17
    @github-actions

    github-actions bot commented May 4, 2025

    @CodiumAI-Agent /update_changelog

    @QodoAI-Agent

    Changelog updates: 🔄

    2025-05-04 (PR #765)

    Changed

    • Implement deterministic token trimming for message truncation via a new _truncate_by_tokens method
    • Add _safe_encode/_safe_decode support in tokenizers and update context window/config handling

    To commit the new content to the CHANGELOG.md file, type:
    '/update_changelog --pr_update_changelog.push_changelog_changes=true'

    manamittal added a commit that referenced this pull request May 20, 2025
    * fix(python): handle eval when it is a single line execution but there is string input with space (#756)
    
    * Update Dockerfile.tomcat (#757)
    
    * fix: tomcat builder setting env var
    
    * fix: updating tomcat to 9.0.104
    
    * Update Dockerfile.ubuntu22.04
    
    * Update Dockerfile.ubuntu22.04
    
    * Update Dockerfile.ubuntu22.04
    
    * feat: creating KubernetesModelScaler class (#763)
    
    * Update Dockerfile.ubuntu22.04
    
    * feat: adding ability to attach a file to a vector db source (#736)
    
    * Added AttachSourceToVectorDbReactor for uploading pdf file to an existing csv file and modified VectorFileDownloadReactor
    
    * fix: proper return for the download and matching the reactor name
    
    * fix: error for downloading single file vs multiple; error for copyToDirectory instead of copyFile
    
    * chore: renaming so reactor matches VectorFileDownload
    
    ---------
    
    Co-authored-by: Maher Khalil <themaherkhalil@gmail.com>
    
    * Update Dockerfile.ubuntu22.04
    
    * Update ubuntu2204.yml
    
    * Update ubuntu2204.yml
    
    * Update ubuntu2204_cuda.yml
    
    * Update Dockerfile.nvidia.cuda.12.5.1.ubuntu22.04
    
    * Update ubuntu2204_cuda.yml
    
    * Update ubuntu2204.yml
    
    * feat: exposing tools calling through models (#764)
    
    * 587 unit test for prernadsutil (#654)
    
    * test(unit): unit tests for the prerna.util.ds package
    
    * test(unit): unit tests for the prerna.util.ds.flatfile package
    
    * test(unit): removed reflections, added parquet tests
    
    * test(unit): unit tests for the prerna.util.ds package
    
    * test(unit): unit tests for the prerna.util.ds.flatfile package
    
    * test(unit): removed reflections, added parquet tests
    
    * Update ubuntu2204.yml
    
    * Update ubuntu2204.yml
    
    * Update ubuntu2204.yml
    
    * fix: update pipeline docker buildx version
    
    * fix: ignore buildx
    
    * fix: adjusting pipeline for cuda
    
    * feat: switching dynamic sas to default false (#766)
    
    * fix: changes to account for version 2.0.0 of pyjarowinkler (#769)
    
    * chore: using 'Py' instead of 'py' to be consistent (#770)
    
    * feat: full ast parsing of code to return evaluation of the last expression (#771)
    
    * Python Deterministic Token Trimming for Message Truncation (#765)
    
    * feat: deterministic-token-trimming
    
    * feat: modifying logic such that system prompt is second to last message for truncation
    
    ---------
    
    Co-authored-by: Maher Khalil <themaherkhalil@gmail.com>
    
    * fix: added date added column to enginepermission table (#768)
    
    * fix: add docker-in-docker container to run on self-hosted runner (#773)
    
    Co-authored-by: Raul Esquivel <resmas.work@gmail.com>
    
    * fix: properly passing in the parameters from kwargs/smss into model limits calculation (#774)
    
    * fix: removing legacy param from arguments (#777)
    
    * fix: Fix docker cache build issue (#778)
    
    * adding no cache
    
    * adding no cache
    
    * feat: Adding Semantic Text Splitting & Token Text Splitting (#720)
    
    * [696] - build - Add chonky semantic text splitting - Added the function for chonky semantic text splitting and integrated with existing flow.
    
    * [696] - build - Add chonky semantic text splitting - Updated the code
    
    * [696] - build - Add chonky semantic text splitting - Updated the code comments
    
    * feat: adding reactor support through java
    
    * feat: updating pyproject.toml with chonky package
    
    * feat: check for default chunking method in smss
    
    * [696] - feat - Add chonky semantic text splitting - Resolved the conflicts
    
    * [696] - feat - Add chonky semantic text splitting - Organized the code.
    
    * feat: adding chunking by tokens and setting as default
    
    * updating comments on chunking strategies
    
    ---------
    
    Co-authored-by: Weiler, Ryan <ryanweiler92@gmail.com>
    Co-authored-by: kunal0137 <kunal0137@gmail.com>
    
    * feat: allowing for tools message in full prompt (#780)
    
    * UPDATE ::: Add docker in docker Dockerfile (#784)
    
    * add docker in docker Dockerfile
    
    * Update Dockerfile.dind
    
    Remove python and tomcat arguments from Dockerfile
    
    * fix: remove-paddle-ocr (#786)
    
    * [#595] test(unit): adds unit test for prerna.engine.impl.model.kserve
    
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * feat: Tag semoss image (#789)
    
    * adding changes for non-release docker build
    
    * adding non-release build logic to cuda-semoss builder
    
    * updating push branches
    
    * fix: branch names on docker builds
    
    * fix: branch names on docker builds cuda
    
    * fix: adding push condition - change to pyproject toml file; adding event input vars to env vars (#790)
    
    * fix: python builder toml file change (#792)
    
    * fix: Catch errors when calling pixels from Python (#787)
    
    Co-authored-by: Weiler, Ryan <ryanweiler92@gmail.com>
    
    * Creating db links between engines and default apps (#693)
    
    * create db links between engine and default app
    
    * Rename column APPID to TOOL_APP
    
    * feat: add database_tool_app to getUserEngineList
    
    ---------
    
    Co-authored-by: Weiler, Ryan <ryanweiler92@gmail.com>
    
    * Adding sort options to the myengines reactor (#479)
    
    * added sort feature to MyEnginesReactor and genericized reactor imports
    
    * formatting
    
    * overloading method
    
    * validate sortList
    
    ---------
    
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * feat: cleaning up unused imports in MyEngine reactor (#793)
    
    * feat: Create Enum projectTemplate and update CreateAppFromTemplateReactor to accept existing appID for cloning applications (#621)
    
    Co-authored-by: kunal0137 <kunal0137@gmail.com>
    
    * Update GetEngineUsageReactor.java (#417)
    
    Co-authored-by: Maher Khalil <themaherkhalil@gmail.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * Issue 596: Adds Unit Tests for prerna/engine/impl/model/responses and workers (#727)
    
    * [#596] test(unit): adds unit tests
    
    * fix: implements ai-agents suggestions
    
    ---------
    
    Co-authored-by: Jeff Vitunac <jvitunac@gmail.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * 609 implement native blob storage for azure gcp and aws (#674)
    
    * Initial commit : implementation for azure blob storage
    
    * added dependency for azure in pom.xml
    
    * update logic to fetch the metadata from list details
    
    * changed functionality from listing containers to listing files within a selected container
    
    * initial commit for google cloud storage implementation
    
    * added field constant in enum class and removed unused method
    
    * add methods to parse comma-separated local and cloud paths
    
    * add methods to parse comma-separated local and cloud paths
    
    * implementation for aws s3 bucket
    
    * normalize container prefix path
    
    * merged all: implementation for azure, aws and gcp
    
    * refactor(storage): replace manual path normalization with normalizePath from Utility class
    
    ---------
    
    Co-authored-by: pvijayaraghavareddy <pvijayaraghavareddy@WORKSPA-6QV71G7.us.deloitte.com>
    Co-authored-by: Parth <parthpatel3@deloitte.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * Get Node Pool Information for Remote Models (#806)
    
    * 590 unit test for prernaengineimpl (#808)
    
    * test(unit): update to filesystems hijacking for testing files
    
    * test: start of unit tests for abstract database engine
    
    * test(unit): added unit test for prerna.engine.impl
    
    * test(unit): finished tests for prerna.engine.impl
    
    * test(unit): adding back unused assignment
    
    ---------
    
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * Creating WordCountTokenizer Class (#802)
    
    * feat: creating word count tokenizer class && falling back to word count tokenizer if tiktoken fails
    
    * feat: updating comment
    
    * feat: setting default chunking method as recursive (#810)
    
    * Unit tests fixes and Unit test Class file location updates (#812)
    
    * test(unit): moved tests to correct packages
    
    * test(unit): fixed a couple of unit tests
    
    * VectorDatabaseQueryReactor: output divider value for word doc chunks always 1 (#804)
    
    * Code implementation for #733
    
    * feat: Added code to resolve Divider page issue
    
    * Console output replaced by LOGGERs as per review comments
    
    * feat: replaced Console with Loggers
    
    ---------
    
    Co-authored-by: Varaham <katchabi50@gmail.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * GetCurrentUserReactor (#818)
    
    Adding GetCurrentUserReactor to return user info including if user is an admin.
    
    * Python User Class (#819)
    
    * fix: trimming properties read from smss; fix: logging commands before executing (#821)
    
    * Updating getNodePoolsInfo() to parse and return zk info and models active actual (#822)
    
    * feat: update get node pool information for zk info and models active actual
    
    * feat: get remote model configs
    
    * Add unit tests for package prerna\engine\impl\vector (#728)
    
    * Create ChromaVectorDatabaseEngineUnitTests.java
    
    * completed tests for ChromaVectorDatabaseEngine class
    
    * [#604] test(unit): Created ChromaVectorDatabaseEngine unit tests
    
    * [604] tests(unit) : Completed test cases for ChromaVectorDatabaseEngine; update File operations to nio operations in ChromaVectorDatabaseEngine.java
    
    * [#604] tests(unit): added unit tests for all vector database engines and util classes in the prerna\engine\impl\vector package
    
    * [604] test(unit): replaced creating file paths with string literals with java.nio Paths.resolve/Paths.get methods
    
    ---------
    
    Co-authored-by: Maher Khalil <themaherkhalil@gmail.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * feat: adding to the return of getenginemetadata (#813)
    
    * feat: adding to the return of getenginemetadata
    
    * fix: removing throws
    
    ---------
    
    Co-authored-by: Arash Afghahi <48933336+AAfghahi@users.noreply.github.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    
    * 718 create a single reactor to search both engines and apps (#794)
    
    * feat(engineProject): Initial commit
    
    * chore: 718 create a single reactor to search both engines and apps
    
    * chore: 718 create a single reactor to search both engines and apps
    
    ---------
    
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    Co-authored-by: Vijayaraghavareddy <pvijayaraghavareddy@deloitte.com>
    
    * feat: update openai wrapper to handle multiple images (#832)
    
    * feat: adding user room map (#840)
    
    * feat: hiding side menu bar for non admins (#833)
    
    * Side menu changes
    
    * Review Comments fixed
    
    * Flag is renamed in  Constants.java
    
    * Review Comment fixed in Utility.java
    
    * fix: cleaning up defaults and comments
    
    ---------
    
    Co-authored-by: kunal0137 <kunal0137@gmail.com>
    
    ---------
    
    Co-authored-by: Maher Khalil <themaherkhalil@gmail.com>
    Co-authored-by: kunal0137 <kunal0137@gmail.com>
    Co-authored-by: Ryan Weiler <ryanweiler92@gmail.com>
    Co-authored-by: ManjariYadav2310 <manjayadav@deloitte.com>
    Co-authored-by: dpartika <dpartika@deloitte.com>
    Co-authored-by: Raul Esquivel <resmas.work@gmail.com>
    Co-authored-by: Pasupathi Muniyappan <pasupathi.muniyappan@kanini.com>
    Co-authored-by: resmas-tx <131498457+resmas-tx@users.noreply.github.com>
    Co-authored-by: AndrewRodddd <62724891+AndrewRodddd@users.noreply.github.com>
    Co-authored-by: radkalyan <107957324+radkalyan@users.noreply.github.com>
    Co-authored-by: samarthKharote <samarth.kharote@kanini.com>
    Co-authored-by: Shubham Mahure <shubham.mahure@kanini.com>
    Co-authored-by: rithvik-doshi <81876806+rithvik-doshi@users.noreply.github.com>
    Co-authored-by: Mogillapalli Manoj kumar <86736340+Khumar23@users.noreply.github.com>
    Co-authored-by: Jeff Vitunac <jvitunac@gmail.com>
    Co-authored-by: pvijayaraghavareddy <pvijayaraghavareddy@WORKSPA-6QV71G7.us.deloitte.com>
    Co-authored-by: Parth <parthpatel3@deloitte.com>
    Co-authored-by: KT Space <119169984+Varaham@users.noreply.github.com>
    Co-authored-by: Varaham <katchabi50@gmail.com>
    Co-authored-by: ericgonzal8 <ericgonzalez8@deloitte.com>
    Co-authored-by: Arash Afghahi <48933336+AAfghahi@users.noreply.github.com>
    Co-authored-by: Vijayaraghavareddy <pvijayaraghavareddy@deloitte.com>
    Co-authored-by: ammb-123 <ammb@deloitte.com>
