Implement learning in an AI model

Nidhi Priya 571 Reputation points
2025-12-11T12:35:43.4233333+00:00

Hello experts!

We’ve built an AI-powered time tracking application that automatically logs time intervals for Teams meetings. Our app uses the Microsoft Graph API to fetch meetings from the Teams calendar, and we pull task data from Intervals.

We’ve deployed GPT‑5 in Azure; the model receives the raw meeting data and the corresponding task lists and performs semantic and contextual matching to align meeting names with task names. The whole process is fully automated.

However, sometimes the LLM fails to find a confident or accurate match — in such cases, the meeting is sent into a review stage, where the user can manually select the correct task. In other scenarios, it might map a meeting to the wrong task altogether, causing time entries to post incorrectly.

We want to enhance this system by adding a memory layer — essentially, enabling learning from user feedback. When a meeting goes into review and the user manually corrects the task mapping, the LLM should “remember” that correction. The next time a similar meeting appears, the model should leverage that past feedback to produce a better mapping automatically.

Has anyone implemented a similar memory or adaptive learning mechanism for LLM-based apps on Azure or with OpenAI APIs? Would it make more sense to use a vector database, fine-tuning, or some form of dynamic retrieval memory?

Any insights or examples would be greatly appreciated!

Azure AI Language
An Azure service that provides natural language capabilities including sentiment analysis, entity extraction, and automated question answering.

1 answer

  1. Q&A Assist
    2025-12-11T12:35:57.9866667+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    To implement a memory layer for your AI-powered time tracking application, you can consider a few approaches:

    1. Vector Database: Store embeddings of past user corrections in a vector database. When a new meeting goes through matching, retrieve the most similar past corrections based on the vector representations of the meeting data, so the model can leverage historical user feedback to improve future mappings (see the first sketch after this list).
    2. Fine-Tuning: Fine-tune your LLM on a dataset built from past corrections. Retraining on specific examples where users fixed a mapping lets the model learn from those instances (the second sketch below shows one way to export such a dataset).
    3. Dynamic Retrieval Memory: At mapping time, retrieve recent corrections and user feedback and pass them to the LLM as context, so they inform its decision when mapping meetings to tasks (the second sketch below injects retrieved corrections into the prompt).
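
    A minimal sketch of the vector-store idea behind points 1 and 3, assuming the `openai` Python SDK against an Azure OpenAI embedding deployment. The deployment name, the `TaskCorrection` record, and the in-memory list are illustrative only; in production you would more likely persist corrections in Azure AI Search or another vector database:

```python
# Store user corrections as embeddings and retrieve similar past corrections.
# Assumes an Azure OpenAI embedding deployment; all names here are illustrative.
import os
from dataclasses import dataclass

import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

EMBEDDING_DEPLOYMENT = "text-embedding-3-small"  # your embedding deployment name


@dataclass
class TaskCorrection:
    meeting_title: str
    corrected_task: str
    embedding: np.ndarray


# In production this list would be a persistent vector store (e.g. Azure AI Search).
corrections: list[TaskCorrection] = []


def embed(text: str) -> np.ndarray:
    """Embed a meeting title with the Azure OpenAI embeddings API."""
    response = client.embeddings.create(model=EMBEDDING_DEPLOYMENT, input=text)
    return np.array(response.data[0].embedding)


def record_correction(meeting_title: str, corrected_task: str) -> None:
    """Call this from the review stage whenever the user fixes a mapping."""
    corrections.append(
        TaskCorrection(meeting_title, corrected_task, embed(meeting_title))
    )


def similar_corrections(meeting_title: str, k: int = 3) -> list[TaskCorrection]:
    """Return the k past corrections whose meetings are most similar to this one."""
    if not corrections:
        return []
    query = embed(meeting_title)
    scored = sorted(
        corrections,
        key=lambda c: float(
            np.dot(query, c.embedding)
            / (np.linalg.norm(query) * np.linalg.norm(c.embedding))
        ),
        reverse=True,
    )
    return scored[:k]
```

    In the review UI, `record_correction` would be called whenever the user overrides the model's choice, and `similar_corrections` before each new automatic mapping.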

    Each of these methods has its advantages, and the best choice may depend on the specific requirements of your application, such as the volume of data, the frequency of corrections, and the desired responsiveness of the system. Combining these approaches could also yield better results, allowing for both immediate feedback incorporation and long-term learning.
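
    Continuing the sketch above (it reuses `client`, `corrections`, and `similar_corrections`), the retrieved corrections can be injected into the matching prompt as few-shot context, and the same correction log can later be exported as chat-format JSONL for an Azure OpenAI fine-tuning job. The deployment name, prompt wording, and file layout below are assumptions, not a fixed recipe:

```python
# Continues the previous sketch: reuses client, corrections, similar_corrections.
import json

CHAT_DEPLOYMENT = "gpt-5"  # your chat model deployment name


def match_meeting_to_task(meeting_title: str, task_names: list[str]) -> str:
    """Ask the model for a mapping, grounded in similar past corrections."""
    examples = similar_corrections(meeting_title)
    feedback_lines = "\n".join(
        f'- Meeting "{c.meeting_title}" was mapped by the user to task "{c.corrected_task}"'
        for c in examples
    ) or "- (no past corrections yet)"

    response = client.chat.completions.create(
        model=CHAT_DEPLOYMENT,
        messages=[
            {
                "role": "system",
                "content": "You map Teams meeting titles to task names. "
                "Prefer mappings consistent with the user's past corrections.",
            },
            {
                "role": "user",
                "content": f"Past corrections:\n{feedback_lines}\n\n"
                f"Meeting: {meeting_title}\n"
                f"Tasks: {', '.join(task_names)}\n"
                "Reply with the single best matching task name.",
            },
        ],
    )
    return response.choices[0].message.content.strip()


def export_finetuning_data(path: str) -> None:
    """Dump accumulated corrections as chat-format JSONL for a fine-tuning job."""
    with open(path, "w", encoding="utf-8") as f:
        for c in corrections:
            example = {
                "messages": [
                    {"role": "user", "content": f"Meeting: {c.meeting_title}"},
                    {"role": "assistant", "content": c.corrected_task},
                ]
            }
            f.write(json.dumps(example) + "\n")
```

    The retrieval path gives immediate adaptation after every correction, while the exported JSONL is only worth submitting as a fine-tuning job once a reasonable volume of corrections has accumulated.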


