Hello Swaroop Gowda D,
Thank you for reaching out to the Microsoft Q&A forum.
You’ve hit on a key point: the 30 GB daily cap you set on your Application Insights (AI) resource only throttles what Application Insights itself will ingest. Once that telemetry lands in your Log Analytics workspace, it’s subject to the workspace’s own ingestion (and billing) settings. Any other service or diagnostic setting pointing at the same workspace adds to your daily ingestion total and drives up costs.
Here’s a quick summary and some cost-saving recommendations:
1. Understand that the two caps are separate
- Application Insights cap → limits only the AI SDK/agent data
- Log Analytics cap → limits everything landing in that workspace (AI + other Azure diagnostics + custom logs)
2. Identify what’s consuming your ingestion budget
- In the Azure portal, go into your Log Analytics workspace → Usage + estimated costs → Daily usage → see which tables/solutions spike the most
- Under Settings → Solutions, review each solution’s Usage + estimated costs and consider disabling or capping noisy ones
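You can also check this from the query side. A query over the standard `Usage` table like the one below (a sketch; adjust the time window to taste) surfaces which tables are billing the most:

```kusto
// Billable ingestion per table over the last 30 days, largest first.
// Quantity in the Usage table is reported in MB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024 by DataType
| sort by IngestedGB desc
```

Run this in the workspace’s Logs blade; the top few rows are usually where the cost-cutting effort pays off.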
3. Cost-optimization best practices
• Sampling in Application Insights
– Enable adaptive or fixed sampling in your SDK to drop a percentage of low-value telemetry before it’s sent
– More on sampling: https://dori-uw-1.kuma-moon.com/azure/azure-monitor/app/opentelemetry-sampling
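Conceptually, fixed-rate sampling makes a deterministic keep/drop decision per operation, so all telemetry sharing one operation id is kept or dropped together and sampled traces stay complete. A minimal illustrative sketch of that idea (not the actual SDK implementation; the hashing scheme here is hypothetical):

```python
import hashlib

def should_sample(operation_id: str, sampling_percentage: float) -> bool:
    """Deterministically decide whether to keep telemetry for an operation.

    Hashing the operation id means every telemetry item sharing that id
    gets the same keep/drop decision, so a sampled trace is never
    half-missing.
    """
    digest = hashlib.sha256(operation_id.encode("utf-8")).digest()
    # Map the first 8 bytes of the hash onto a bucket in [0, 100).
    bucket = int.from_bytes(digest[:8], "big") % 10000 / 100.0
    return bucket < sampling_percentage

# Roughly sampling_percentage% of distinct operations are kept:
kept = sum(should_sample(f"op-{i}", 25.0) for i in range(10_000))
```

In the real SDKs you just set the sampling percentage (or enable adaptive sampling) and the agent handles this for you; the point is that dropping telemetry client-side means it never counts against either cap.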
• Filter at the source with Diagnostic Settings or Data Collection Rules (DCRs)
– Only send the categories/levels of logs you truly need (e.g. drop “Verbose” or health-check chatter)
– DCR docs: https://dori-uw-1.kuma-moon.com/azure/azure-monitor/essentials/data-collection-rule-overview
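As a concrete example, a workspace transformation DCR can drop low-severity rows at ingestion time with a short KQL transform. An illustrative sketch against the AppTraces table (adjust the table and severity threshold to your setup; in AppTraces, 0 = Verbose, 1 = Information, 2 = Warning, 3 = Error):

```kusto
// Goes in the DCR's transformKql property: keep only warnings and
// errors, discarding verbose and informational rows before they bill.
source
| where SeverityLevel >= 2
```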
• Per-solution daily quotas in Log Analytics
– Under Workspace → Settings → Solutions → Usage + estimated costs → Edit daily quota
• Table-level retention & tiering
– In Workspace → Usage + estimated costs → Data retention, you can lower the workspace-wide default retention period
– For individual tables, go to Workspace → Tables → select a table → Manage table → Data retention settings
– Consider moving infrequently-queried tables to the Basic Logs tier
• Archive or off-load older data
– Export cooled data to a Storage Account or use Azure Data Explorer externaldata queries
• Set the workspace daily cap if you haven’t already
– Workspace → Usage + estimated costs → Daily cap → On → set your GB/day limit and alert rules
4. Ongoing monitoring & alerts
- Create alerts on workspace ingestion anomalies or “cap reached” events so you get notified immediately
- Regularly review Usage + estimated costs to catch new spikes early
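For the cap-reached alert, the `_LogOperation` table records ingestion health events, and a log alert rule on a query like this (a sketch based on the documented operation text; verify the exact wording in your workspace) fires when the cap halts collection:

```kusto
// Alert when the workspace daily cap stops data collection.
_LogOperation
| where Category == "Ingestion"
| where Operation has "Data collection stopped"
```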
Hope this gives you a roadmap to tame those costs!
References:
- Control data volume & investigate caps (Log Analytics)
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/platform/manage-cost-storage#daily-data-volume
- Sampling in Application Insights
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/app/opentelemetry-sampling
- Data Collection Rules in Azure Monitor
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/essentials/data-collection-rule-overview
- Configure table-level retention & archive
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/logs/data-retention-configure
- Moving tables to Basic Logs tier
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/logs/data-platform-logs#table-plans
- Filter logs via Diagnostic Settings
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/essentials/diagnostic-settings
- Export/Archive old logs (Storage Account, externaldata)
https://dori-uw-1.kuma-moon.com/azure/azure-monitor/logs/archive-and-delete-logs