
High Log Analytics cost even after setting Application Insights daily cap (30 GB)

Swaroop Gowda D 25 Reputation points
2026-03-04T07:31:17.11+00:00

Hello,

I have configured Application Insights for several Azure resources, and it is connected to a Log Analytics Workspace for centralized logging.

To control ingestion, I configured the daily cap for Application Insights to 30 GB. However, even after setting this limit, we are still seeing very high costs in the Azure billing related to Log Analytics/Application Insights.

I would like to understand the following:

  1. Does the Application Insights daily cap limit only telemetry ingestion into Application Insights, while Log Analytics ingestion can still generate additional charges?

  2. Could other Azure resources sending logs to the same Log Analytics workspace be contributing to the high cost?

  3. Are there recommended best practices to reduce Log Analytics ingestion cost when using Application Insights, such as:

     - Sampling configuration
     - Table-level retention changes
     - Moving some tables to **Basic Logs**
     - Filtering telemetry before ingestion

Currently:

- Application Insights is connected to a Log Analytics workspace
- Daily cap: 30 GB
- Retention: default configuration

Could you please suggest best practices or recommended configurations to optimize the logging cost in this scenario?

Thank you.

Azure Analysis Services

2 answers

  1. Siva shunmugam Nadessin 7,915 Reputation points Microsoft External Staff Moderator
    2026-03-12T08:51:43.8333333+00:00

    Hello Swaroop Gowda D,

    Thank you for reaching out to the Microsoft Q&A forum.

    You’ve hit on a key point: the 30 GB daily cap you set on your Application Insights resource only throttles what Application Insights itself will ingest. Once that telemetry lands in your Log Analytics workspace, it’s subject to the workspace’s own ingestion (and billing) settings. Any other service or diagnostic setting pointing at that same workspace will add to your daily ingest total and drive up costs.

    Here’s a quick summary and some cost-saving recommendations:

    1. Understand that the two caps are separate

    • Application Insights cap → limits only the AI SDK/agent data
    • Log Analytics cap → limits everything landing in that workspace (AI + other Azure diagnostics + custom logs)
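The interaction between the two caps can be sketched in a few lines. This is an illustration, not an Azure API; the numbers are made-up examples, not your actual settings:

```python
def effective_ai_cap_gb(ai_cap_gb, workspace_cap_gb=None):
    """For workspace-based Application Insights, the effective daily limit on
    AI telemetry is the lower of the two caps. A workspace cap of None means
    the workspace has no daily cap of its own."""
    if workspace_cap_gb is None:
        return ai_cap_gb  # only the AI cap bounds AI telemetry
    return min(ai_cap_gb, workspace_cap_gb)

# Note the asymmetry: the AI cap does NOT bound other sources feeding the
# same workspace. With a 30 GB AI cap and no workspace cap, VM/diagnostic
# logs into that workspace are effectively unbounded.
print(effective_ai_cap_gb(30))      # 30 (no workspace cap)
print(effective_ai_cap_gb(30, 20))  # 20 (workspace cap is tighter)
```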

    2. Identify what’s chewing up your budget

    • In the Azure portal, go into your Log Analytics workspace → Usage + estimated costs → Daily usage → see which tables/solutions spike the most
    • Under Settings → Solutions, review each solution’s Usage + estimated costs and consider disabling or capping noisy ones
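To put numbers behind that review, a standard query against the workspace's Usage table (where Quantity is reported in MB) shows per-table billable volume; paste the KQL below into the workspace's Logs blade. The Python around it just demonstrates the same aggregation on made-up sample rows — the table names and volumes are illustrative, not from your workspace:

```python
# KQL to run in Log Analytics: per-table billable volume over the last 7 days.
USAGE_BY_TABLE_KQL = """
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize TotalGB = sum(Quantity) / 1024.0 by DataType
| sort by TotalGB desc
"""

def total_gb_by_table(rows):
    """Aggregate (table, quantity_mb) rows into GB per table, largest first.
    Mirrors the summarize/sort steps of the KQL above."""
    totals = {}
    for table, quantity_mb in rows:
        totals[table] = totals.get(table, 0.0) + quantity_mb / 1024.0
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative sample: AppTraces would be the noisy table to tackle first.
sample = [("AppTraces", 150_000), ("AppRequests", 40_000), ("AppTraces", 60_000)]
print(total_gb_by_table(sample))
```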

    3. Cost-optimization best practices

    • Sampling in Application Insights

    – Enable adaptive or fixed sampling in your SDK to drop a percentage of low-value telemetry before it’s sent

    – More on sampling: https://dori-uw-1.kuma-moon.com/azure/azure-monitor/app/opentelemetry-sampling
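To make the sampling idea concrete: fixed-rate sampling decides deterministically from the operation ID, so all telemetry for one operation is kept or dropped together and distributed traces stay intact. The sketch below illustrates that principle only; it is not the SDK's actual algorithm:

```python
import zlib

def keep_telemetry(operation_id: str, sampling_percentage: float) -> bool:
    """Deterministic sampling decision: hash the operation ID into [0, 100)
    and keep the item if it falls under the sampling percentage. Because the
    decision depends only on the operation ID, related telemetry (requests,
    dependencies, traces of one operation) is kept or dropped as a unit."""
    bucket = zlib.crc32(operation_id.encode()) % 10_000 / 100.0
    return bucket < sampling_percentage

# Over many operations, roughly the configured percentage survives.
ops = [f"op-{i}" for i in range(10_000)]
kept = sum(keep_telemetry(op, 20.0) for op in ops)
print(f"kept {kept} of {len(ops)} operations (~20% expected)")
```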

    • Filter at the source with Diagnostic Settings or Data Collection Rules (DCRs)

    – Only send the categories/levels of logs you truly need (e.g. drop “Verbose” or health-check chatter)

    – DCR docs: https://dori-uw-1.kuma-moon.com/azure/azure-monitor/essentials/data-collection-rule-overview
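In a DCR, that filtering is expressed as a KQL transformation (something like `source | where SeverityLevel > 1`). The Python sketch below shows the equivalent predicate logic; the record shape and field names are illustrative, not a real DCR schema:

```python
# Hypothetical record shape. In a real DCR this would be a transformKql
# such as: source | where SeverityLevel > 1 and Url !has "/health"
def should_ingest(record: dict) -> bool:
    """Drop Verbose-level traces (SeverityLevel 0) and health-check
    requests before they reach the workspace."""
    if record.get("SeverityLevel", 2) < 1:   # 0 = Verbose
        return False
    if "/health" in record.get("Url", ""):   # health-probe chatter
        return False
    return True

records = [
    {"SeverityLevel": 0, "Url": "/api/orders"},  # Verbose -> dropped
    {"SeverityLevel": 2, "Url": "/health"},      # probe   -> dropped
    {"SeverityLevel": 3, "Url": "/api/orders"},  # error   -> kept
]
print([should_ingest(r) for r in records])  # [False, False, True]
```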

    • Per-solution daily quotas in Log Analytics

    – Under Workspace → Settings → Solutions → Usage + estimated costs → Edit daily quota

    • Table-level retention & tiering

    – In Workspace → Usage + estimated costs → Data retention, you can lower default days

    – For individual tables, go to Workspace → Tables → select a table → Manage table → Data retention settings

    – Consider moving infrequently-queried tables to the Basic Logs tier
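The Basic Logs point is worth quantifying: the Basic plan trades a much lower per-GB ingestion price for reduced query capability (and per-query charges). The arithmetic below is a sketch only — the per-GB prices are placeholders, not quotes; check the Azure Monitor pricing page for your region's actual rates:

```python
def monthly_ingestion_cost(gb_per_day: float, price_per_gb: float,
                           days: int = 30) -> float:
    """Simple ingestion-cost estimate: daily volume x per-GB price x days."""
    return gb_per_day * price_per_gb * days

ANALYTICS_PRICE = 2.50  # placeholder $/GB, not a real quote
BASIC_PRICE = 0.50      # placeholder $/GB, not a real quote

chatty_table_gb_per_day = 10  # illustrative volume for one verbose table
saving = (monthly_ingestion_cost(chatty_table_gb_per_day, ANALYTICS_PRICE)
          - monthly_ingestion_cost(chatty_table_gb_per_day, BASIC_PRICE))
print(f"estimated monthly saving from moving the table to Basic: ${saving:.2f}")
```

Run the numbers for your own heavy tables before switching plans, since frequently queried tables may cost more on Basic once query charges are included.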

    • Archive or off-load older data

    – Export cooled data to a Storage Account or use Azure Data Explorer externaldata queries

    • Set the workspace daily cap if you haven’t already

    – Workspace → Usage + estimated costs → Daily cap → On → set your GB/day limit and alert rules

    4. Ongoing monitoring & alerts

    • Create alerts on workspace ingestion anomalies or “cap reached” events so you get notified immediately
    • Regularly review Usage + estimated costs to catch new spikes early

    Hope this gives you a roadmap to tame those costs!

    References:

    1. Control data volume & investigate caps (Log Analytics)

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/platform/manage-cost-storage#daily-data-volume

    2. Sampling in Application Insights

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/app/opentelemetry-sampling

    3. Data Collection Rules in Azure Monitor

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/essentials/data-collection-rule-overview

    4. Configure table-level retention & archive

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/logs/data-retention-configure

    5. Moving tables to Basic Logs tier

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/logs/data-platform-logs#table-plans

    6. Filter logs via Diagnostic Settings

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/essentials/diagnostic-settings

    7. Export/Archive old logs (Storage Account, externaldata)

    https://dori-uw-1.kuma-moon.com/azure/azure-monitor/logs/archive-and-delete-logs

     


  2. Q&A Assist
    2026-03-04T07:31:39.27+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    Yes. The Application Insights daily cap and the Log Analytics workspace ingestion are controlled separately, and other data sources can still drive high Log Analytics cost even if the Application Insights daily cap is set to 30 GB.

    Key points for this scenario:

    1. Relationship between Application Insights daily cap and Log Analytics costs
    • Application Insights and Log Analytics now have separate daily cap implementations.
      • For workspace-based Application Insights, the effective cap is the minimum of:
        • The Application Insights daily cap, and
        • The Log Analytics workspace daily cap.
      • For classic Application Insights (not workspace-based), only the Application Insights daily cap applies because data does not reside in a workspace.
    • If only the Application Insights daily cap is configured and the Log Analytics workspace daily cap is not, then:
      • Application Insights telemetry is limited by its own cap.
      • Other Azure resources sending data to the same Log Analytics workspace (VMs, PaaS services, diagnostics, Activity logs, etc.) can still ingest unlimited data (subject to workspace limits) and generate additional Log Analytics charges.
    2. Other resources contributing to high Log Analytics cost
    • Any resource configured to send logs/metrics to the same Log Analytics workspace can contribute to ingestion volume and cost, independently of the Application Insights daily cap.
    • Also, if Application Insights is using the global ingestion endpoint with only an instrumentation key (ikey), the Application Insights daily cap might not be effective across regions, but the Log Analytics daily cap still applies.
    3. Best practices to reduce Log Analytics ingestion cost when using Application Insights

    Use a combination of caps, sampling, filtering, and retention tuning.

    A. Configure both Application Insights and Log Analytics daily caps

    • Set the Application Insights daily cap on each Application Insights resource:
      • Azure portal → Application Insights resource → Configure → Usage and estimated costs → Daily cap → set Daily volume cap (GB/day).
    • Set the Log Analytics workspace daily cap:
      • Azure portal → Log Analytics workspaces → select workspace → Settings → Usage and estimated costs → Daily cap → turn On and set Daily volume cap (GB/day).
    • For workspace-based Application Insights, the lower of the two caps is the effective limit for Application Insights data. The workspace cap also limits all other data sources to that workspace.

    B. Use sampling to reduce Application Insights ingestion

    • Sampling is the primary mechanism to tune Application Insights data volume to the desired level and should be used before relying on caps.
    • Recommendation from the documentation:
      • Use sampling to reduce traffic and storage costs while preserving statistically correct analysis.
      • Use the daily cap as a safety net in case the application suddenly sends much higher volumes of telemetry.
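One way to pick a starting sampling percentage is to work backward from the cap, so the cap genuinely stays a safety net rather than a daily occurrence. A sketch with made-up volumes:

```python
def sampling_percentage_for_cap(current_gb_per_day: float,
                                cap_gb: float) -> float:
    """Sampling percentage that would bring the current telemetry volume
    under the daily cap. Returns 100 if no sampling is needed."""
    if current_gb_per_day <= cap_gb:
        return 100.0
    return round(100.0 * cap_gb / current_gb_per_day, 1)

# Illustrative: ingesting 75 GB/day against a 30 GB cap -> sample at ~40%.
print(sampling_percentage_for_cap(75, 30))  # 40.0
print(sampling_percentage_for_cap(20, 30))  # 100.0
```

In practice you would set the sampling a bit below this figure to leave headroom for traffic spikes.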

    C. Analyze which tables and resources are driving ingestion

    • In the Log Analytics workspace, use queries to identify high-volume tables and sources, for example:
      • Compare record counts per table over a period (for example, 7 days) to find the noisiest tables.
      • Analyze by consumed bytes to see which tables cost the most.
    • Use the Log Analytics Workspace Insights → Usage workbook to see:
      • Percentage of ingestion per table.
      • Per-resource ingestion into the workspace.
    • After identifying the heavy tables (for example, traces, requests, dependencies, custom logs), adjust logging levels or sampling for those sources.

    D. Use data collection rules (DCR) to filter before ingestion

    • Data collection rules can reduce the volume of telemetry sent from Application Insights to the Log Analytics workspace by filtering or transforming data.
    • If a DCR is in place, the workspace may receive less data than the workspace daily cap, which helps control cost.
    • DCRs can also be used to filter other data sources (for example, platform logs) before they reach the workspace.

    E. Retention and log types (Basic vs Analytics)

    • Retention beyond the included period in Log Analytics is charged per GB-month. Reducing retention for high-volume tables can lower cost.
    • Moving less frequently queried or long-term data to cheaper log types (such as Basic Logs) and/or exporting to external storage (for example, Blob Storage) is a common strategy, though specific Basic Logs configuration is not detailed in the provided context.

    F. Cost-optimization strategy summary

    • Set a daily cap on both Application Insights and the Log Analytics workspace to bound maximum daily spend.
    • Set alerts when the daily cap is reached so that teams know data is being dropped and can adjust telemetry.
    • Use sampling and/or DCR-based filtering to proactively reduce ingestion volume.
    • Review Azure Monitor pricing and use the pricing calculator to model costs.
    • Continuously monitor usage (Cost Management + Log Analytics usage workbooks) and refine sampling, filtering, and retention.

    In the described setup (Application Insights → Log Analytics, 30 GB AI daily cap, default retention), high costs are likely due to:

    • Other resources sending large volumes of data to the same workspace, and/or
    • No daily cap configured on the Log Analytics workspace, allowing ingestion beyond what the Application Insights cap limits.

    Configuring a Log Analytics daily cap, enabling sampling, and using DCRs and retention tuning are the recommended next steps to optimize cost.


