Lighthouse AI Agent: Automated Web Performance Analysis
Author: Ajinkya Kunjir
Introduction
In today's competitive digital landscape, website performance is no longer optional—it's essential. Slow-loading pages frustrate users, harm SEO rankings, and ultimately impact your bottom line. While Google Lighthouse provides excellent performance auditing capabilities, interpreting those results and determining prioritized action steps often requires expert knowledge.
Enter the Lighthouse AI Agent: a powerful tool that combines the analytical capabilities of Google Lighthouse with the intelligence of AI models like OpenAI's GPT-4 and Anthropic's Claude to deliver automated, comprehensive web performance analysis.
- What is the Lighthouse AI Agent?
- Key Features
- How It Works
- Getting Started
- Real-World Example
- Benefits Over Manual Analysis
- Conclusion
- Get Involved
- License
What is the Lighthouse AI Agent?

The Lighthouse AI Agent is an open-source Node.js tool that automates the entire process of web performance analysis:
- Runs Google Lighthouse audits across multiple URLs in a single run
- Captures detailed performance metrics and opportunities for improvement
- Processes this technical data through advanced AI models
- Generates human-readable insights and prioritized recommendations
Unlike traditional Lighthouse reports that require technical expertise to interpret, this tool translates complex performance data into clear, actionable insights that development teams and non-technical stakeholders can easily understand.
Key Features
Multiple Testing Configurations
The Lighthouse AI Agent supports different testing configurations to ensure comprehensive coverage:
- Mobile emulation with throttling: Test how your site performs on mobile devices with constrained network conditions
- Desktop/Web standard audits: Evaluate performance in standard desktop environments
- Vercel-protected URLs: Special support for bypass tokens to test preview deployments or protected routes
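To make this concrete, here is a minimal sketch of how the mobile and desktop runs might differ in the Lighthouse settings they pass to each audit. The values mirror Lighthouse's published defaults and the Vercel header is the commonly used protection-bypass header; the repository's scripts may use different values.

```js
// Illustrative Lighthouse settings only — the repo's scripts may differ.
const mobileConfig = {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'mobile',
    // Simulated throttling roughly equivalent to a slow 4G connection.
    throttling: { rttMs: 150, throughputKbps: 1638, cpuSlowdownMultiplier: 4 },
  },
};

const desktopConfig = {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'desktop',
    screenEmulation: { mobile: false, width: 1350, height: 940, deviceScaleFactor: 1, disabled: false },
    throttling: { rttMs: 40, throughputKbps: 10240, cpuSlowdownMultiplier: 1 },
  },
};

// For Vercel-protected preview URLs, the bypass token is sent as a request header.
const vercelHeaders = { 'x-vercel-protection-bypass': process.env.VERCEL_BYPASS_TOKEN };
```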
AI-Powered Analysis
What sets this tool apart is its integration with cutting-edge AI models:
- OpenAI's GPT-4: Provides concise, prioritized insights focused on the most impactful improvements
- Anthropic's Claude: Delivers comprehensive analysis with detailed reasoning and contextual recommendations
The AI doesn't just summarize the Lighthouse data—it identifies patterns across multiple pages, spots common issues, and suggests holistic improvements for your entire site.
Comprehensive Reporting
Each audit generates:
- Individual JSON and HTML reports for technical deep-dives
- Summary reports with performance scores and metrics
- AI-generated analysis documents with actionable recommendations
- Comparative data across multiple URLs to identify site-wide patterns
How It Works
Architecture Overview
The Lighthouse AI Agent consists of three main components that work together seamlessly:
URLs (CSV) → Lighthouse Audits → AI Analysis → Actionable Reports
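In code, that flow can be sketched roughly as follows. The helper names (loadUrls, auditUrl, analyzeWithAI) and file names are illustrative rather than the repository's exact code; each step is expanded in the sections below.

```js
// High-level sketch of the pipeline; helper names and output file are illustrative.
const fs = require('fs');

async function run() {
  const urls = loadUrls('urls - Sheet1.csv');      // 1. URL processing
  const results = [];
  for (const url of urls) {
    results.push(await auditUrl(url));             // 2. Lighthouse audits
  }
  const summary = await analyzeWithAI(results);    // 3. AI analysis
  fs.writeFileSync('ai-summary.md', summary);      // 4. Actionable report
}

run().catch(console.error);
```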
Let's explore each step in detail:
URL Processing
To follow along, you can clone the repository from GitHub and walk through the process outlined below.
First, the agent reads a list of URLs from a simple CSV file:
https://example.com
https://example.com/products
https://example.com/blog
This allows you to test multiple pages in a single run, which is crucial for identifying site-wide patterns and issues.
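A minimal sketch of this step, assuming a single-column CSV with no header row and csv-parse's synchronous API (the loadUrls helper is illustrative, not the repo's exact code):

```js
const fs = require('fs');
const { parse } = require('csv-parse/sync');

// Read a one-column CSV of URLs and return them as an array of strings.
function loadUrls(csvPath) {
  const raw = fs.readFileSync(csvPath, 'utf8');
  return parse(raw, { skip_empty_lines: true, trim: true }).map((row) => row[0]);
}

const urls = loadUrls('urls - Sheet1.csv');
console.log(`Auditing ${urls.length} URLs`);
```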
Lighthouse Auditing
For each URL, the agent:
- Launches a headless Chrome instance via Puppeteer
- Configures the appropriate testing environment (mobile/desktop)
- Applies any necessary authentication headers
- Runs a complete Lighthouse audit
- Extracts key performance metrics and improvement opportunities
The raw Lighthouse data is saved as both JSON (for programmatic access) and HTML (for visual inspection) reports.
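Here is a simplified, self-contained sketch of the per-URL audit. It uses chrome-launcher directly rather than the repository's Puppeteer setup, and the report file names are illustrative; the shape of the returned object matches the prompt-building code shown in the next section.

```js
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

// Audit one URL, save JSON/HTML reports, and return { url, scores, opportunities }.
async function auditUrl(url, extraHeaders = {}, config = undefined) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(
      url,
      { port: chrome.port, output: ['json', 'html'], extraHeaders },
      config
    );
    const [jsonReport, htmlReport] = result.report;
    fs.writeFileSync('report.json', jsonReport);   // programmatic access
    fs.writeFileSync('report.html', htmlReport);   // visual inspection

    const lhr = result.lhr;
    return {
      url,
      // Category scores (performance, accessibility, best-practices, seo, ...).
      scores: Object.fromEntries(
        Object.entries(lhr.categories).map(([id, cat]) => [id, cat.score])
      ),
      // Audits flagged by Lighthouse as improvement opportunities.
      opportunities: Object.values(lhr.audits)
        .filter((a) => a.details && a.details.type === 'opportunity')
        .map((a) => ({ title: a.title, displayValue: a.displayValue })),
    };
  } finally {
    await chrome.kill();
  }
}
```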
AI Analysis
This is where the magic happens. The agent compiles the Lighthouse results and sends them to either OpenAI's GPT-4 or Anthropic's Claude with carefully engineered prompts:
```js
// Example of the AI prompt structure
const prompt = `
You are a web performance expert. Below is a batch of Lighthouse audit results for multiple URLs.
For each URL:
- Briefly summarize the strengths based on scores.
- Mention top 2–3 actionable suggestions from opportunities.
- Note any red flags if present.
At the end, provide:
- A short overall assessment of this batch.
- General performance optimization advice applicable to most pages.
Audit Data:
${results
  .map(
    (r) => `
URL: ${r.url}
Scores: ${JSON.stringify(r.scores)}
Top Opportunities: ${
  r.opportunities
    ?.slice(0, 3)
    .map((o) => `- ${o.title} (${o.displayValue || 'N/A'})`)
    .join('\n') || 'N/A'
}
`
  )
  .join('\n')}
`;
```
The AI processes this data and returns structured insights that highlight patterns, prioritize improvements, and provide context that would typically require a human expert.
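For completeness, here is a hedged sketch of the call that sends such a prompt to a model. It assumes the v4 openai Node SDK and wraps the template above in a hypothetical buildPrompt helper, so the repository's actual request code may look different.

```js
const OpenAI = require('openai');

// Send the compiled Lighthouse results to GPT-4 and return the written analysis.
async function analyzeWithAI(results) {
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const prompt = buildPrompt(results); // hypothetical wrapper around the template shown above

  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
  });
  return completion.choices[0].message.content;
}
```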
Getting Started
Prerequisites
To use the Lighthouse AI Agent, you'll need:
- Node.js (v14.x or later)
- Google Chrome installed
- API keys for either OpenAI or Anthropic (or both)
Installation
Clone the repository:
git clone https://github.com/Hiddensound/LighthouseAI_Agent
cd LighthouseAI_Agent
Install dependencies:
npm install puppeteer lighthouse chrome-launcher csv-parse dotenv openai @anthropic-ai/sdk
Create a .env file with your API keys:

```
# Choose one or both AI providers
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key

# For Vercel protected URLs (if needed)
VERCEL_BYPASS_TOKEN=your_vercel_bypass_token
```
Create your URL list in urls - Sheet1.csv (one URL per line, as shown in the example above).
Running an Audit
The repository includes three specialized scripts for different use cases:
For mobile audits with OpenAI GPT-4:
node mobiletestruns_lighthouse_ChatGPT_AI.js
For desktop audits with OpenAI GPT-4:
node Desktop_runs_lighthouse_ChatGPT_AI.js
For Vercel-protected URLs with Anthropic Claude:
node Desktopruns_Vercelbypass_AnthropicAI.js
Real-World Example
To demonstrate the power of the Lighthouse AI Agent, let's look at a sample analysis generated for an e-commerce website:
===== AI Summary =====
Overall Performance Assessment:
The analyzed URLs show moderate performance with an average Performance score of 68.3. While Accessibility (89.2) and SEO (91.5) scores are strong, there are significant opportunities to improve loading speed and user experience.
Common Issues:
1. Unused JavaScript - Most pages have 300-500KB of unused JS that could be reduced
2. Image optimization - Several pages have uncompressed or improperly sized images
3. Render-blocking resources - CSS and font loading is delaying initial render
Prioritized Recommendations:
1. Implement code splitting and defer non-critical JavaScript
2. Optimize and properly size all images, consider next-gen formats
3. Extract critical CSS and inline it for above-the-fold content
4. Implement a CDN if not already using one
URL-Specific Notes:
- Homepage (/): Largest Contentful Paint (3.8s) exceeds recommendation. Critical rendering path optimization needed.
- Product page (/products/featured): Has excessive DOM size (2,400 elements) causing interactivity issues.
- Checkout page (/checkout): Shows best overall performance but has render-blocking font resources.
This summary provides clear, actionable insights that both developers and stakeholders can understand without digging through technical reports.
Benefits Over Manual Analysis
The Lighthouse AI Agent offers several advantages over manual performance analysis:
- Time efficiency: Analyze dozens of pages in minutes instead of hours
- Consistency: Apply the same rigorous analysis to every page
- Pattern recognition: Identify site-wide issues that might be missed in individual page analysis
- Accessibility: Translate technical metrics into language everyone can understand
- Prioritization: Focus on high-impact changes first with AI-powered recommendations
Conclusion
The Lighthouse AI Agent represents the next evolution in web performance analysis—combining Google's industry-leading audit tool with the intelligence of modern AI systems. By automating both the collection and interpretation of performance data, it empowers teams to:
- Identify performance issues more quickly
- Prioritize the most impactful optimizations
- Communicate technical needs to non-technical stakeholders
- Track performance improvements over time
As web performance becomes increasingly crucial for user experience and SEO, tools like this will be essential for staying competitive in a fast-paced digital landscape.
Get Involved
This tool is open-source and available on GitHub. Feel free to contribute, suggest improvements, or adapt it to your specific needs. Performance optimization is a community effort, and together we can build faster, more responsive web experiences for everyone.