MongoDB Database Size Analysis Report: Preventing Oversized Database
Project: WikiLoop DoubleCheck
Database: MongoDB Atlas (heroku_00w6ld63)
Analysis Date: January 2025
Report Version: 1.0
Executive Summary
This analysis investigates why our MongoDB Atlas database has reached 5 GB, breaking down storage usage across all collections and system components. The main database holds 3.84 GB of application data (2.39 GB on disk after compression), and the replica set's oplog contributes another 2.87 GB; together these account for the roughly 5 GB reported by Atlas.
Key Findings:
- Total Atlas Storage: 5.27 GB
- Application Data: 3.84 GB (72.8%)
- System Components (Oplog): 2.87 GB (74.7% of application data size)
- Top 2 Collections: Account for 93.3% of application data
- Primary Concern: WatchCollection_WIKITRUST and FeedRevision are consuming excessive storage
Database Architecture Overview
```mermaid
graph TD
A["MongoDB Atlas<br/>Reported: 5 GB"] --> B["Actual Storage Analysis<br/>Total: 5.27 GB"]
B --> C["heroku_00w6ld63<br/>Main Database: 2.39 GB"]
B --> D["local Database<br/>oplog.rs: 2.87 GB"]
B --> E["Other Databases<br/>0.01 GB"]
C --> F["Data Size: 3.84 GB<br/>Storage Size: 1.22 GB<br/>Index Size: 1.20 GB"]
F --> G["WatchCollection_WIKITRUST<br/>2.51 GB (63.8%)"]
F --> H["FeedRevision<br/>1.16 GB (29.5%)"]
F --> I["Other Collections<br/>0.17 GB (6.7%)"]
D --> J["Replica Set Operation Log<br/>Used for data synchronization<br/>and fault recovery"]
style A fill:#ff9999
style B fill:#99ccff
style D fill:#ffcc99
style G fill:#ffff99
style H fill:#ccffcc
```
Detailed Storage Analysis
1. Atlas Storage Distribution
```mermaid
pie title Atlas MongoDB Storage Distribution (Total: 5.27 GB)
"heroku_00w6ld63<br/>Main Database" : 2.387
"local Database<br/>(oplog.rs)" : 2.873
"wikiloop-doublecheck-prod-db<br/>Other Database" : 0.008
"System Overhead" : 0.001
2. Main Database Collection Breakdown
```mermaid
pie title heroku_00w6ld63 Main Database Internal Distribution (3.84 GB data, values in MB)
"WatchCollection_WIKITRUST" : 2510
"FeedRevision" : 1160
"WatchCollection_LASTBAD" : 123
"Interaction" : 81
"RevisionInfo" : 25
"Other Collections" : 41
3. Top 10 Collections by Size
| Rank | Collection Name | Data Size | Storage Size | Document Count | Avg Doc Size | Percentage |
|---|---|---|---|---|---|---|
| 1 | WatchCollection_WIKITRUST | 2,510.34 MB | 750.60 MB | 17,824,275 | 147.68 bytes | 63.84% |
| 2 | FeedRevision | 1,159.74 MB | 304.29 MB | 4,989,111 | 243.75 bytes | 29.49% |
| 3 | WatchCollection_LASTBAD | 122.68 MB | 75.10 MB | 41,024 | 3,135.63 bytes | 3.12% |
| 4 | Interaction | 80.73 MB | 26.41 MB | 321,173 | 263.58 bytes | 2.05% |
| 5 | RevisionInfo | 25.02 MB | 13.85 MB | 81,232 | 322.95 bytes | 0.64% |
| 6 | Sockets | 17.48 MB | 11.90 MB | 259,892 | 70.54 bytes | 0.44% |
| 7 | Sessions | 8.88 MB | 59.79 MB | 52,590 | 177.07 bytes | 0.23% |
| 8 | WatchCollection_US2020 | 4.39 MB | 2.73 MB | 54 | 85,189.94 bytes | 0.11% |
| 9 | FeedPage | 2.21 MB | 0.87 MB | 12,627 | 183.28 bytes | 0.06% |
| 10 | WikiActions | 0.47 MB | 0.20 MB | 2,451 | 202.02 bytes | 0.01% |
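For reference, the table above can be regenerated directly from the cluster. The sketch below uses the Node.js MongoDB driver and the $collStats aggregation stage; the MONGODB_URI environment variable is an assumption, the database name comes from this report, and the script is illustrative rather than part of the codebase.

```javascript
// Sketch: regenerate the per-collection size table directly from the cluster.
// Assumes MONGODB_URI points at the Atlas cluster.
const { MongoClient } = require('mongodb');

async function reportCollectionSizes() {
  const client = new MongoClient(process.env.MONGODB_URI);
  await client.connect();
  try {
    const db = client.db('heroku_00w6ld63');
    const collections = await db.listCollections({ type: 'collection' }).toArray();

    const rows = [];
    for (const { name } of collections) {
      // $collStats exposes the same numbers as collection stats: data size, disk size, count.
      const [s] = await db.collection(name)
        .aggregate([{ $collStats: { storageStats: {} } }])
        .toArray();
      rows.push({
        name,
        dataMB: s.storageStats.size / 1024 / 1024,
        storageMB: s.storageStats.storageSize / 1024 / 1024,
        count: s.storageStats.count,
      });
    }

    rows.sort((a, b) => b.dataMB - a.dataMB);
    for (const r of rows.slice(0, 10)) {
      console.log(`${r.name}: ${r.dataMB.toFixed(2)} MB data, ${r.storageMB.toFixed(2)} MB on disk, ${r.count} docs`);
    }
  } finally {
    await client.close();
  }
}

reportCollectionSizes().catch(console.error);
```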
Root Cause Analysis
Why Atlas Reports 5GB vs Our 3.84GB Analysis
The discrepancy between Atlas-reported 5GB and our application data analysis (3.84GB) is explained by MongoDB's architecture:
1. Oplog (Operation Log) - 2.87 GB
- Purpose: Replica set synchronization and fault recovery
- Type: Capped collection with fixed size (7.44 GB capacity)
- Current Usage: 2.87 GB (74.7% of main database size)
- Impact: Critical system component, cannot be directly cleaned
2. Storage Compression
- Raw Data Size: 3.84 GB
- Compressed Storage: 1.22 GB on disk (roughly 68% smaller than the raw data)
- Atlas Billing: Based on allocated space, not compressed size
3. System Databases
- heroku_00w6ld63 - 2.387 GB (main application data)
- local (oplog.rs) - 2.873 GB (replica set oplog)
- wikiloop-doublecheck-* - 0.008 GB (secondary data)
- admin - 0.000 GB (system metadata)
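These per-database figures come straight from the server's listDatabases output; a minimal mongosh sketch to re-check them (it assumes the connecting user is allowed to run listDatabases and to see the local database):

```javascript
// mongosh sketch: print on-disk size per database, matching the breakdown above.
const { databases } = db.adminCommand({ listDatabases: 1 });
databases.forEach((d) => {
  print(`${d.name}: ${(d.sizeOnDisk / 1024 / 1024 / 1024).toFixed(3)} GB on disk`);
});
```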
Critical Issues Identified
1. WatchCollection_WIKITRUST Dominance
- Size: 2.51 GB (63.8% of all data)
- Documents: 17.8+ million records
- Growth Pattern: Continuous accumulation of Wikipedia trust scores
- Issue: No apparent data lifecycle management
2. FeedRevision Accumulation
- Size: 1.16 GB (29.5% of all data)
- Documents: 5+ million feed revision records
- Growth Pattern: Rapid accumulation of revision feed data
- Issue: Lacks time-based cleanup strategy
3. Oplog Size Anomaly
- Ratio: 74.7% of main database size (typically 5-15%)
- Indication: High write operation frequency
- Configuration: 7.44 GB capacity (seems oversized for workload)
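To keep an eye on this ratio, the oplog's fill level and time window can be checked with the shell's replication-info helper. A hedged mongosh sketch (requires permission to read the local database, and prints the cluster's current values rather than the figures above):

```javascript
// mongosh sketch: report oplog capacity, current usage, and the time window it covers.
const info = db.getReplicationInfo();
print(`oplog capacity: ${(info.logSizeMB / 1024).toFixed(2)} GB`);
print(`oplog used:     ${(info.usedMB / 1024).toFixed(2)} GB (${((100 * info.usedMB) / info.logSizeMB).toFixed(1)}%)`);
print(`time window:    ${info.timeDiffHours.toFixed(1)} hours`);
```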
Document Structure Analysis
WatchCollection_WIKITRUST Sample Document
```json
{
"_id": { "_bsontype": "ObjectId", "id": "..." },
"feed": "<string>",
"wiki": "<string>",
"revIds": ["<array of revision IDs>"],
"title": "<string>",
"feedRankScore": "<string>",
"pageId": "<number>"
}
```
FeedRevision Sample Document
```json
{
"_id": { "_bsontype": "ObjectId", "id": "..." },
"title": "<string>",
"createdAt": "<date object>",
"feed": "<string>",
"feedRankScore": "<number>",
"wiki": "<string>",
"wikiRevId": "<string>",
"claimExpiresAt": "<date object>",
"claimerInfo": {
"userGaId": "<string>",
"wikiUserName": "<object>",
"claimedAt": "<date>"
}
}
```
Optimization Recommendations
Immediate Actions (1-2 weeks)
1. WatchCollection_WIKITRUST Cleanup
```javascript
// Remove records older than 6 months
// NOTE: assumes these documents carry a createdAt field; the sample document above
// does not show one, so verify the field exists (or filter on _id instead) before running.
const sixMonthsAgo = new Date();
sixMonthsAgo.setMonth(sixMonthsAgo.getMonth() - 6);
const result = await db.WatchCollection_WIKITRUST.deleteMany({
  createdAt: { $lt: sixMonthsAgo }
});
console.log(`Deleted ${result.deletedCount} old WIKITRUST records`);
```
Expected Impact: 60-70% size reduction (~1.5-1.8 GB savings)
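If createdAt is indeed absent on these documents, the creation time embedded in each _id can serve as the cutoff instead, since ObjectIds carry a second-resolution timestamp. A sketch of that approach, deleting in bounded batches to limit pressure on the primary; the batch size and the overall pattern are assumptions, not something tested against production:

```javascript
// Sketch: delete WIKITRUST documents older than six months using the ObjectId timestamp,
// removing them in bounded batches rather than one huge deleteMany.
const { ObjectId } = require('mongodb');

async function cleanupWikitrust(db, batchSize = 50000) {
  const cutoff = new Date();
  cutoff.setMonth(cutoff.getMonth() - 6);
  // ObjectIds embed their creation time (in seconds), so they can stand in for createdAt.
  const cutoffId = ObjectId.createFromTime(Math.floor(cutoff.getTime() / 1000));

  const coll = db.collection('WatchCollection_WIKITRUST');
  let totalDeleted = 0;
  for (;;) {
    const batch = await coll
      .find({ _id: { $lt: cutoffId } }, { projection: { _id: 1 } })
      .limit(batchSize)
      .toArray();
    if (batch.length === 0) break;
    const result = await coll.deleteMany({ _id: { $in: batch.map((d) => d._id) } });
    totalDeleted += result.deletedCount;
    console.log(`Deleted ${totalDeleted} old WIKITRUST records so far`);
  }
  return totalDeleted;
}
```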
2. FeedRevision TTL Implementation
```javascript
// Create TTL index for 90-day retention
await db.FeedRevision.createIndex(
{ "createdAt": 1 },
{ expireAfterSeconds: 60 * 60 * 24 * 90 } // 90 days
);
```
Expected Impact: 40-50% size reduction (~500-600 MB savings)
Medium-term Actions (1-2 months)
3. Data Archival Strategy
- Implement cold storage for historical WatchCollection_WIKITRUST data (see the sketch after this list)
- Set up automated archival processes for data older than 3 months
- Create separate analytics database for long-term historical analysis
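One way to implement the archival step is an aggregation that copies old documents into an archive collection and then removes them from the hot collection. A minimal mongosh sketch; the collection name WatchCollection_WIKITRUST_archive and the three-month cutoff are assumptions, and since an archive collection on the same cluster still counts toward the Atlas quota, the archive should ultimately be dumped (for example with mongodump) and moved off-cluster.

```javascript
// mongosh sketch: archive WIKITRUST documents older than three months, then delete them.
const cutoff = new Date();
cutoff.setMonth(cutoff.getMonth() - 3);
const cutoffId = ObjectId.createFromTime(Math.floor(cutoff.getTime() / 1000));

// Copy matching documents into the archive collection (ids already archived are kept as-is).
db.WatchCollection_WIKITRUST.aggregate([
  { $match: { _id: { $lt: cutoffId } } },
  { $merge: { into: 'WatchCollection_WIKITRUST_archive', whenMatched: 'keepExisting' } },
]);

// Only after the copy succeeds, remove the same id range from the hot collection.
db.WatchCollection_WIKITRUST.deleteMany({ _id: { $lt: cutoffId } });
```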
4. Write Operation Optimization
```javascript
// Replace frequent single operations with batch operations
// Before (inefficient):
for (const item of items) {
await collection.insertOne(item);
}
// After (efficient):
await collection.insertMany(items);
```
5. Index Optimization
- Review and remove unused indexes (current index size: 1.20 GB)
- Implement compound indexes for common query patterns
- Regular index usage analysis
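Index usage can be inspected with the $indexStats aggregation stage; a mongosh sketch over the two largest collections (the counters reset on server restart, so review them over a representative period before dropping anything):

```javascript
// mongosh sketch: list how often each index has been used; indexes whose "ops" counter
// stays at 0 across normal traffic are candidates for removal.
['WatchCollection_WIKITRUST', 'FeedRevision'].forEach((name) => {
  db.getCollection(name).aggregate([{ $indexStats: {} }]).forEach((idx) => {
    print(`${name}.${idx.name}: ${idx.accesses.ops} ops since ${idx.accesses.since}`);
  });
});
```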
Long-term Actions (3-6 months)
6. Atlas Configuration Review
- Contact Atlas support for oplog size optimization
- Review replica set configuration for current workload
- Consider cluster tier adjustment based on actual usage patterns
7. Application Architecture Improvements
- Implement streaming data processing for real-time feeds
- Add data lifecycle policies at application level
- Consider separating hot/cold data into different collections
Monitoring and Prevention Strategy
1. Database Size Monitoring
```javascript
// Weekly database size check script
const dbStats = await db.stats();
const sizeGB = dbStats.dataSize / 1024 / 1024 / 1024;
if (sizeGB > 4.0) {
console.warn(`Database size warning: ${sizeGB.toFixed(2)} GB`);
// Trigger cleanup procedures
}
```
2. Collection Growth Tracking
- Monitor top 5 collections weekly
- Set alerts for >20% monthly growth
- Track document count vs. average document size trends
3. Oplog Health Monitoring
- Monitor oplog GB/hour metrics in Atlas
- Track oplog time window (should cover at least maintenance windows)
- Alert on oplog usage >80% of configured size
4. Application-level Monitoring
```javascript
// Track write operations
const writeOpsCounter = {
inserts: 0,
updates: 0,
deletes: 0
};
// Monitor high-frequency operations
const highFrequencyCollections = [
'WatchCollection_WIKITRUST',
'FeedRevision',
'Interaction'
];
```
Financial Impact Analysis
Current Costs (Estimated)
- Atlas M10 Cluster: ~$57/month (for 5GB usage)
- Projected Growth: 15-20% monthly without intervention
- 12-month Projection: 8-10 GB (requiring M20 upgrade ~$97/month)
Expected Savings with Optimization
- Immediate Cleanup: 2-2.5 GB reduction → Stay on M10
- Long-term Strategy: Maintain <3 GB → Potential downgrade to M5 (~$25/month)
- Annual Savings: $400-800 through proper data lifecycle management
Implementation Timeline
Phase 1: Emergency Cleanup (Week 1-2)
- Backup current database
- Implement WatchCollection_WIKITRUST historical data cleanup
- Add TTL indexes to FeedRevision
- Monitor size reduction
Phase 2: Systematic Optimization (Week 3-8)
- Contact Atlas support for oplog optimization
- Implement batch operation patterns
- Set up monitoring dashboards
- Create data archival procedures
Phase 3: Long-term Prevention (Month 3-6)
- Deploy automated cleanup scripts
- Implement comprehensive monitoring
- Review and optimize Atlas configuration
- Document data lifecycle policies
Risk Assessment
High Risk
- Data Loss: Aggressive cleanup without proper backup
- Application Downtime: During major cleanup operations
- Performance Impact: Large delete operations on production
Mitigation Strategies
- Staged Rollouts: Clean data in batches during low-traffic periods
- Comprehensive Backups: Full backup before any cleanup operation
- Performance Testing: Test cleanup operations on staging environment first
Success Metrics
Target Outcomes (3 months)
- Database Size: <3 GB total Atlas usage
- Collection Efficiency: No single collection >40% of total size
- Oplog Ratio: <30% of main database size
- Growth Rate: <5% monthly sustainable growth
KPIs to Monitor
- Monthly database size growth rate
- Top 5 collections size distribution
- Average document size trends
- Index efficiency ratios
Conclusion
The MongoDB Atlas 5GB usage is primarily driven by two collections (WatchCollection_WIKITRUST and FeedRevision) that lack proper data lifecycle management, combined with a proportionally large oplog. Through systematic cleanup and implementation of data retention policies, we can reduce database size by 50-60% while establishing sustainable growth patterns.
The root cause is not technical debt but rather the absence of data lifecycle policies in a rapidly growing application. With proper implementation of the recommended strategies, the database can be maintained at 2-3 GB with improved performance and reduced costs.
Next Steps:
- Schedule maintenance window for Phase 1 cleanup
- Contact Atlas support for oplog optimization consultation
- Begin implementation of monitoring systems
- Review and approve data retention policies
Document Prepared By: Database Analysis Team
Review Required By: Infrastructure Team, Product Team
Implementation Authorization: CTO/Engineering Manager
Appendix: Local Development Log Capturing the Atlas Space Quota Error
```
$ npm run dev
> wikiloop-doublecheck@4.1.0 dev /Users/zzn/ws/@wikiloop/doublecheck
> cross-env NODE_ENV=development nodemon
[nodemon] 1.19.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): server/**/* cron mailer/**/* shared/**/*
[nodemon] watching extensions: ts
[nodemon] starting `npm run dev:exec`
> wikiloop-doublecheck@4.1.0 dev:exec /Users/zzn/ws/@wikiloop/doublecheck
> ts-node -r tsconfig-paths/register --project tsconfig.json ./server/index.ts
DotEnv envPath = template.env if you want to change it, restart and set DOTENV_PATH
[info] AXIOS Axios: setup timing monitoring
Loading locale file af.yml, as af
Loading locale file ar.yml, as ar
Loading locale file bg.yml, as bg
Loading locale file ca.yml, as ca
Loading locale file cs.yml, as cs
Loading locale file de.yml, as de
Loading locale file en.yml, as en
Loading locale file es.yml, as es
Loading locale file fa.yml, as fa
Loading locale file fr.yml, as fr
Loading locale file he.yml, as he
Loading locale file id.yml, as id
Loading locale file it.yml, as it
Loading locale file ja.yml, as ja
Loading locale file ko.yml, as ko
Loading locale file lv.yml, as lv
Loading locale file nl.yml, as nl
Loading locale file pl.yml, as pl
Loading locale file pt.yml, as pt
Loading locale file ru.yml, as ru
Loading locale file sv.yml, as sv
Loading locale file th.yml, as th
Loading locale file tr.yml, as tr
Loading locale file uk.yml, as uk
Loading locale file zh.yml, as zh
=================================
nuxt.config.js is being executed!
nuxt.config.js is done executed!
=================================
DotEnv envPath = template.env if you want to change it, restart and set DOTENV_PATH
Connecting mongodb ...
(node:7125) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
Connected mongodb!
WARN vendor has been deprecated due to webpack4 optimization 20:10:17
[info] Running Nuxt Builder ...
ℹ Preparing project for development 20:10:20
ℹ Initial build may take a while 20:10:20
✔ Builder initialized 20:10:21
[nuxt-i18n] Error parsing "nuxtI18n" component option in file "/Users/zzn/ws/@wikiloop/doublecheck/pages/feed2.vue".
✔ Nuxt files generated 20:10:22
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Why you should do it regularly:
https://github.com/browserslist/browserslist#browsers-data-updating
ℹ Starting type checking service... nuxt:typescript 20:10:23
✔ Client
Compiled successfully in 40.51s
✔ Server
Compiled successfully in 40.17s
(node:7125) DeprecationWarning: Tapable.plugin is deprecated. Use new API on `.hooks` instead
ℹ No type errors found nuxt:typescript 20:11:03
ℹ Version: typescript 4.1.2 nuxt:typescript 20:11:03
ℹ Time: 13943 ms nuxt:typescript 20:11:03
ℹ Waiting for file changes 20:11:04
[info] DONE ...
READY Server listening on http://0.0.0.0:3000 20:11:04
[info] Ingestion enabled
[info] Skipping Barnstar cronjobs because of lack of CRON_BARNSTAR_TIMES which is:
[info] Setting up CronJob for traversing category tree to run at 0 0 * * * *
[info] Setting up CronJob for populating feed revisions to run at 0 */10 * * * *
[info] Hook postToJade is installed
[info] Installing discord webhook for id=712098819994812506, token=TNs...
[info] Hook postToDiscord is installed
[info] PERF 6ms 200 GET /api/flags ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 4ms 200 GET /api/version ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 337ms 200 GET /api/metrics ga_id=GA1.1.1564697284.1737420619 session_id=
nuxtServerInit store state clearProfile because req.user is not defined
There is no req.session
nuxtServerInit done
[info] PERF 945ms 200 GET / ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 13ms 200 GET /_nuxt/runtime.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 66ms 200 GET /_nuxt/pages/index.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 67ms 200 GET /_nuxt/pages/feed/_feed/pages/index/pages/revision/_wiki/_revid.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 584ms 200 GET /_nuxt/commons/app.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 593ms 200 GET /_nuxt/app.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 4ms 200 GET /_nuxt/runtime.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 4ms 200 GET /_nuxt/pages/feed/_feed/pages/index/pages/revision/_wiki/_revid.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 5ms 200 GET /_nuxt/pages/index.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 47ms 200 GET /_nuxt/commons/app.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 679ms 200 GET /_nuxt/vendors/app.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] A socket client connected. Socket id = fDRWSoznDoVWk7r5AAAA. Total connections =
MongoError: you are over your space quota, using 5149 MB of 5120 MB
at MessageStream.messageHandler (/Users/zzn/ws/@wikiloop/doublecheck/node_modules/mongodb/lib/cmap/connection.js:268:20)
at MessageStream.emit (events.js:314:20)
at MessageStream.EventEmitter.emit (domain.js:483:12)
at processIncomingData (/Users/zzn/ws/@wikiloop/doublecheck/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
at MessageStream._write (/Users/zzn/ws/@wikiloop/doublecheck/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
at doWrite (_stream_writable.js:403:12)
at writeOrBuffer (_stream_writable.js:387:5)
at MessageStream.Writable.write (_stream_writable.js:318:11)
at TLSSocket.ondata (_stream_readable.js:719:22)
at TLSSocket.emit (events.js:314:20)
at TLSSocket.EventEmitter.emit (domain.js:483:12)
at addChunk (_stream_readable.js:298:12)
at readableAddChunk (_stream_readable.js:273:9)
at TLSSocket.Readable.push (_stream_readable.js:214:10)
at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23) {
ok: 0,
code: 8000,
codeName: 'AtlasError'
}
Promise {
<rejected> MongoError: you are over your space quota, using 5149 MB of 5120 MB
at MessageStream.messageHandler (/Users/zzn/ws/@wikiloop/doublecheck/node_modules/mongodb/lib/cmap/connection.js:268:20)
at MessageStream.emit (events.js:314:20)
at MessageStream.EventEmitter.emit (domain.js:483:12)
at processIncomingData (/Users/zzn/ws/@wikiloop/doublecheck/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
at MessageStream._write (/Users/zzn/ws/@wikiloop/doublecheck/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
at doWrite (_stream_writable.js:403:12)
at writeOrBuffer (_stream_writable.js:387:5)
at MessageStream.Writable.write (_stream_writable.js:318:11)
at TLSSocket.ondata (_stream_readable.js:719:22)
at TLSSocket.emit (events.js:314:20)
at TLSSocket.EventEmitter.emit (domain.js:483:12)
at addChunk (_stream_readable.js:298:12)
at readableAddChunk (_stream_readable.js:273:9)
at TLSSocket.Readable.push (_stream_readable.js:214:10)
at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23) {
ok: 0,
code: 8000,
codeName: 'AtlasError'
}
}
Requesting enwiki for Media Action API: https://en.wikipedia.org/w/api.php?action=query&format=json&list=recentchanges&formatversion=2&rcnamespace=0&rcprop=title%7Ctimestamp%7Cids%7Coresscores%7Cflags%7Ctags%7Csizes%7Ccomment%7Cuser&rcshow=%21bot&rclimit=500&rctype=edit&rctoponly=1&origin=*&rcdir=newer&rcstart=1749784277
Try sandbox request here: https://en.wikipedia.org/wiki/Special:ApiSandbox#action=query&format=json&list=recentchanges&formatversion=2&rcnamespace=0&rcprop=title%7Ctimestamp%7Cids%7Coresscores%7Cflags%7Ctags%7Csizes%7Ccomment%7Cuser&rcshow=%21bot&rclimit=500&rctype=edit&rctoponly=1&origin=*&rcdir=newer&rcstart=1749784277
[info] Received userIdInfo userGaId=GA1.1.1564697284.1737420619
[info] PERF 195ms 200 GET /api/notice/list?userGaId=GA1.1.1564697284.1737420619 ga_id=GA1.1.1564697284.1737420619 session_id=
[... the same MongoError ("you are over your space quota, using 5149 MB of 5120 MB") and rejected Promise repeat for the next write attempt ...]
[info] PERF 173ms 200 GET /favicon.ico ga_id=GA1.1.1564697284.1737420619 session_id=
[info] AXIOS 186ms 200 get https://en.wikipedia.org/w/api.php?action=query&format=json&list=recentchanges&formatversion=2&rcnamespace=0&rcprop=title%7Ctimestamp%7Cids%7Coresscores%7Cflags%7Ctags%7Csizes%7Ccomment%7Cuser&rcshow=%21bot&rclimit=500&rctype=edit&rctoponly=1&origin=*&rcdir=newer&rcstart=1749784277
[info] PERF 669ms 200 GET /api/recentchanges/list?wiki=enwiki&limit=500&direction=newer×tamp=1749784277 ga_id=GA1.1.1564697284.1737420619 session_id=
Requesting enwiki for Media Action API: https://en.wikipedia.org/w/api.php?action=query&format=json&list=recentchanges&formatversion=2&rcnamespace=0&rcprop=title%7Ctimestamp%7Cids%7Coresscores%7Cflags%7Ctags%7Csizes%7Ccomment%7Cuser&rcshow=%21bot&rclimit=500&rctype=edit&rctoponly=1&origin=*&rcdir=older&rcstart=0
Try sandbox request here: https://en.wikipedia.org/wiki/Special:ApiSandbox#action=query&format=json&list=recentchanges&formatversion=2&rcnamespace=0&rcprop=title%7Ctimestamp%7Cids%7Coresscores%7Cflags%7Ctags%7Csizes%7Ccomment%7Cuser&rcshow=%21bot&rclimit=500&rctype=edit&rctoponly=1&origin=*&rcdir=older&rcstart=0
[info] PERF 8ms 200 GET /_nuxt/manifest.5b94d973.json ga_id= session_id=
[info] PERF 3ms 200 GET /_nuxt/icons/icon_64.a0020000000.png ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 2ms 200 GET /_nuxt/icons/icon_144.a0020000000.png ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 3ms 304 GET /favicon.ico ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 1405ms 200 GET /_nuxt/app.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] AXIOS 492ms 200 get https://en.wikipedia.org/w/api.php?action=query&format=json&list=recentchanges&formatversion=2&rcnamespace=0&rcprop=title%7Ctimestamp%7Cids%7Coresscores%7Cflags%7Ctags%7Csizes%7Ccomment%7Cuser&rcshow=%21bot&rclimit=500&rctype=edit&rctoponly=1&origin=*&rcdir=older&rcstart=0
[info] PERF 723ms 200 GET /api/recentchanges/list?wiki=enwiki&limit=500&direction=older×tamp=0 ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 81ms 200 GET /api/interaction/enwiki:1295326986 ga_id=GA1.1.1564697284.1737420619 session_id=
[info] Start requesting link parsing, with timeout = 5000
[info] Start requesting link parsing, with timeout = 5000
[info] PERF 133ms 200 GET /_nuxt/vendors/app.js ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 1386ms 200 GET /api/diff/enwiki:1295326986 ga_id=GA1.1.1564697284.1737420619 session_id=
[info] PERF 2301ms 200 GET /api/diff/enwiki:1295326912 ga_id=GA1.1.1564697284.1737420619 session_id=
```