What I learned being a top 2% AI user on OpenRouter

Wesley Phillips · Vancity data analyst · 7 min read

I recently checked my OpenRouter stats and discovered something surprising: I'm in the top 2% of users by token usage. Not because I run an AI company or manage a team of engineers. Just me, using AI to augment my daily life.

Over the past year, I've accumulated hundreds of millions of tokens across different models as I upgraded and experimented. That puts me in rare company—and it's made me think deeply about how AI models work together, what they're each good at, and what responsibility comes with that level of usage.

One User, Many Use Cases

Being a top 2% user as a single person isn't about running massive queries for no reason. It's about building systems that compound value every single day.

Here's what's generating all those tokens:

A Daily Newsletter That Actually Gets Read: Every morning, I process multiple newsletters, my personal notes from the previous day, and market data into a single coherent briefing. It's personalized to my interests, includes perspectives from a "council" of AI advisors who challenge my thinking, and gets delivered as an audio podcast while I make coffee. Six months in, I haven't missed a day.
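For the curious, here's roughly what the core of that pipeline looks like. This is a simplified sketch, assuming OpenRouter's OpenAI-compatible endpoint; the model slug is illustrative, and the real system also fetches the newsletters and renders the audio.

```python
import os

from openai import OpenAI  # OpenRouter exposes an OpenAI-compatible API

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

def build_briefing(newsletters: list[str], notes: str, market_data: str) -> str:
    """Condense the morning's inputs into one coherent, readable briefing."""
    sources = "\n\n---\n\n".join(newsletters)
    response = client.chat.completions.create(
        model="google/gemini-flash-1.5",  # illustrative slug; any fast model works
        messages=[
            {"role": "system", "content": (
                "You are my morning-briefing editor. Merge the sources into a "
                "single briefing, flag anything relevant to my notes, and keep "
                "it readable aloud in under five minutes."
            )},
            {"role": "user", "content": (
                f"Newsletters:\n{sources}\n\n"
                f"Yesterday's notes:\n{notes}\n\n"
                f"Market data:\n{market_data}"
            )},
        ],
    )
    return response.choices[0].message.content
```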

A Trading Recommendation Engine: I built a system that analyzes market data, news, and technical indicators to generate daily trading recommendations. It processes enormous amounts of information and distills it into actionable insights. This isn't passive consumption—it's active decision support that helps me navigate complex markets.
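The distillation step follows the same pattern. A rough sketch, reusing the `client` from the briefing example; the model slug and input shapes are illustrative, not my exact setup:

```python
def daily_recommendations(prices: str, headlines: list[str], indicators: dict) -> str:
    """Distill raw market inputs into a short list of actionable ideas."""
    # `client` is the OpenRouter client from the briefing sketch above.
    prompt = (
        f"Recent prices:\n{prices}\n\n"
        "Headlines:\n" + "\n".join(headlines) + "\n\n"
        f"Technical indicators: {indicators}\n\n"
        "Give 3-5 trading ideas, each with a one-line rationale and a "
        "confidence level. Flag anything the indicators contradict."
    )
    response = client.chat.completions.create(
        model="x-ai/grok-beta",  # illustrative slug; I lean on Grok for market work
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```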

A Custom Intelligence Chatbot: I've built a chatbot that integrates my task lists, calendar, web search capabilities, and personal knowledge base. It doesn't just answer questions—it understands my context, my priorities, and my communication style. It's like having a chief of staff who knows exactly how I think and what I need.
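Most of the "chief of staff" effect is plain context injection: everything gets pulled into the system prompt before my question ever reaches the model. A simplified sketch, again with an illustrative model slug:

```python
def ask_chief_of_staff(question: str, tasks: list[str],
                       calendar: list[str], knowledge: str) -> str:
    """Answer a question with my tasks, calendar, and notes in context."""
    # `client` is the OpenRouter client from the briefing sketch above.
    context = (
        "Open tasks:\n" + "\n".join(tasks) + "\n\n"
        "Today's calendar:\n" + "\n".join(calendar) + "\n\n"
        "Relevant notes:\n" + knowledge
    )
    response = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # illustrative slug
        messages=[
            {"role": "system", "content": (
                "You are my chief of staff. Answer in my voice, weigh my "
                "priorities, and use the context below when relevant.\n\n"
            ) + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```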

Note Summarization and Automation: Throughout the day, I capture thoughts, meeting notes, article highlights, and random ideas in Obsidian using a custom note app I built. As I enter these notes, an automated system summarizes them, categorizes them, identifies action items, extracts key insights, and creates connections between related ideas. What used to be a messy collection of fragmented thoughts becomes a structured knowledge base I can actually use. The best part? These summaries feed directly into my morning briefing, creating a continuous loop where yesterday's insights inform today's thinking. It's like having a personal knowledge manager who never sleeps and always finds the thread connecting Tuesday's random shower thought to Friday's important decision.
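That automation is the same pattern on a loop. A stripped-down sketch, assuming a local vault of markdown notes and the `client` from the briefing example (the paths and prompt are illustrative):

```python
from pathlib import Path

VAULT_DIGEST = Path.home() / "vault" / "daily-digest.md"  # assumed location

SUMMARY_PROMPT = (
    "Summarize this note in 2-3 sentences, list any action items, and "
    "suggest 1-3 existing topics it connects to.\n\nNote:\n{note}"
)

def summarize_note(path: Path) -> str:
    # `client` is the OpenRouter client from the briefing sketch above.
    response = client.chat.completions.create(
        model="google/gemini-flash-1.5",  # a fast model: this runs on every note
        messages=[{"role": "user",
                   "content": SUMMARY_PROMPT.format(note=path.read_text())}],
    )
    return response.choices[0].message.content

def process_new_notes(new_notes: list[Path]) -> None:
    """Append each summary to the digest the briefing job reads next morning."""
    with VAULT_DIGEST.open("a") as digest:
        for note in new_notes:
            digest.write(f"## {note.stem}\n\n{summarize_note(note)}\n\n")
```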

Each of these systems runs daily. Each one processes significant amounts of data. And together, they explain how one person ends up in the top 2% of AI users.

What I've Learned About Different Models

Here's the thing most people don't realize: different AI models are good at different things, and the magic happens when you use them together.

Gemini Flash is incredibly fast and efficient for high-volume tasks. I use it for the heavy lifting—processing multiple newsletters, summarizing large documents, and handling repetitive analysis. It's not always the most nuanced, but for quick throughput and cost-efficiency, it's unmatched.

Claude has always been my go-to. When I need something written well, when I need careful analysis of complex situations, or when I need AI that truly understands context and subtext, Claude is my choice. Quality, reasoning, and nuance—this is where I spend my time when the task matters.

Grok brings real-time information and a different perspective. It's excellent for market analysis and situations where I need current data integrated with reasoning.

GPT serves as another voice when I want different opinions or perspectives. Sometimes you need to hear how a different model approaches a problem, and GPT provides that alternative viewpoint. But Claude remains my primary choice for most work.

The key insight: I don't use one model for everything. I orchestrate them. Fast models for processing. Smart models for reasoning. Current models for real-time data. Each one playing to its strengths.
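In practice, that orchestration can be as unglamorous as a lookup table. A minimal sketch of the idea, with illustrative model slugs (OpenRouter's actual catalog changes over time):

```python
# Task type -> model slug. Slugs are illustrative; check OpenRouter's catalog.
ROUTES = {
    "bulk_summarize": "google/gemini-flash-1.5",      # fast, cheap throughput
    "writing":        "anthropic/claude-3.5-sonnet",  # nuance and reasoning
    "market":         "x-ai/grok-beta",               # current-events flavor
    "second_opinion": "openai/gpt-4o",                # a different voice
}

def run(task_type: str, prompt: str) -> str:
    """Dispatch a prompt to whichever model suits the task."""
    # `client` is the OpenRouter client from the briefing sketch above.
    response = client.chat.completions.create(
        model=ROUTES[task_type],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Same code path, different brains. The routing table is where the strategy lives.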

The Compounding Value of Daily AI Usage

What separates casual AI usage from top 2% usage isn't running a few big queries. It's building systems that generate value every single day.

My morning briefing alone has probably saved me 150+ hours over six months while making me better informed. My trading engine processes information I couldn't possibly analyze manually. My chatbot eliminates friction from dozens of daily decisions.

This is compounding automation. Each day builds on the last. Each system learns my preferences. Each interaction is more valuable because of all the previous ones.

But There's a Cost We Need to Talk About

Seeing these stats made me realize something uncomfortable: AI usage has an environmental cost, and as a top 2% user, I have a responsibility to acknowledge it.

Running AI models requires massive computational power. Those token counts represent real electricity consumption, real carbon emissions, and real environmental impact. The more I use these tools, the more I benefit from them, the more I need to be honest about that cost.

I've started researching carbon offset options, specifically looking at Offset AI—a service designed to offset the environmental impact of AI usage. I'm not sure yet if carbon offsets are the complete solution, but I know ignoring the problem isn't an option.

If we're going to build lives augmented by AI, if we're going to be in the top 2% of users, we need to think about sustainability. Not as an afterthought, but as a core part of how we use these tools.

What This Means for the Future

Being a top 2% AI user in 2025 feels like being an early internet power user in the late 1990s. Most people are still figuring out what these tools can do. A small group of us are already building integrated systems that fundamentally change how we work and live.

But with that comes responsibility:

  • Using the right model for the right task (efficiency matters)
  • Building systems that create real value, not just using tokens for the sake of it
  • Being honest about the environmental cost and looking for ways to offset it
  • Sharing what we learn so others can benefit without reinventing the wheel

I didn't set out to be a top 2% AI user. I set out to build tools that would make my life better, help me think more clearly, and let me focus on what matters most. The token usage is just a byproduct of building systems that actually work.

If you're thinking about how to use AI more effectively, don't focus on using more tokens. Focus on building systems that compound value over time. Find the tasks you do every day that could be augmented. Pick the right models for the right jobs. And be thoughtful about the resources you're using.

The future isn't about using AI occasionally for one-off tasks. It's about weaving it into the fabric of your daily life in ways that make you more informed, more efficient, and more capable.

Just don't forget to consider the cost—both the value it creates and the impact it has.


What systems could you build that would generate value every single day? What would your life look like if you had AI tools perfectly tuned to how you think and work? And how can we do this responsibly?

Let me know what you think. I'm always interested in how others are using AI to augment their lives.