The AI war is not over yet
The giants have a potentially fatal weakness: they are unaligned with individuals. That weakness just needs exploiting.
No TL;DR this time, sorry xxx
At the end of my last essay, we gave up on hope. Hope isn’t good enough when the stakes are this high.
The only way to ensure that future AIs have incentives aligned with the individuals who interact with them is to build an incentive structure. One powerful enough to force the organisations that build AI to have no choice but to align with individuals.
How can you force Facebook, Google, Amazon, Twitter, Snap and TikTok to all swing their priorities away from serving advertisers to serving individuals?
You lower switching costs, making platform-exit a real option for individuals and therefore making anti-user behaviour extremely costly.
Forcing AI organisations to align incentives with individuals
We need to give individuals the power to hold companies and AI systems accountable when they are not aligned with them.
Other than accountability through democracy, it seems like there aren’t many ways individuals can exercise power against the tech giants.
The way to give individuals that power is to build tools that help them easily understand and control what data about them AIs/companies can access. AIs need data to know us well & perform successfully. So if individuals only share data where they benefit, and can revoke access en-masse where they don’t, then AIs will have to be aligned with users or they won’t be able to compete.
The ability for individuals to withdraw their data en-masse would be a powerful accountability mechanism & incentive to do what’s best for individuals. If we can build that future where individuals have fluid, granular control of the data companies collect about them & that AIs use, we’re 90% of the way to achieving AI-individual alignment, because we have created the right incentive-system.
The path to fluid, granular data control
But how could individuals control what AIs know about them? Most people don’t even know what companies like Facebook and Google know about them. Just understanding what companies and their AI systems know about you is hard. Data isn’t easy to understand & companies don’t *want* to tell you.
But before you truly control *anything* you need to understand it. You need to know:
what data exists about you
who has it
what does it mean
what do they use it for
Then, once you understand it, you can:
reclaim data to keep for yourself
decide who gets to access it
decide what they can use it for
delete it from anywhere you want
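To make the control side of this concrete, here’s a minimal sketch of what granular, revocable data permissions could look like as a data structure. Everything here is hypothetical (the names `DataGrant`, `PersonalDataLedger`, and the example purposes are illustrations, not a description of any real system):

```python
from dataclasses import dataclass


@dataclass
class DataGrant:
    """One consent record: who holds the data, and what they may use it for."""
    company: str
    category: str           # e.g. "location", "purchase history"
    allowed_uses: set[str]  # purposes the individual has consented to
    revoked: bool = False


class PersonalDataLedger:
    """An individual's view of who has their data and what it may be used for."""

    def __init__(self):
        self.grants: list[DataGrant] = []

    def grant(self, company: str, category: str, allowed_uses: set[str]) -> DataGrant:
        g = DataGrant(company, category, set(allowed_uses))
        self.grants.append(g)
        return g

    def revoke_all(self, company: str) -> None:
        # The "en-masse withdrawal" described above: pull every grant at once.
        for g in self.grants:
            if g.company == company:
                g.revoked = True

    def may_use(self, company: str, category: str, purpose: str) -> bool:
        return any(
            g.company == company
            and g.category == category
            and purpose in g.allowed_uses
            and not g.revoked
            for g in self.grants
        )


ledger = PersonalDataLedger()
ledger.grant("ExampleCorp", "location", {"navigation"})
print(ledger.may_use("ExampleCorp", "location", "navigation"))     # True
print(ledger.may_use("ExampleCorp", "location", "ad-targeting"))   # False
ledger.revoke_all("ExampleCorp")
print(ledger.may_use("ExampleCorp", "location", "navigation"))     # False
```

The key design point is that consent is scoped to a purpose, not just a company: a grant for navigation says nothing about ad-targeting, and revocation flips every grant for a company at once.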
That’s a start, and would force these companies to perk up in their respect for individuals as humans, not just as “users”.
But even then you still don’t have true autonomy over your data. You’re fighting with companies over what *they* do with it. There’s a fundamental constraint on the possibilities of what your data could be used for, which is the incentives of the companies that hold it.
When individuals have control
If there was a company that had no outside incentives for how it uses your data, that only served you, then your incentives would be aligned. Imagine if your data was no longer constrained by the imagination and incentives of ad platforms like Facebook and Google. What happens when the primary purpose of your data existing isn’t to make money selling you ads? What happens when data is broken out of the corporate silos where it provides only a partial picture of who you are, and assembled to represent the full you?
It’s hard to grasp the magnitude of this shift except by analogy. When computing was only for businesses, the question was asked: but how would this be useful to consumers? A personal computer seemed pointless; there was nothing to use it for. The software hadn’t been built to make them useful, because there were no personal computers or users to build it for. 🐔 & 🥚
Right now businesses use personal data for their own ends. There are no consumer use-cases other than ad-driven feeds, beyond a hobbyist fringe tinkering with quantified self. For students of technology/Apple history: a hobbyist fringe tinkering with a tool that businesses find extremely valuable & powerful. Sound familiar?
New tools: from businesses to consumers
With new and powerful tools, businesses come first. They have the willingness to pay & the urgent need. But: consumer companies grow bigger.
Computing: IBM (1911) < Apple (1976)
Social Networking: LinkedIn (2002) < Facebook (2004)
Data infrastructure: Segment (2011) < ???
When personal data gets centred around the user, and users are empowered to choose what it gets used for, the use-cases will explode. Data will become a tool for thought, introspection, self-improvement, health, finding friends and partners, and for building personalised AI. Data will make us superhuman.
Who will win the AI war
There is a huge amount of responsibility placed on the company entrusted with the user’s data. Sufficiently good and complete data will give that company the power to manipulate users even more than Facebook and Google can do now. Do you really want that power to be held by companies selling ads?
Trust becomes the key differentiator in this world. Individuals need to trust that the company behind the AI puts their goals first. That the company wins when they win.
The real question here is: if you *can* control what AIs/companies know about you, will you let any AI access all your data? An AI system with all your data that’s aligned perfectly with you sounds incredibly useful. But who could you trust to make one? It’s difficult to pick a company - only Apple is making a case for themselves. Really you could only trust a company that has no external incentives. A company built entirely around you.
If they are funded by advertisers targeting you with ads - NOPE!
If they haven’t been transparent with how they use your data - NOPE!
In other words: no company we currently associate with AI qualifies.
This company would need to be built from the ground up to be focused around serving the human user. Its revenue would need to come directly from users themselves - giving it no outside incentives. It would need to be the most transparent and trusted company in the world. It would give users true control. By being the most user-focused, ethical, and trusted company, it would have an unassailable comparative advantage over companies with baggage.
The company would need to stake its reputation on its pro-user behaviour.
No platitudes, instead: extreme commitment. Not “Don’t be evil”, but “Be Good”. Prioritising individuals’ autonomy, joy, and self-actualisation above profit. By being so pro-user, ironically it could become the most valuable company in the world.
Time to shill what I’m building
You might have guessed, but that’s the company we’re building at @GetEthi. We're helping individuals to understand and control their data, and we'll shortly be releasing our first "use-case" for people's data.
It's definitely a bit ridiculous to claim we might one day be the most trusted company in the world, and that we have a chance in hell of building the best AI, but as Sam Altman points out, it is nearly always better to take the more ambitious path. So that’s the plan.
We’ve called our company Ethi for a reason. Here are our extreme commitments:
Ethi is short for ethical, and we want to be held to that standard.
We will make no revenue that isn’t from our users.
Your data will only be used in ways you clearly consent to, that benefit you.
Our motto is “Be Good”.
We’ll be competing with the most valuable companies in the world. The most powerful companies in the world. Our existence is a direct critique of how Facebook, Google, etc. have done business. They can’t even copy us because they’re too locked into their ad-driven business model. This is what’s called “counter-positioning”.
Personally, I’m so excited to be building this company. Putting individuals in control of their data is my life’s mission. It’s the best work I could possibly be doing. If Ethi succeeds in our grandest mission, it will ensure that individuals remain autonomous in a world where AIs grow more powerful and exponentially increasing amounts of data get collected about us. If individuals can understand, control and use their data how they see fit, the chance of AI-individual alignment will be hugely increased, & individuals will have data-enhanced superpowers in understanding themselves, connecting with others, & navigating the world’s complexity.
If I’ve done my job right in this essay and the last one, you’re hyped and you want to join us on this journey. Sign up at https://ethi.me to get started controlling your data. This is just the beginning.
DMs, replies, thoughts, feedback, all appreciated, email is mike at ethi.me, follow me on Twitter @michaeljelly.
At Ethi, we help you to understand and control the data that companies collect about you. We’re making it easier for you to extract yourself from Facebook, Google and more. We hope to build an accountability mechanism that makes sure the AIs that win will serve users.
You are not the product at Ethi, you are our customer - our only customer! This aligns our incentives with yours, and means that we win when you win. We think if data collection and data-use remain as opaque as they currently are, users will never have the information or tools they need to leave platforms that exploit them.