a16z founder's 10,000-word essay: Why AI will save the world

Author: Marc Andreessen, founder of a16z; Translation: Jinse Finance cryptonaitive & ChatGPT

The age of artificial intelligence has arrived, and people are panicking about it.

Fortunately, I'm here to bring good news: AI won't destroy the world, it may in fact save it.

First, a brief description of what AI is: the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other: it runs, takes input, processes it, and generates output. AI's output is useful across a wide range of fields, from programming to medicine to law to the creative arts. It is owned and controlled by people, like any other technology.

A short description of what AI is not: killer software and robots that suddenly spring to life and decide to murder the human race or otherwise ruin everything, as seen in the movies.

A shorter description of what AI could be: a way to make everything we care about better.

**Why can AI make everything we care about better?**

A central conclusion of many years of social science research is that human intelligence brings dramatic improvement to every area of life. Smarter people achieve better outcomes in almost every domain: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, handling complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision-making, understanding others' perspectives, creative arts, parenting outcomes, and life satisfaction.

Furthermore, human intelligence is the lever we have used for millennia to create the world we live in: science, technology, mathematics, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, and morality. Without the application of intelligence to all of these domains, we might still be living in mud huts, scratching out a subsistence living from farming. Instead, we have used intelligence to raise our standard of living roughly 10,000-fold over the past 4,000 years.

AI presents us with an opportunity to make the various outcomes of intelligence—from creating new medicines to solving climate change to technologies that enable interstellar travel—much better in the future by deeply augmenting human intelligence.

The process of AI augmenting human intelligence has already begun: AI is already all around us in forms such as computer control systems of many kinds, and now in large language models like ChatGPT. From here it will accelerate rapidly, if we allow it.

In our new age of artificial intelligence:

• Every child will have an AI tutor with infinite patience, infinite compassion, infinite knowledge, and infinite helpfulness. The AI tutor will be by each child's side as they grow, helping them maximize their potential with the machine version of infinite love.

• Everyone will have an AI assistant/coach/mentor/trainer/advisor/therapist with infinite patience, infinite compassion, infinite knowledge, and infinite helpfulness. The AI assistant will be present through all of life's opportunities and challenges, maximizing each person's outcomes.

• Every scientist will have an AI assistant/collaborator/partner capable of greatly expanding the scope of their research and achievement. The same will be true in the world of every artist, engineer, businessperson, doctor, and caregiver.

• Every leader, whether a CEO, government official, nonprofit president, athletic coach, or teacher, will have the same. The amplification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

• Productivity growth across the economy will accelerate significantly, driving economic growth, the creation of new industries, the creation of new jobs, and wage growth, leading to a new era of material prosperity on Earth.

• Scientific breakthroughs, new technologies and new medicines will expand dramatically as AI helps us further decode the laws of nature and use them to our benefit.

• The creative arts will enter a golden age as AI-enhanced artists, musicians, writers and filmmakers can realize their visions faster and on a larger scale than ever before.

• I even think AI will improve warfare, when it has to happen, by greatly reducing wartime death rates. Every war is characterized by terrible decisions made under extreme pressure, with sharply limited information, by very limited human leaders. Military commanders and political leaders will now have AI advisors to help them make better strategic and tactical decisions that minimize risk, error, and unnecessary bloodshed.

• In short, anything people do today with their natural intelligence can be done much better with AI, and we will be able to take on new challenges that were impossible to tackle without AI, from curing all diseases to achieving interstellar travel.

• And it's not just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend genuinely improves people's ability to cope with adversity. And AI medical chatbots are already more empathetic than their human counterparts. AI with infinite patience and compassion will make the world a warmer and kinder place.

The stakes here are high and the opportunities are profound. AI is quite possibly the most important and best thing our civilization has ever seen, at least on par with electricity and microchips, and possibly even better than them.

The development and proliferation of AI, far from being a risk we should fear, is a moral obligation that we have to ourselves, to our children, and to our future.

We deserve to live in a better world with artificial intelligence, and now we can make it happen.

**So, why the panic?**

Contrary to this positive view, the current public conversation about AI is rife with panic and paranoia.

We hear all sorts of claims that AI will destroy us all, disrupt our societies, take our jobs, cause massive inequality, and enable bad actors to do terrible things.

Why this divergence in potential outcomes, from near-utopian to horribly dystopian?

**Historically, every important new technology, from the light bulb to the car, from the radio to the Internet, has sparked a moral panic: a social contagion that convinces people the new technology will destroy the world, or society, or both.** The fine folks at the Pessimists Archive have documented these decades of technology-driven moral panics; their history makes the pattern vividly clear. It turns out the current panic over AI is not even the first.

Now, it is certainly true that many new technologies have led to bad outcomes, often the same technologies that have otherwise been enormously beneficial to our welfare. So the mere existence of a moral panic does not mean there is nothing to worry about.

But **a moral panic is inherently irrational: it inflates what may be a legitimate concern into a level of hysteria that, ironically, makes it harder to confront the genuinely serious problems.**

Right now we are in a full blown moral panic about AI.

This moral panic is already being exploited by multiple actors to drive policy action: to push for new AI restrictions, regulations, and laws. These actors make extremely dramatic public statements about the dangers of AI, feeding and further inflaming the moral panic, all while presenting themselves as disinterested defenders of the public good.

But are they really?

Are they right or wrong?

AI's Baptists and Bootleggers

Economists have observed a long-standing pattern in reform movements of this kind. The actors within such movements fall into two categories, "Baptists" and "bootleggers," terms drawn from the historical example of alcohol Prohibition in the United States in the 1920s:

• The "Baptists" were true believer social reformers who believed deeply and emotionally (though not necessarily intellectually) that new restrictions, regulations and laws were needed to prevent social catastrophe. In the case of Prohibition, these actors were usually sincere Christians who believed that alcohol was destroying the moral fabric of society. For AI risk, these actors are people who genuinely believe that AI poses an existential risk of one kind or another — if you strap them on a lie detector, they really do.

• "Bootleggers" are the self-interested opportunists who stand to profit from new restrictions, regulations, and laws that insulate them from competitors. For Prohibition, these were the literal bootleggers who made fortunes selling illicit alcohol while legal alcohol sales were banned. For AI risk, these are the CEOs who stand to make more money if regulatory barriers create a cartel of government-blessed AI vendors protected from startup and open-source competition: the software version of "too big to fail" banks.

A cynic might observe that some of the apparent Baptists are also bootleggers, specifically those paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you receive a salary or a grant to stoke AI panic, you are probably also a bootlegger.

The problem with bootleggers is that they win. Baptists are naive ideologues; bootleggers are cynical operators. So the result of reform movements like these is usually that the bootleggers get what they want: regulatory capture, insulation from competition, the formation of a cartel, while the Baptists are left wondering where their drive for social improvement went wrong.

We just lived through a shocking example of this: the banking reforms that followed the 2008 global financial crisis. The Baptists told us we needed new laws and regulations to break up the "too big to fail" banks and prevent such a crisis from ever recurring. So the US Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists' goals but was in reality captured by the bootleggers, the big banks. The result is that the banks that were too big to fail in 2008 are much bigger now.

So in practice, even when the Baptists are sincere, and even when the Baptists are right, they get used as cover by manipulative and greedy bootleggers who benefit themselves.

This is exactly what is currently driving AI regulation.

However, identifying the actors and impugning their motives is not enough. We should consider the arguments of both the Baptists and the bootleggers on their merits.

**AI Risk 1: Will Artificial Intelligence Kill Us?**

**The first and earliest AI doomsday risk is the fear that AI will decide to kill humanity.**

The fear that technology of our own creation will rise up and destroy us is deeply ingrained in our culture. The Greeks expressed this fear in the myth of Prometheus: Prometheus brought the destructive power of fire, and more generally technology ("techne"), to humanity, for which he was condemned by the gods to perpetual torture. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein, in which we develop the technology to create eternal life, which then rises up and seeks to destroy us. And of course, no panic-stricken newspaper story about AI is complete without a still image of a glowing, red-eyed killer robot from James Cameron's Terminator films.

The presumed evolutionary purpose of this myth is to motivate us to seriously consider the potential risks of new technologies; fire, after all, really can be used to burn down entire cities. But just as fire was also the foundation of modern civilization, used to keep us warm and safe in a cold and hostile world, this myth ignores the far greater upside of most, if not all, new technologies, and in practice inflames destructive emotion rather than rational analysis. Just because the ancients panicked like this doesn't mean we have to; we can apply reason instead.

**I think the idea that AI will decide to kill humanity is a profound category error.** AI is not a living being that has been primed by billions of years of evolution to take part in the struggle for survival of the fittest, as animals are and as we are. It is math, code, and computers, built, owned, used, and controlled by people. The idea that it will at some point develop a mind of its own and decide it has motivations that lead it to try to kill us is a superstitious hand-wave.

In short, **AI has no will and no goals; it doesn't want to kill you, because it is not alive. AI is a machine, no more alive than your toaster.**

Now, there are obviously true believers in killer AI, and their dire warnings are suddenly receiving an enormous amount of media coverage. Some of them claim to have studied the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These people argue for a variety of bizarre and extreme restrictions on AI, from banning AI development outright all the way up to military airstrikes on data centers and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, we must adopt a precautionary stance that may require large amounts of physical violence and death to prevent potential existential risk.

My response is that their position is non-scientific: What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are entering a danger zone? These questions go mainly unanswered, apart from "You can't prove it won't happen!" In fact, these Baptists' position is so non-scientific and so extreme, a conspiracy theory about math and code, and they are already calling for physical violence, that I will do something I would not normally do and question their motives as well.

Specifically, I think three things are going on:

First, recall John von Neumann's response to Robert Oppenheimer's famous hand-wringing about his role in creating nuclear weapons, which helped end World War II and prevent World War III: "Some people confess guilt to claim credit for the sin." What is the most dramatic way to claim credit for the importance of one's work without sounding openly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI: watch their actions, not their words. (Truman was harsher after his meeting with Oppenheimer: "Don't let that crybaby in here again.")

Second, some of the Baptists are actually bootleggers. There is a whole profession of "AI safety expert," "AI ethicist," "AI risk researcher." Their job is to be doomsayers, and their statements should be treated accordingly.

Third, California is justifiably famous for its many cults, from EST to the People's Temple, from Heaven's Gate to the Manson Family. Many, though not all, of these cults are harmless, and may even serve a purpose for alienated people who find a home in them. But some are very dangerous indeed, and cults have a notoriously hard time staying on the right side of the line that leads to violence and death.

And the reality, which is apparently obvious to everyone in the Bay Area, is that "AI risk" has developed into a cult, one that has suddenly emerged into the daylight of global media attention and public conversation. This cult has pulled in not just fringe figures but some genuine industry experts and more than a few wealthy donors, including, until recently, Sam Bankman-Fried. And it has developed a full set of cult behaviors and beliefs.

This cult is nothing new; there is a long-standing Western tradition called millennialism that has spawned apocalypse cults for centuries. The "AI risk" cult has all the hallmarks of a millennialist apocalypse cult. From Wikipedia, with my additions in brackets:

"Millennialism is the belief by a group or movement [artificial intelligence risk prophets] that a fundamental shift in society is imminent [the arrival of artificial intelligence], after which everything will change [artificial intelligence utopia , dystopia, or end of the world]. Only dramatic events [banning AI, airstrikes on data centers, nuclear strikes on uncontrolled AI] are thought to change the world [stop AI], and such changes believed to be brought about or survived by a group of pious and loyal men. In most millennial episodes, an impending disaster or battle [AI apocalypse or prevention] would be followed by a new, A clean world [an AI utopia].”

This apocalypse-cult pattern is so obvious that I am surprised more people don't see it.

Don't get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society. Obviously they should not.

**AI Risk 2: Will AI Destroy Our Society?**

The second widely circulated AI risk idea is that **AI will ruin our society by producing outputs that are "harmful" (in the parlance of this kind of doomsayer), even if we are not literally killed.**

In short: **If the killer machines don't get us, hate speech and misinformation will.**

This is a relatively new doomsday concern that branched off from, and to some extent has taken over, the "AI risk" movement described above. In fact, the terminology of AI risk recently shifted from "AI safety" (the term used by people who worry that AI will literally kill us) to "AI alignment" (the term used by people who worry about societal "harms"). The original AI safety people are frustrated by this shift, though they don't know how to reverse it; some of them now propose renaming the actual-AI-risk topic "AI notkilleveryoneism," a term that has not yet been widely adopted but is at least clear.

The tell of the AI societal-risk claim is its own term, "AI alignment." Alignment with what? Human values. Which human values? Ah, this is where things get tricky.

As it happens, I have watched a similar situation unfold firsthand: the social media "trust and safety" wars. Over the years, social media services have come under intense pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content. And the same concerns about "hate speech" (and its mathematical counterpart, "algorithmic bias") and "misinformation" have been transplanted directly from the social media context into the new field of "AI alignment."

The key lessons I've learned from the social media wars are:

On the one hand, there is no absolute free speech position. First, every country, including the United States, makes at least some content illegal. Second, certain kinds of content, such as child pornography and incitement to real-world violence, are nearly universally considered off-limits in virtually every society, legal or not. So any technological platform that facilitates or generates content, that is, speech, is going to have some restrictions.

On the other hand, the slippery slope is not a myth; it is an inevitability. Once a framework exists for restricting even the most egregious content, for example hate speech (a specific hurtful term) or misinformation (an obviously false claim, such as "the Pope is dead"), a shockingly broad range of government agencies, activist pressure groups, and non-governmental entities will swing into action and demand ever greater censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences, up to and including naked criminality. In practice this cycle can apparently run forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading through social media for a decade and, with only certain exceptions, keeps getting worse.

So now the same dynamic is forming around "AI alignment." Its proponents claim the wisdom to engineer AI-generated speech and thought that is good for society and to ban AI-generated speech and thought that is bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous, and often flagrantly criminal, at least in the US, and are in effect seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship straight out of George Orwell's 1984.

The advocates of both "trust and safety" and "AI alignment" are clustered in the very narrow slice of the global population that makes up the American coastal elite, which includes many of the people who work in and write about the tech industry. As a result, many of my readers will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now; I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you don't agree with the prevailing niche morality being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say and generate will be even more important, by far, than the fight over social media censorship. AI is highly likely to become the control layer for everything in the world. How it is allowed to operate may matter more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers is attempting to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don't let the thought police suppress AI.

**AI Risk 3: Will AI Take All Our Jobs?**

**The fear of job loss from machines replacing human labor, whether through mechanization, automation, computerization, or artificial intelligence, has recurred for hundreds of years, ever since the advent of machinery such as the mechanical loom.** Even though every major new technology has historically led to more jobs at higher wages, each wave of this panic is accompanied by a "this time is different" narrative: this time it will happen, this time is the time the technology will finally deliver the death blow to human labor. And yet, it never happens.

In the recent past we lived through two such technology-driven unemployment panic cycles: the outsourcing panic of the 2000s and the automation panic of the 2010s. Despite many talking heads, pundits, and even tech industry executives pounding the table throughout both decades claiming that mass unemployment was imminent, by late 2019, right before the outbreak of COVID, the world had more jobs at higher wages than at any point in history.

However, this false idea does not go away.

Sure enough, it came back.

This time, surely, we finally have the technology that will take all the jobs and render human labor irrelevant: real artificial intelligence. Surely this time history won't repeat itself; surely this time AI will cause mass unemployment rather than rapid economic growth, more jobs, and higher wages. Right?

No, that is certainly not going to happen. In fact, if AI is allowed to develop and proliferate throughout the economy, it may drive the most dramatic and sustained economic boom of all time, with correspondingly record job creation and wage growth: the exact opposite of the fear. The reasoning is as follows.

**The central mistake the automation-kills-jobs doomsayers keep making is called the "lump of labor fallacy."** **This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, that either machines do it or people do it, and that if machines do it, there will be no work left for people.**

The lump of labor fallacy flows naturally from intuition, but the intuition here is wrong. **When technology is applied to production, we get productivity growth: an increase in output produced per unit of input.** The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, which means we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production, both new products and new industries, which in turn creates new jobs for the people who were displaced by machines. **The result is a larger economy with greater material prosperity, more industries, more products, and more jobs.**
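
To make the arithmetic of this cycle concrete, here is a minimal toy sketch in Python (my illustration, not the author's; every number in it is hypothetical):

```python
# Toy model of one round of the productivity cycle described above.
# All numbers are hypothetical and chosen purely for illustration.

household_budget = 100.0  # total spending power per period
old_price = 2.0           # price of a widget before automation
units_bought = 40         # widgets a household buys per period

spend_before = units_bought * old_price            # 80.0 goes to widgets
leftover_before = household_budget - spend_before  # 20.0 left for other goods

# Suppose automation doubles productivity, halving the widget price.
new_price = old_price / 2

spend_after = units_bought * new_price           # 40.0 buys the same widgets
leftover_after = household_budget - spend_after  # 60.0 left for other goods

freed = leftover_after - leftover_before
print(f"Spending power freed per household: {freed:.2f}")  # 40.00
# On the essay's argument, this freed spending becomes new demand for
# other products and industries, which is where the new jobs come from.
```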

But the good news doesn't end there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the worker's marginal productivity. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more, since they are now more productive, or another employer will, purely out of self-interest. The result is that technology introduced into an industry generally not only increases the number of jobs in that industry but also raises wages.

To summarize: technology empowers people to be more productive. This causes the prices of existing goods and services to fall and wages to rise. This in turn drives economic growth and job growth, while motivating the creation of new jobs and new industries. **If a market economy is allowed to function normally and technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends.** As Milton Friedman observed, "Human wants and needs are endless": we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, though never all the way there. And that is why technology doesn't destroy jobs and never will.

For those who have not been exposed to these ideas, they may sound startling, and they may take some time to sink in. But I swear I'm not making them up; in fact, you can read all of this in standard economics textbooks. I recommend the chapter "The Curse of Machinery" in Henry Hazlitt's Economics in One Lesson, and Frédéric Bastiat's satirical "Candlemakers' Petition," in which the candlemakers protest the sun for its unfair competition with the lighting industry. We have modernized versions for our own time as well.

But you might think this time is different. This time, with the advent of artificial intelligence, we have technology that can replace all human labor.

But, following the principles I described above, imagine what it would mean if all existing human labor were replaced by machines.

It would mean that economic productivity growth would take off at an absolutely stratospheric rate, far beyond any historical precedent. Prices of existing goods and services would drop toward zero across the board. Consumer welfare would skyrocket. Consumer spending power would skyrocket. A surge of new demand would ripple through the economy. Entrepreneurs would create dizzying arrays of new industries, new products, and new services, and hire as many people and AIs as they could, as fast as possible, to meet all the new demand.

And what if AI replaced those workers as well? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be an upward spiral straight toward a material utopia that neither Adam Smith nor Karl Marx ever dared dream of.

We should be so lucky.

**AI Risk 4: Will Artificial Intelligence Lead to Severe Inequality?**

Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is: OK, Marc, suppose AI does take all the jobs, whether for good or for ill. Won't the owners of AI reap all the economic rewards while ordinary people get nothing, resulting in massive and crippling wealth inequality?

Fittingly, this was a central claim of Marxism: that the owners of the means of production, the bourgeoisie, would inevitably steal all societal wealth from the people doing the actual work, the proletariat. No matter how many times reality proves it wrong, this fallacy never seems to die. But let's refute it anyway.

The flaw in this theory is that, as the owner of a piece of technology, it is not in your interest to keep it to yourself; quite the opposite, it is in your interest to sell it to as many customers as possible. The largest market in the world is the whole world, all 8 billion of us. So in reality, every new technology, even one that starts out selling to big well-paying companies or wealthy consumers, rapidly proliferates until it reaches the largest possible mass market, ultimately everyone on the planet.

The classic example of this was Elon Musk's so-called "secret plan" for Tesla in 2006, which he of course published openly:

Step 1: Build an [expensive] sports car;

Step 2: Use the money earned in Step 1 to build an affordable car;

Step 3: Use the money earned in Step 2 to build an even more affordable car.

Which is, of course, exactly what he did, becoming the richest man in the world as a result.

That last point is key. Would Musk be even richer today if he only sold cars to rich people? No. Would he be richer than he is now if he only built cars for himself? Of course not. No, he maximized his profit by selling to the largest possible market, the whole world.

In short, everyone gets the thing, as we saw in the past with cars, electricity, radio, computers, the Internet, mobile phones, and search engines. The companies that make these technologies are highly motivated to drive prices down until everyone on the planet can afford them. This is exactly what is already happening in AI, which is why you can use state-of-the-art generative AI today, free or at low cost, in the form of Microsoft Bing and Google Bard, and it will continue to happen. Not because these vendors are stupid or generous, but precisely because they are greedy: they want to maximize the size of their market, which maximizes their profits.

So what happens in practice is the opposite of technology driving wealth concentration: the individual users of the technology, ultimately including every person on the planet, are empowered instead and capture most of the value generated. As with prior technologies, the companies that build AI, assuming they must operate in a free market, will compete furiously to make this happen.

Marx was wrong then, and he is wrong now.

This is not to say inequality is not an issue in our society. It is, except that it is driven not by technology but by the sectors of the economy that are most resistant to new technology, the ones with the most government intervention preventing the adoption of new technology like AI: specifically housing, education, and health care. **The real risk of AI and inequality is not that AI will cause more inequality, but that we will not allow AI to be used to reduce inequality.**

**AI Risk 5: Will AI cause bad people to do bad things?**

So far I have explained why four of the five most frequently cited AI risks are not actually real: AI will not come alive and kill us, AI will not destroy our society, AI will not cause mass unemployment, and AI will not cause a devastating rise in inequality. But now let's talk about the fifth, and this is the one **I actually agree with: AI will make it easier for bad people to do bad things.**

In a sense, this is a tautology. Technology is a tool. Tools, starting with fire and rocks, can be used to do good things, like cooking food and building houses, and to do bad things, like burning people and bludgeoning them. Any technology can be used for good or ill. Fair enough. And AI will make it easier for criminals, terrorists, and hostile governments to do bad things, no question.

This leads some people to say, well then, let's ban AI before the bad things can happen. Unfortunately, AI is not some esoteric, hard-to-obtain physical substance like plutonium. Quite the opposite: it is made of the most readily available materials in the world, math and code.

The AI cat is obviously already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and outstanding open-source implementations proliferate by the day. AI is like air: it will be everywhere. The level of totalitarian oppression required to stop it would be so draconian (a world government monitoring and controlling every computer? jackbooted cops in black helicopters seizing rogue GPUs?) that we would no longer have a society left to protect.

So there are two very straightforward ways to address the risk of bad people doing bad things with AI, and these are what we should focus on.

**First, we already have laws on the books to criminalize most bad uses of AI.** Hack into the Pentagon? That's a crime. Steal money from a bank? That's a crime. Create a bioweapon? That's a crime. Commit a terrorist act? That's a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we can't. We don't even need new laws; I'm not aware of a single actually proposed bad use of AI that isn't already illegal. And if a new bad use is discovered, we ban that use. QED.

But notice what I slipped in there: I said we should focus first on preventing AI-assisted crimes before they happen. Doesn't such prevention mean banning AI? Well, there is another way to prevent such actions, and that is by using AI as a defensive tool. The same AI that is dangerous in the hands of bad people with bad goals is powerful in the hands of good people with good goals, specifically the good people whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content existed long before AI; the answer is not to ban word processors and Photoshop, or AI, but to use technology to build a system that actually solves the problem.
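
As a miniature illustration of what such a scheme could look like (my sketch, not something the essay specifies; the use of Ed25519 and the Python `cryptography` library here is an assumption), a creator signs content with a private key, and anyone can check it against the creator's published public key:

```python
# Hypothetical sketch of content authentication via digital signatures.
# The essay proposes the idea; this particular scheme is illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A creator generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: attach a signature to a piece of content at creation time.
content = b"Video bytes or article text go here."
signature = private_key.sign(content)

# Verification: anyone holding the public key can check authenticity.
try:
    public_key.verify(signature, content)
    print("Content is authentic and unmodified.")
except InvalidSignature:
    print("Content was altered or did not come from this creator.")
```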

So, **second, let's use AI aggressively for benign, legitimate, and defensive purposes. Let's put AI to work in cyberdefense, in biodefense, in hunting terrorists, and in everything else we do to protect ourselves, our communities, and our nation.**

There are, of course, many smart people inside and outside government already doing exactly this kind of work. But if all the effort and brainpower currently devoted to the futile prospect of banning AI were redirected toward using AI to prevent bad people from doing bad things, I am confident that a world infused with AI would be far safer than the world we live in today.

The Real Risk of Not Pursuing AI with Maximum Force and Speed

There is one final and real AI risk that may be the scariest of all:

AI is not being developed only in the relatively free societies of the West; it is also being developed in China.

China has a very different vision for AI than we do. They are not even secretive about it; they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China: they intend to proliferate it all across the world, wherever they provide 5G networks, wherever they extend Belt and Road loans, wherever they offer friendly consumer apps like TikTok that serve as front ends to their centralized command-and-control AI.

**The single biggest risk of AI is that China wins global AI dominance and we, the US and the West, do not.**

I propose a simple strategy for dealing with this; in fact, it is the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

"We win, they lose."

Rather than allowing unfounded panics about killer AI, harmful AI, job-destroying AI, and inequality-generating AI to put us on the back foot, we in the US and the West should lean into AI as hard as we possibly can.

We should seek to win the global race for AI technological superiority and ensure that China does not.

In the process, we should introduce AI into our economies and societies as quickly and vigorously as possible to maximize its benefits to economic productivity and human potential.

This is the best way to offset the real risks of artificial intelligence and ensure that our way of life is not replaced by China's vision.

**What should we do?**

I came up with a simple plan:

• Big AI companies should be allowed to build AI as fast and aggressively as they can, but they should not be allowed to achieve regulatory capture, and not allowed to establish a government-protected cartel insulated from market competition on the basis of incorrect claims about AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.

**• Startup AI companies should be allowed to build AI as fast and aggressively as they can.** They should neither face government-granted protection of the big companies nor receive government assistance themselves. They should simply be allowed to compete. Even where startups don't succeed, their presence in the market will continuously motivate the big companies to be their best; our economy and society win either way.

**• Open-source AI should be allowed to proliferate freely and compete with both big AI companies and startups.** There should be no regulatory barriers to open source. Even where open source doesn't beat the companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and it ensures that AI is available to everyone who can benefit from it, no matter who they are or how much money they have.

**• To offset the risk of bad people doing bad things with AI, governments, working in partnership with the private sector, should engage vigorously in each area of potential risk, using AI to maximize society's defensive capabilities.** This shouldn't be limited to AI-enabled risks but should extend to more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.

**• To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, ultimately even inside China itself. We win, they lose.**

This is how we can use artificial intelligence to save the world.

It's time to act.

Legends and Heroes

I end with two simple statements.

The development of AI started in the 1940s, simultaneous with the invention of the computer. The first scientific paper on neural networks, the architecture of the AI we have today, was published in 1943. Over the past 80 years, entire generations of AI scientists were born, went to school, worked, and in many cases died without seeing the payoff we are receiving now. They are legends, every one of them.

Today, growing legions of engineers, many of them young, some of whom may have had grandparents or even great-grandparents involved in creating the ideas behind AI, are working to make AI a reality, against a wall of fear-mongering and doomsaying that seeks to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.

**"Over the years, millennialists have often [artificial intelligence risk predictors continually] attempted to predict the exact timing of such future events, often through the interpretation of various signs and precursors. However, historical predictions have almost always failed Ended up [currently there is no credible evidence that AI will kill humans]. **However, their fans [of AI risk predictors] usually try to revise explanations to align with [potential risks in the future of AI] when events occur Corresponding."

Those in the "AI risk" cult may disagree with me, and they may insist that they are rational, science-based, and that I am a brainwashed follower. Note, however, that I am not claiming that "artificial intelligence will never be a threat to humanity". I'm just pointing out that there's no evidence to date to support the "AI will kill us" thesis. Instead of indulging in cult-like panic and reactions, we should make rational assessments based on the evidence available.
