Archive for Programming

Are tomorrow’s engineers ready to face AI’s ethical challenges?

By Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan 

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.

These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and a doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What’s more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master’s students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.

First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal benefits.” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I’m supposed to go to?”

Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I’ll react.”

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off’

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.

Yet ethics training is rarely emphasized in formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in content, amount and how seriously it was presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students’ understanding, but report feeling pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research project share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”

“If I’m paying money to attend ethics class as an engineer, I’m going to be furious,” one said.

These attitudes sometimes extend to how students view engineers’ role in society. One interviewee in our current study, for example, said that an engineer’s “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned about their ethical responsibilities and the public consequences of technology over the course of their degrees. Following them after they left college, we found that their concerns about ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.

Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school were more likely to understand their responsibility to the public in their professional roles and to recognize the need for collective problem solving. Compared with engineers who did not receive such training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.

To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public’s first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

About the Authors:

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

How AI and a popular card game can help engineers predict catastrophic failure – by finding the absence of a pattern

By John Edward McCarthy, Arts & Sciences at Washington University in St. Louis 

Humans are very good at spotting patterns, or repeating features people can recognize. For instance, ancient Polynesians navigated across the Pacific by recognizing many patterns, from the stars’ constellations to more subtle ones such as the directions and sizes of ocean swells.

Very recently, mathematicians like me have started to study large collections of objects that have no patterns of a particular sort. How large can collections be before a specified pattern has to appear somewhere in the collection? Understanding such scenarios can have significant real-world implications: For example, what’s the smallest number of server failures that would lead to the severing of the internet?

Mathematician Jordan Ellenberg at the University of Wisconsin and researchers at Google DeepMind have proposed a novel approach to this problem. Their work uses artificial intelligence to find large collections that don’t contain a specified pattern, which can help us understand some worst-case scenarios.

Can you find a matching set?
Cmglee/Wikimedia Commons, CC BY-SA

Patterns in the card game Set

The idea of patternless collections can be illustrated by a popular card game called Set. In this game, players lay out 12 cards, face up. Each card has a different simple picture on it. They vary in terms of number, color, shape and shading. Each of these four features can have one of three values.

Players race to look for “sets,” which are groups of three cards in which every feature is either the same or different in each card. For instance, cards with one solid red diamond, two solid green diamonds and three solid purple diamonds form a set: All three have different numbers (one, two, three), the same shading (solid), different colors (red, green, purple) and the same shape (diamond).
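For readers who like to see the rule spelled out, the “all the same or all different” check is simple enough to express as a short program. The following is a minimal Python sketch, with card encodings invented purely for illustration (the game itself has no official notation):

```python
from itertools import combinations

# Each card is a tuple of four features: (number, color, shape, shading).
# The values below are illustrative; the real deck uses 1-3 of each feature.
def is_set(card_a, card_b, card_c):
    """Three cards form a set if, for every feature, the three values
    are either all the same or all different."""
    for value_a, value_b, value_c in zip(card_a, card_b, card_c):
        if len({value_a, value_b, value_c}) == 2:  # two alike, one different
            return False
    return True

def find_a_set(cards):
    """Return the first set found among the laid-out cards, or None."""
    for trio in combinations(cards, 3):
        if is_set(*trio):
            return trio
    return None

# The example from the article: three solid diamonds with
# different numbers and different colors.
example = [(1, "red", "diamond", "solid"),
           (2, "green", "diamond", "solid"),
           (3, "purple", "diamond", "solid")]
print(is_set(*example))  # True
```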

Marsha Falco originally created the game Set to help explain her research on population genetics.

Finding a set is usually possible – but not always. If none of the players can find a set from the 12 cards on the table, then they flip over three more cards. But they still might not be able to find a set in these 15 cards. The players continue to flip over cards, three at a time, until someone spots a set.

So what is the maximum number of cards you can lay out without forming a set?

In 1971, mathematician Giuseppe Pellegrino showed that the largest collection of cards without a set is 20. But if you chose 20 cards at random, “no set” would happen only about one in a trillion times. And finding these “no set” collections is an extremely hard problem to solve.

Finding ‘no set’ with AI

If you wanted to find the largest collection of cards with no set, you could in principle do an exhaustive search of every possible collection of cards chosen from the deck of 81 cards. But there are an enormous number of possibilities – on the order of 10²⁴ (that’s a “1” followed by 24 zeros). And if you increase the number of features of the cards from four to, say, eight, the complexity of the problem would overwhelm any computer doing an exhaustive search for “no set” collections.

Mathematicians love to think about computationally difficult problems like this. These complex problems, if approached in the right way, can become tractable.

It’s easier to find best-case scenarios – here, that would mean the fewest cards that could contain a set. But there were few known strategies for exploring bad scenarios – here, that would mean a large collection of cards that does not contain a set.

Ellenberg and his collaborators approached the bad scenario with a type of AI called large language models, or LLMs. The researchers first wrote computer programs that generate some examples of collections of many cards that contain no set. These collections typically have “cards” with more than four features.

Then they fed these programs to the LLM, which soon learned how to write many similar programs. They kept the programs that gave rise to the largest set-free collections and put those through the process again. Iterating in this way, repeatedly tweaking the most successful programs, allowed them to find larger and larger set-free collections.
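The published system is considerably more elaborate, but the keep-the-best loop described above can be sketched roughly as follows. In this illustrative Python sketch, generate_variant stands in for the call to the language model and score_collection stands in for the evaluator that measures how large a set-free collection a program produces; both are hypothetical helpers, not part of any published code.

```python
import random

def evolve_programs(seed_programs, generate_variant, score_collection, rounds=100):
    """Rough sketch of the iterative search described above: score each
    candidate program by the size of the set-free collection it produces,
    keep the best ones, ask the model for variations on them, and repeat.
    generate_variant and score_collection are assumed helper functions."""
    population = list(seed_programs)
    for _ in range(rounds):
        # Rank programs by the size of the set-free collection they produce
        # (an evaluator would return 0 if the collection contains a set).
        ranked = sorted(population, key=score_collection, reverse=True)
        best = ranked[: max(1, len(ranked) // 4)]  # keep the top quarter
        # Ask the language model for new programs similar to the best ones.
        offspring = [generate_variant(random.choice(best))
                     for _ in range(len(population))]
        population = best + offspring
    return max(population, key=score_collection)
```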

This is another version of a ‘no set,’ where no three components of a set are linked by a line.
Romera-Paredes et al./Nature, CC BY-SA

This method allows people to explore disordered collections – in this instance, collections of cards that contain no set – in an entirely new way. It does not guarantee that researchers will find the absolute worst-case scenario, but they will find scenarios that are much worse than a random generation would yield.

Their work can help researchers understand how events might align in a way that leads to catastrophic failure.

For example, how vulnerable is the electrical grid to a malicious attacker who destroys select substations? Suppose that a bad collection of substations is one that doesn’t form a connected grid. The worst-case scenario is now a very large number of substations that, taken all together, still don’t form a connected grid. The number of substations excluded from this collection is the smallest number a malicious actor would need to destroy to deliberately disconnect the grid.

The work of Ellenberg and his collaborators demonstrates yet another way that AI is a very powerful tool. But to solve very complex problems, at least for now, it still needs human ingenuity to guide it.

John Edward McCarthy, Professor of Mathematics, Arts & Sciences at Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Building fairness into AI is crucial – and hard to get right

By Ferdinando Fioretto, University of Virginia 

Artificial intelligence’s capacity to process and analyze vast amounts of data has revolutionized decision-making processes, making operations in health care, finance, criminal justice and other sectors of society more efficient and, in many instances, more effective.

With this transformative power, however, comes a significant responsibility: the need to ensure that these technologies are developed and deployed in a manner that is equitable and just. In short, AI needs to be fair.

The pursuit of fairness in AI is not merely an ethical imperative but a requirement in order to foster trust, inclusivity and the responsible advancement of technology. However, ensuring that AI is fair is a major challenge. And on top of that, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.

Why fairness in AI matters

Fairness in AI has emerged as a critical area of focus for researchers, developers and policymakers. It transcends technical achievement, touching on ethical, social and legal dimensions of the technology.

Ethically, fairness is a cornerstone of building trust and acceptance of AI systems. People need to trust that AI decisions that affect their lives – for example, hiring algorithms – are made equitably. Socially, AI systems that embody fairness can help address and mitigate historical biases – for example, those against women and minorities – fostering inclusivity. Legally, embedding fairness in AI systems helps bring those systems into alignment with anti-discrimination laws and regulations around the world.

Unfairness can stem from two primary sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms processing data that reflects societal prejudices or lacks diversity can perpetuate “like me” biases. These biases favor candidates who are similar to the decision-makers or those already in an organization. When biased data is then used to train a machine learning algorithm to aid a decision-maker, the algorithm can propagate and even amplify these biases.

Why fairness in AI is hard

Fairness is inherently subjective, influenced by cultural, social and personal perspectives. In the context of AI, researchers, developers and policymakers often translate fairness to the idea that algorithms should not perpetuate or exacerbate existing biases or inequalities.

However, measuring fairness and building it into AI systems is fraught with subjective decisions and technical difficulties. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equality of opportunity and individual fairness.

Why the concept of algorithmic fairness is so challenging.

These definitions involve different mathematical formulations and underlying philosophies. They also often conflict, highlighting the difficulty of satisfying all fairness criteria simultaneously in practice.
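As a rough illustration of how two of these definitions can be written down, and how they can disagree, consider the following Python sketch on invented hiring data. In this toy example the two groups end up with the same true positive rate (so equality of opportunity is satisfied) but different selection rates (so demographic parity is not):

```python
# Toy illustration of two fairness definitions on invented hiring data.
# y_true: whether the candidate was actually qualified,
# y_pred: whether the model recommended hiring,
# group:  a protected attribute ("A" or "B").
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(g):
    """Fraction of group g that the model recommends hiring."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(g):
    """Fraction of qualified members of group g that the model recommends."""
    preds = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
    return sum(preds) / len(preds)

# Demographic parity compares selection rates across groups.
print("Demographic parity gap:", abs(selection_rate("A") - selection_rate("B")))
# Equality of opportunity compares true positive rates across groups.
print("Equal opportunity gap:", abs(true_positive_rate("A") - true_positive_rate("B")))
```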

In addition, fairness cannot be distilled into a single metric or guideline. It encompasses a spectrum of considerations including, but not limited to, equality of opportunity, treatment and impact.

Unintended effects on fairness

The multifaceted nature of fairness means that AI systems must be scrutinized at every level of their development cycle, from the initial design and data collection phases to their final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity. AI systems are seldom deployed in isolation. They are used as part of often complex and important decision-making processes, such as making recommendations about hiring or allocating funds and resources, and are subject to many constraints, including security and privacy.

Research my colleagues and I conducted shows that constraints such as computational resources, hardware types and privacy can significantly influence the fairness of AI systems. For instance, the need for computational efficiency can lead to simplifications that inadvertently overlook or misrepresent marginalized groups.

In our study on network pruning – a method to make complex machine learning models smaller and faster – we found that this process can unfairly affect certain groups. This happens because the pruning might not consider how different groups are represented in the data and by the model, leading to biased outcomes.

Similarly, privacy-preserving techniques, while crucial, can obscure the data necessary to identify and mitigate biases, or disproportionately affect the outcomes for minorities. For example, when statistical agencies add noise to data to protect privacy, this can lead to unfair resource allocation because the added noise affects some groups more than others. This disproportionality can also skew decision-making processes that rely on this data, such as resource allocation for public services.
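To see why equal noise can have unequal consequences, here is a minimal Python sketch. It assumes the Laplace mechanism commonly used in differential privacy and invented population counts; the point is only that the same noise scale produces a much larger relative error for the small group:

```python
import random

def laplace_noise(scale):
    """The difference of two independent exponential draws with mean
    `scale` follows a Laplace(0, scale) distribution."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

random.seed(0)
epsilon = 0.5          # privacy budget: smaller epsilon means more noise
scale = 1.0 / epsilon  # Laplace scale for a counting query with sensitivity 1

# Invented counts: one large group and one small group in the same dataset.
true_counts = {"large group": 10_000, "small group": 40}
for name, count in true_counts.items():
    noisy = count + laplace_noise(scale)
    relative_error = abs(noisy - count) / count
    print(f"{name}: true={count}, noisy={noisy:.1f}, "
          f"relative error={relative_error:.2%}")
```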

These constraints do not operate in isolation but intersect in ways that compound their impact on fairness. For instance, when privacy measures exacerbate biases in data, they can further amplify existing inequalities. This makes it important to have a comprehensive understanding of, and approach to, both privacy and fairness in AI development.

The path forward

Making AI fair is not straightforward, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation and collaboration. Given that bias is pervasive in society, I believe that people working in the AI field should recognize that it’s not possible to achieve perfect fairness and instead strive for continuous improvement.

This challenge requires a commitment to rigorous research, thoughtful policymaking and ethical practice. To make it work, researchers, developers and users of AI will need to ensure that considerations of fairness are woven into all aspects of the AI pipeline, from its conception through data collection and algorithm design to deployment and beyond.

About the Author:

Ferdinando Fioretto, Assistant Professor of Computer Science, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Bringing AI up to speed – autonomous auto racing promises safer driverless cars on the road

By Madhur Behl, University of Virginia 

The excitement of auto racing comes from split-second decisions and daring passes by fearless drivers. Imagine that scene, but without the driver – the car alone, guided by the invisible hand of artificial intelligence. Can the rush of racing unfold without a driver steering the course? It turns out that it can.

Enter autonomous racing, a field that’s not just about high-speed competition but also pushing the boundaries of what autonomous vehicles can achieve and improving their safety.

Over a century ago, at the dawn of automobiles, as society shifted from horse-drawn to motor-powered vehicles, there was public doubt about the safety and reliability of the new technology. Motorsport racing was organized to showcase the technological performance and safety of these horseless carriages. Similarly, autonomous racing is the modern arena to prove the reliability of autonomous vehicle technology as driverless cars begin to hit the streets.

Autonomous racing’s high-speed trials mirror the real-world challenges that autonomous vehicles face on streets: adjusting to unexpected changes and reacting in fractions of a second. Mastering these challenges on the track, where speeds are higher and reaction times shorter, leads to safer autonomous vehicles on the road.

Autonomous race cars pass, or ‘overtake,’ others on the Las Vegas Motor Speedway track.

I am a computer science professor who studies artificial intelligence, robotics and autonomous vehicles, and I lead the Cavalier Autonomous Racing team at the University of Virginia. The team competes in the Indy Autonomous Challenge, a global contest where universities pit fully autonomous Indy race cars against each other. Since its 2021 inception, the event has drawn top international teams to prestigious circuits like the Indianapolis Motor Speedway. The field, marked by both rivalry and teamwork, shows that collective problem-solving drives advances in autonomous vehicle safety.

At the Indy Autonomous Challenge passing competition held at the 2024 Consumer Electronics Show in Las Vegas in January 2024, our Cavalier team clinched second place and hit speeds of 143 mph (230 kilometers per hour) while autonomously overtaking another race car, affirming its status as a leading American team. TUM Autonomous Motorsport from the Technical University of Munich won the event.

An autonomous race car built by the Technical University of Munich prepares to pass the University of Virginia’s entrant.
Cavalier Autonomous Racing, University of Virginia, CC BY-ND

Pint-size beginnings

The field of autonomous racing didn’t begin with race cars on professional race tracks but with miniature cars at robotics conferences. In 2015, my colleagues and I engineered a 1/10 scale autonomous race car. We transformed a remote-controlled car into a small but powerful research and educational tool, which I named F1tenth, playing on the name of the traditional Formula One, or F1, race car. The F1tenth platform is now used by over 70 institutions worldwide to construct their miniaturized autonomous racers.

The F1tenth Autonomous Racing Grand Prix is now a marquee event at robotics conferences where teams from across the planet gather, each wielding vehicles that are identical in hardware and sensors, to engage in what is essentially an intense “battle of algorithms.” Victory on the track is claimed not by raw power but by the advanced AI algorithms’ control of the cars.

These race cars are small, but the challenges to autonomous driving are sizable.

F1tenth has also emerged as an engaging and accessible gateway for students to delve into robotics research. Over the years, I’ve reached thousands of students through my courses and online lecture series, which explain how to build, drive and autonomously race these vehicles.

Getting real

Today, the scope of our research has expanded significantly, advancing from small-scale models to actual autonomous Indy cars that compete at speeds of upward of 150 mph (241 kph), executing complex overtaking maneuvers with other autonomous vehicles on the racetrack. The cars are built on a modified version of the Indy NXT chassis and are outfitted with sensors and controllers to allow autonomous driving. Indy NXT race cars are used in professional racing and are slightly smaller versions of the Indy cars made famous by the Indianapolis 500.

The Cavalier Autonomous Racing team stands behind their driverless race car.
Cavalier Autonomous Racing, University of Virginia, CC BY-ND

The gritty reality of racing these advanced machines on real racetracks pushes the boundaries of what autonomous vehicles can do. Autonomous racing takes the challenges of robotics and AI to new levels, requiring researchers to refine our understanding of how machines perceive their environment, make safe decisions and control complex maneuvers at a high speed where traditional methods begin to falter.

Precision is critical, and the margin for error in steering and acceleration is razor-thin, requiring a sophisticated grasp and exact mathematical description of the car’s movement, aerodynamics and drivetrain system. In addition, autonomous racing researchers create algorithms that use data from cameras, radar and lidar, which is like radar but with lasers instead of radio waves, to steer around competitors and safely navigate the high-speed and unpredictable racing environment.

My team has shared the world’s first open dataset for autonomous racing, inviting researchers everywhere to join in refining the algorithms that could help define the future of autonomous vehicles.

The data from the competitions is available for other researchers to use.

Crucible for autonomous vehicles

More than just a technological showcase, autonomous racing is a critical research frontier. When autonomous systems can reliably function in these extreme conditions, they inherently possess a buffer when operating in the ordinary conditions of street traffic.

Autonomous racing is a testbed where competition spurs innovation, collaboration fosters growth, and AI-controlled cars racing to the finish line chart a course toward safer autonomous vehicles.

About the Author:

Madhur Behl, Associate Professor of Robotics and Artificial Intelligence, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Why AI can’t replace air traffic controllers

By Amy Pritchett, Penn State 

After hours of routine operations, an air traffic controller gets a radio call from a small aircraft whose cockpit indicators can’t confirm that the plane’s landing gear is extended for landing. The controller arranges for the pilot to fly low by the tower so the controller can visually check the plane’s landing gear. All appears well. “It looks like your gear is down,” the controller tells the pilot.

The controller calls for the airport fire trucks to be ready just in case, and the aircraft circles back to land safely. Scenarios like this play out regularly. In the air traffic control system, everything must meet the highest levels of safety, but not everything goes according to plan.

Contrast this with the still science-fiction vision of future artificial intelligence “pilots” flying autonomous aircraft, complete with an autonomous air traffic control system handling aircraft as easily as routers shuttling data packets on the internet.

I’m an aerospace engineer who led a National Academies study ordered by Congress about air traffic controller staffing. Researchers are continually working on new technologies that automate elements of the air traffic control system, but technology can execute only those functions that are planned for during its design and so can’t modify standard procedures. As the scenario above illustrates, humans are likely to remain a necessary central component of air traffic control for a long time to come.

What air traffic controllers do

The Federal Aviation Administration’s fundamental guidance for the responsibility of air traffic controllers states: “The primary purpose of the air traffic control system is to prevent a collision involving aircraft.” Air traffic controllers are also charged with providing “a safe, orderly and expeditious flow of air traffic” and other services supporting safety, such as helping pilots avoid mountains and other hazardous terrain and hazardous weather, to the extent they can.

Air traffic controllers’ jobs vary. Tower controllers provide the local control that clears aircraft to take off and land, making sure that they are spaced safely apart. They also provide ground control, directing aircraft to taxi and notifying pilots of flight plans and potential safety concerns on that day before flight. Tower controllers are aided by some displays but mostly look outside from the towers and talk with pilots via radio. At larger airports staffed by FAA controllers, surface surveillance displays show controllers the aircraft and other vehicles on the ground on the airfield.

This FAA animation explains the three basic components of the U.S. air traffic control system.

Approach and en route controllers, on the other hand, sit in front of large displays in dark and quiet rooms. They communicate with pilots via radio. Their displays show aircraft locations on a map view with key features of the airspace boundaries and routes.

The 21 en route control centers in the U.S. manage traffic that is between and above airports and thus typically flying at higher speeds and altitudes.

Controllers at approach control facilities transition departing aircraft from local control after takeoff up and into en route airspace. They similarly take arriving aircraft from en route airspace, line them up with the landing approach and hand them off to tower controllers.

A controller at each display manages all the traffic within a sector. Sectors can vary in size from a few cubic miles, focused on sequencing aircraft landing at a busy airport, to en route sectors spanning more than 30,000 cubic miles (125,045 cubic km) where and when there are few aircraft flying. If a sector gets busy, a second and even third controller might assist, or the sector might be split into two, with another display and controller team managing the second.

How technology can help

Air traffic controllers have a stressful job and are subject to fatigue and information overload. Public concern about a growing number of close calls has put a spotlight on aging technology and on staffing shortages that have led to air traffic controllers working mandatory overtime. New technologies can help alleviate those issues.

The air traffic control system is incorporating new technologies in several ways. The FAA’s NextGen air transportation system initiative is providing controllers with more – and more accurate – information.

Controllers’ displays originally showed only radar tracking. They now can tap into all the data known about each flight within the en route automation modernization system. This system integrates radar, automatic position reports from aircraft via automatic dependent surveillance-broadcast, weather reports, flight plans and flight histories.

Systems help alert controllers to potential conflicts between aircraft, or aircraft that are too close to high ground or structures, and provide suggestions to controllers to sequence aircraft into smooth traffic flows. In testimony to the U.S. Senate on Nov. 9, 2023, about airport safety, FAA Chief Operating Officer Timothy Arel said that the administration is developing or improving several air traffic control systems.

Researchers are using machine learning to analyze and predict aspects of air traffic and air traffic control, including air traffic flow between cities and air traffic controller behavior.

How technology can complicate matters

New technology can also cause profound changes to air traffic control in the form of new types of aircraft. For example, current regulations mostly limit uncrewed aircraft to fly lower than 400 feet (122 meters) above ground and away from airports. These are drones used by first responders, news organizations, surveyors, delivery services and hobbyists.

NASA and the FAA are leading the development of a traffic control system for drones and other uncrewed aircraft.

However, some emerging uncrewed aircraft companies are proposing to fly in controlled airspace. Some plan to have their aircraft fly regular flight routes and interact normally with air traffic controllers via voice radio. These include Reliable Robotics and Xwing, which are separately working to automate the Cessna Caravan, a small cargo airplane.

Others are targeting new business models, such as advanced air mobility, the concept of small, highly automated electric aircraft – electric air taxis, for example. These would require dramatically different routes and procedures for handling air traffic.

Expect the unexpected

An air traffic controller’s routine can be disrupted by an aircraft that requires special handling. This could range from an emergency to priority handling of medical flights or Air Force One. Controllers are given the responsibility and the flexibility to adapt how they manage their airspace.

The requirements for the front line of air traffic control are a poor match for AI’s capabilities. People expect air traffic to continue to be the safest complex, high-technology system ever. It achieves this standard by adhering to procedures when practical, which is something AI can do, and by adapting and exercising good judgment whenever something unplanned occurs or a new operation is implemented – a notable weakness of today’s AI.

Indeed, it is when conditions are the worst – when controllers figure out how to handle aircraft with severe problems, airport crises or widespread airspace closures due to security concerns or infrastructure failures – that controllers’ contributions to safety are the greatest.

Also, controllers don’t fly the aircraft. They communicate and interact with others to guide the aircraft, and so their responsibility is fundamentally to serve as part of a team – another notable weakness of AI.

As an engineer and designer, I’m most excited about the potential for AI to analyze the big data records of past air traffic operations in pursuit of, for example, more efficient routes of flight. However, as a pilot, I’m glad to hear a controller’s calm voice on the radio helping me land quickly and safely should I have a problem.

About the Author:

Amy Pritchett, Professor of Aerospace Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Combining two types of molecular boron nitride could create a hybrid material used in faster, more powerful electronics

By Pulickel Ajayan, Rice University and Abhijit Biswas, Rice University 

In chemistry, structure is everything. Compounds with the same chemical formula can have different properties depending on the arrangement of the molecules they’re made of. And compounds with a different chemical formula but a similar molecular arrangement can have similar properties.

Graphene and a form of boron nitride called hexagonal boron nitride fall into the latter group. Graphene is made up of carbon atoms. Boron nitride, BN, is composed of boron and nitrogen atoms. While their chemical formulas differ, they have a similar structure – so similar that many chemists call hexagonal boron nitride “white graphene.”

Carbon-based graphene has lots of useful properties. It’s thin but strong, and it conducts heat and electricity very well, making it ideal for use in electronics.

Similarly, hexagonal boron nitride has a host of properties similar to graphene that could improve biomedical imaging and drug delivery, as well as computers, smartphones and LEDs. Researchers have studied this type of boron nitride for many years.

But hexagonal boron nitride isn’t the only useful form this compound comes in.

As materials engineers, our research team has been investigating another type of boron nitride called cubic boron nitride. We want to know if combining the properties of hexagonal boron nitride with cubic boron nitride could open the door to even more useful applications.

Cubic boron nitride, shown on the left, and hexagonal boron nitride, shown on the right.
Oddball/Wikimedia Commons, CC BY-NC-SA

Hexagonal versus cubic

Hexagonal boron nitride is, as you might guess, boron nitride molecules arranged in the shape of a flat hexagon. It looks honeycomb-shaped, like graphene. Cubic boron nitride has a three-dimensional lattice structure and looks like a diamond at the molecular level.

H-BN is thin, soft and used in cosmetics to give them a silky texture. It doesn’t melt or degrade even under extreme heat, which also makes it useful in electronics and other applications. Some scientists predict it could be used to build a radiation shield for spacecraft.

C-BN is hard and resistant. It’s used in manufacturing to make cutting tools and drills, and it can keep its sharp edge even at high temperatures. It can also help dissipate heat in electronics.

Even though h-BN and c-BN might seem different, when put together, our research has found they hold even more potential than either on its own.

The two forms of boron nitride have some similarities and some differences, but when combined, they can create a substance with a variety of scientific applications.
Abhijit Biswas

Both types of boron nitride conduct heat and can provide electrical insulation, but one, h-BN, is soft, and the other, c-BN, is hard. So, we wanted to see if they could be used together to create materials with interesting properties.

For example, combining their different behaviors could make a coating material effective for high temperature structural applications. C-BN could provide strong adhesion to a surface, while h-BN’s lubricating properties could resist wear and tear. Both together would keep the material from overheating.

Making boron nitride

This class of materials doesn’t occur naturally, so scientists must make it in the lab. In general, high-quality c-BN has been difficult to synthesize, whereas h-BN is relatively easier to make as high-quality films, using what are called vapor phase deposition methods.

In vapor phase deposition, we heat up boron and nitrogen-containing materials until they evaporate. The evaporated molecules then get deposited onto a surface, cool down, bond together and form a thin film of BN.

Our research team has worked on combining h-BN and c-BN using similar processes to vapor phase deposition, but we can also mix powders of the two together. The idea is to build a material with the right mix of h-BN and c-BN for thermal, mechanical and electronic properties that we can fine-tune.

Our team has found the composite substance made from combining both forms of BN together has a variety of potential applications. When you point a laser beam at the substance, it flashes brightly. Researchers could use this property to create display screens and improve radiation therapies in the medical field.

We’ve also found we can tailor how heat-conductive the composite material is. This means engineers could use this BN composite in machines that manage heat. The next step is trying to manufacture large plates made of an h-BN and c-BN composite. If done precisely, this would let us tailor the mechanical, thermal and optical properties to specific applications.

In electronics, h-BN could act as a dielectric – or insulator – alongside graphene in certain, low-power electronics. As a dielectric, h-BN would help electronics operate efficiently and keep their charge.

C-BN could work alongside diamond to create ultrawide band gap materials that allow electronic devices to work at a much higher power. Diamond and c-BN both conduct heat well, and together they could help cool down these high-power devices, which generate lots of extra heat.

H-BN and c-BN separately could lead to electronics that perform exceptionally well in different contexts – together, they have a host of potential applications, as well.

Our BN composite could improve heat spreaders and insulators, and it could work in energy storage machines like supercapacitors, which are fast-charging energy storage devices, and rechargeable batteries.

We’ll continue studying BN’s properties, and how we can use it in lubricants, coatings and wear-resistant surfaces. Developing ways to scale up production will be key for exploring its applications, from materials science to electronics and even environmental science.

About the Authors:

Pulickel Ajayan, Professor of Materials Science and NanoEngineering, Rice University and Abhijit Biswas, Research Scientist in Materials Science and Nanoengineering, Rice University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

From besting Tetris AI to epic speedruns – inside gaming’s most thrilling feats

By James Dawes, Macalester College 

After 13-year-old Willis Gibson became the first human to beat the original Nintendo version of Tetris, he dedicated his special win to his father, who passed away in December 2023.

The Oklahoma teen beat the game by defeating level after level until he reached the “kill screen” – that is, the moment when the Tetris artificial intelligence taps out in exhaustion, stopping play because its designers never wrote the code to advance further. Before Gibson, the only other player to overcome the game’s AI was another AI.

For any parent who has despaired over their children sinking countless hours into video games, Gibson’s victory over the cruel geometry of Tetris stands as a bracing corrective.

Despite the stereotypes, most gamers are anything but lazy. And they’re anything but mindless.

The world’s top players can sometimes serve as reminders of the best in us, with memorable achievements that range from the heroic to the inscrutably weird.

The perfect run

“Speedrunning” is a popular gaming subculture in which players meticulously optimize routes and exploit glitches to complete, in a matter of minutes, games that normally take hours, from the tightly constrained, run-and-gun action game Cuphead to the sprawling role-playing epic Baldur’s Gate 3.

In top-level competition, speedrunners strive to match the time of what’s referred to as a “TAS,” or “tool-assisted speed run.” To figure out the TAS time, players use game emulators to choreograph a theoretically perfect playthrough, advancing the game one frame at a time to determine the fastest possible time.

Success requires punishing precision, flawless execution and years of training.

The major speedrunning milestones are, like Olympic races, marked by mere fractions of a second. The urge to speedrun likely sprouts from an innate human longing for perfection – and a uniquely 21st century compulsion to best the robots.

A Twitch streamer who goes by the username Niftski is currently the human who has come closest to achieving this androidlike perfection. His 4-minute, 54.631-second world-record speedrun of Super Mario Bros. – achieved in September 2023 – is just 0.35 seconds shy of a flawless TAS.

Watching Niftski’s now-famous run is a dissonant experience. Goofy, retro, 8-bit Mario jumps imperturbably over goombas and koopa troopas with the iconic, cheerful “boink” sound of his hop.

Meanwhile, Niftski pants as his anxiety builds, his heart rate – tracked on screen during the livestream – peaking at 188 beats per minute.

When Mario bounces over the final big turtle at the finish line – “boink” – Niftski erupts into screams of shock and repeated cries of “Oh my God!”

He hyperventilates, struggles for oxygen and finally sobs from exhaustion and joy.

Twitch streamer Niftski’s record speedrun of Super Mario Bros. missed perfection by 0.35 seconds.

The largest world and its longest pig ride

This list couldn’t be complete without an achievement from Minecraft, the revolutionary video game that has become the second-best-selling title in history, with over 300 million copies sold – second only to Tetris’ 520 million units.

Minecraft populates the video game libraries of grade-schoolers and has been used as an educational tool in university classrooms. Even the British Museum has held an exhibition devoted to the game.

Minecraft is known as a sandbox game, which means that gamers can create and explore their own virtual worlds, limited only by their imagination and a few simple tools and resources – like buckets and sand, or, in the case of Minecraft, pickaxes and stone.

So what can you do in the Minecraft playground?

Well, you can ride on a pig. The Guinness Book of World Records marks the farthest distance at 414 miles. Or you can collect sunflowers. The world record for that is 89 in one minute. Or you can dig a tunnel – but you’ll need to make it 100,001 blocks long to edge out the current record.

My personal favorite is a collective, ongoing effort: a sprawling, global collaboration to recreate the world on a 1:1 scale using Minecraft blocks, with blocks counting as one cubic meter.

At their best, sandbox games like Minecraft can bring people closer to the joyful and healthily pointless play of childhood – a restorative escape from the anxious, utility-driven planning that dominates so much of adulthood.

Popular YouTuber MrBeast contributes to ‘Build the Earth’ by constructing a Minecraft replica of Raleigh, N.C.

The galaxy’s greatest collaboration

The Halo 3 gaming community participated in a bloodier version of the collective effort of Minecraft players.

The game, which pits humans against an alien alliance known as the Covenant, was released in 2007 to much fanfare.

Whether they were playing the single-player campaign mode or the online multiplayer mode, gamers around the world started seeing themselves as imaginary participants in a global cause to save humanity – in what came to be known as the “Great War.”

They organized round-the-clock campaign shifts, while sharing strategies in nearly 6,000 Halo wiki articles and 21 million online discussion posts.

Halo developer Bungie started tracking total alien deaths by all players, with the 10 billion milestone reached in April 2009.

Game designer Jane McGonigal recalls with awe the community effort that went into that Great War, citing it as a transcendent example of the fundamental human desire to work together and to become a part of something bigger than the self.

Bungie maintained a collective history of the Great War in the form of “personal service records” that memorialized each player’s contributions – medals, battle statistics, campaign maps and more.

The archive beggars comprehension: According to Bungie, its servers handled 1.4 petabytes of data requests by players in one nine-month stretch. McGonigal notes, by way of comparison, that everything ever written by humans in all of recorded history amounts to 50 petabytes of data.

Gamification versus gameful design

If you’re mystified by the behavior of these gamers, you’re not alone.

Over the past decade, researchers across a range of fields have marveled at the dedication of gamers like Gibson and Niftski, who commit themselves without complaint to what some might see as punishing, pointless and physically grueling labor.

How could this level of dedication be applied to more “productive” endeavors, they wondered, like education, taxes or exercise?

From this research, an industry centered on the “gamification” of work, life and learning emerged. It giddily promised to change people’s behaviors through the use of extrinsic motivators borrowed from the gaming community: badges, achievements, community scorekeeping.

The concept caught fire, spreading everywhere from early childhood education to the fast-food industry.

Many game designers have reacted to this trend like Robert Oppenheimer at the close of the eponymous movie – aghast that their beautiful work was used, for instance, to pressure Disneyland Resort laborers to load laundry and press linens at anxiously hectic speeds.

Arguing that the gamification trend misses entirely the magic of gaming, game designers have instead started promoting the concept of “gameful design.” Where gamification focuses on useful outcomes, gameful design focuses on fulfilling experiences.

Gameful design prioritizes intrinsic motivation over extrinsic incentives. It embraces design elements that promote social connection, creativity, a sense of autonomy – and, ultimately, the sheer joy of mastery.

When I think of Niftski’s meltdown after his record speedrun – and of Gibson, who also began hyperventilating in shock and almost passed out – I think of my own children.

I wish for them such moments of ecstatic, prideful accomplishment in a world that sometimes seems starved of joy.

About the Author:

James Dawes, Professor of English, Macalester College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024

By Anjana Susarla, Michigan State University; Casey Fiesler, University of Colorado Boulder, and Kentaro Toyama, University of Michigan 

2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.

We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.


Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.

I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Many of the challenges in the year ahead have to do with problems of AI that society is already facing.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With the singularity, the moment artificial intelligence matches and begins to exceed human intelligence – not quite here yet – it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.

Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes, meaning AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.


Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms by agencies like the FTC and by lawmakers working on privacy protections such as the American Data Privacy and Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come not to focus on algorithms as pieces of technology but to consider the contexts in which they operate: people, processes and society.

About the Authors:

Anjana Susarla, Professor of Information Systems, Michigan State University; Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder, and Kentaro Toyama, Professor of Community Information, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers

By Daniel Lidar, University of Southern California 

Quantum advantage is the milestone the field of quantum computing is fervently working toward, where a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws applies. Quantum computers take advantage of these strange behaviors to solve problems.

There are some types of problems that are impractical for classical computers to solve, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.

I am a physicist who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.

The source of quantum computing’s power

Central to quantum computing is the quantum bit, or qubit. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a quantum superposition. With every additional qubit, the number of states that can be represented by the qubits doubles.
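To make that doubling concrete, here is a minimal sketch in Python with NumPy (my own illustration; the article itself mentions neither): an n-qubit register is described by 2^n complex amplitudes, so each added qubit doubles the length of the list of numbers a classical simulation has to keep track of.

import numpy as np

# A single qubit is a length-2 list of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Here, an equal superposition of 0 and 1:
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Combining qubits takes the tensor (Kronecker) product, so every added
# qubit doubles the number of amplitudes: n qubits need 2**n numbers.
register = qubit
for _ in range(2):            # add two more qubits, three in total
    register = np.kron(register, qubit)

print(len(register))                                   # 8, i.e. 2**3
print(np.isclose(np.sum(np.abs(register) ** 2), 1.0))  # still normalized: True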

This exponential growth in the number of representable states is often mistaken for the source of the power of quantum computing. Instead, that power comes down to an intricate interplay of superposition, interference and entanglement.

Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves – like sound waves or ocean waves – combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.
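As a toy version of that cancellation (again my own NumPy sketch, not something from the article), applying the Hadamard operation twice sends a qubit from 0 into an equal superposition and back: on the second step the two paths that lead to 1 interfere destructively and vanish, while the paths that lead to 0 reinforce each other.

import numpy as np

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard operation

state = np.array([1, 0], dtype=complex)   # start in state 0

state = H @ state   # equal superposition: both amplitudes about 0.707
state = H @ state   # the two contributions to state 1 cancel (destructive),
                    # the contributions to state 0 add up (constructive)

print(np.round(state, 6))   # [1.+0.j 0.+0.j] -- back to 0 with certainty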

Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.
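A concrete example of such a correlation, offered purely as an illustrative sketch rather than anything from the article, is the simplest entangled state of two qubits, a Bell pair: measurements of the two qubits always agree, even though neither qubit on its own has a definite value.

import numpy as np

# Two-qubit state vector with amplitudes ordered as 00, 01, 10, 11.
# The Bell pair (00 + 11) / sqrt(2):
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(bell) ** 2
print(probabilities)   # [0.5 0.  0.  0.5]

# Sampling measurement outcomes: only "00" and "11" ever appear, so the
# two results are perfectly correlated no matter how far apart the
# qubits are when they are measured.
rng = np.random.default_rng(0)
print(rng.choice(["00", "01", "10", "11"], size=8, p=probabilities))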

The ones and zeros – and everything in between – of quantum computing.

Applications of quantum computing

Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the potential to decipher current encryption algorithms, such as the widely used RSA scheme.

One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of post-quantum cryptography. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.
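As a rough illustration of what is at stake, the toy example below (my own, with absurdly small numbers) builds an RSA-style key pair. Everything secret follows from the two prime factors of the public modulus, which is why an efficient factoring method, such as Shor’s algorithm running on a large, error-corrected quantum computer, would break the scheme.

# Toy RSA with tiny primes -- purely illustrative; real keys use primes
# hundreds of digits long.
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # depends on the secret factors p and q
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(recovered == message)        # True

# Anyone who can factor n back into p and q can recompute phi and d and
# read every message -- exactly the step a quantum computer is expected
# to make feasible for realistically sized keys.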

In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman envisioned this possibility more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties.

Another use of quantum information technology is quantum sensing: detecting and measuring physical properties like electromagnetic energy, gravity, pressure and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as environmental monitoring, geological exploration, medical imaging and surveillance.

Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks – including those using quantum computers.
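One well-known quantum key distribution protocol, not named in the article, is BB84. Purely as an illustration, the classical simulation below mimics its “sifting” step, in which the two parties keep only the bits where their randomly chosen measurement bases happen to match; the actual security comes from quantum mechanics, since an eavesdropper who measures the transmitted qubits unavoidably disturbs them, which a plain classical sketch like this cannot capture.

import numpy as np

rng = np.random.default_rng(0)
n = 16

alice_bits = rng.integers(0, 2, n)    # Alice's random raw key bits
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear, 1 = diagonal basis
bob_bases = rng.integers(0, 2, n)     # Bob picks his own random bases

# When the bases match, Bob reads Alice's bit; otherwise his outcome is random.
bob_bits = np.where(bob_bases == alice_bases,
                    alice_bits,
                    rng.integers(0, 2, n))

# They publicly compare bases (never the bits) and keep matching positions.
keep = alice_bases == bob_bases
print(alice_bits[keep])   # shared key material
print(bob_bits[keep])     # identical to Alice's kept bits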

Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage – in particular in machine learning – remains a critical area of ongoing research.

A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves.
Guoqing Wang, CC BY-NC-ND

Staying coherent and overcoming errors

The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits.

Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, an area my own research is focused on.
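Genuine quantum error correction is subtler than classical redundancy, since unknown quantum states cannot simply be copied and errors must be diagnosed through indirect “syndrome” measurements. Still, the basic idea of spreading information over several noisy carriers can be sketched with the classical three-bit repetition code below, a toy illustration of the redundancy principle rather than an actual quantum error-correcting code.

import numpy as np

rng = np.random.default_rng(1)

def encode(bit):
    # Repetition code: store one logical bit in three physical bits.
    return np.array([bit, bit, bit])

def noisy_channel(bits, flip_prob=0.1):
    # Each physical bit flips independently with probability flip_prob.
    flips = (rng.random(bits.shape) < flip_prob).astype(int)
    return bits ^ flips

def decode(bits):
    # Majority vote corrects any single flipped bit.
    return int(bits.sum() >= 2)

logical = 1
received = noisy_channel(encode(logical))
print(received)                      # e.g. [1 1 1] or [1 0 1]
print(decode(received) == logical)   # True unless two or more bits flipped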

In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.

Quantum advantage coming into view

Quantum computing may one day be as disruptive as the arrival of generative AI. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. Researchers at Google and later a team of researchers in China demonstrated quantum advantage for generating a list of random numbers with certain properties. My research team demonstrated a quantum speed-up for a random number guessing game.

On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.

While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.

About the Author:

Daniel Lidar, Professor of Electrical Engineering, Chemistry, and Physics & Astronomy, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Amazon’s AI move – why you need AI investments as race speeds up

By George Prior

Amazon’s $4bn investment in a ChatGPT rival reinforces why almost all investors should have some artificial intelligence (AI) exposure in their investment mix, says the CEO of one of the world’s largest independent financial advisory, asset management and fintech organizations.

The comments from Nigel Green of deVere Group come as e-commerce giant Amazon said on Monday it will invest $4 billion in Anthropic and take a minority ownership position. Anthropic was founded by former OpenAI (the company behind ChatGPT) executives, and recently debuted its new AI chatbot named Claude 2.

He says: “This move highlights how the big tech titan is stepping up its rivalry with other giants Microsoft, Google and Nvidia in the AI space.

“The AI Race is on, with the big tech firms racing to lead in the development, deployment, and utilisation of artificial intelligence technologies.

“AI is going to reshape whole industries and fuel innovation – and this makes it crucial for investors to pay attention and why almost all investors need exposure to AI investments in their portfolios.”

While it seems that the AI hype is everywhere now, we are still very early in the AI era.  Investors, says the deVere CEO, should act now to have the ‘early advantage’.

“Getting in early allows investors to establish a competitive advantage over latecomers. They can secure favourable entry points and lower purchase prices, maximizing their potential profits.

“This tech has the potential to disrupt existing industries or create entirely new ones. Early investors are likely to benefit from the exponential growth that often accompanies the adoption of such technologies. As these innovations gain traction, their valuations could skyrocket, resulting in significant returns on investment,” he notes.

While AI is The Big Story currently, investors should, as always, remain diversified across asset classes, sectors and regions in order to maximise returns per unit of risk (volatility) incurred.

Diversification remains investors’ best tool for long-term financial success. As a strategy it has been proven to reduce risk, smooth out volatility, exploit differing market conditions, maximise long-term returns and protect against unforeseen external events.

Of the latest Amazon investment, Nigel Green concludes: “AI is not just another technology trend; it is a game-changer. Investors need to pay attention and include it as part of their mix.”

About:

deVere Group is one of the world’s largest independent advisors of specialist global financial solutions to international, local mass affluent, and high-net-worth clients.  It has a network of offices across the world, over 80,000 clients and $12bn under advisement.