We will find out whether it is the torture kind or not next year.
It has been a little more than a year since ChatGPT first appeared on the scene, and OpenAI’s programme is somehow even more ubiquitous than it was in February. Our capacity to manage generative artificial intelligence, and to ameliorate the very real problems it creates, continues to lag far behind the technology itself. In light of this, 2024 could prove a defining year for generative AI in particular and machine learning in general. Will AI keep demonstrating that it is a genuine revolution in human-computer interaction, comparable to the advent of the mouse in 1963? Or are we headed for yet another overhyped technical dead end, like 3D televisions? Here, we examine the impact that OpenAI and its chatbot had on consumer electronics in 2023, and where they could lead the market in the coming year.
All things considered, OpenAI had a fantastic year; “meteoric” barely does its rise justice. The firm released ChatGPT to the public on November 30, 2022. Within five days, the programme had surpassed one million users; by the beginning of January, one hundred million people were logging on to use it each month. Facebook needed four and a half years to reach that level of engagement, making ChatGPT the most rapidly adopted programme in the history of the internet, outpacing both TikTok and Instagram. Whether OpenAI, backed by billions of dollars from Microsoft, can hold its position at the vanguard of the generative AI business through 2024, as billions more pour into its competitors’ research and development coffers, remains an open question.
The company’s unexpected success this year has thrust its CEO, Sam Altman, into the media spotlight. Altman, 38, who formerly led Y Combinator, is now basking in much of the adulation once reserved for Elon Musk. For a while, Altman was everywhere: appearing before Congressional committees on multiple occasions, attending the Senate’s AI safety summits, and embarking on a sixteen-city world tour, with stops in Israel, India, Japan, Nigeria, South Korea and the United Arab Emirates, to promote ChatGPT to developers and policymakers.
i’m doing a trip in may/june to talk to openai users and developers (and people interested in AI generally). please come hang out and share feature requests and other feedback!

more detail here: https://t.co/lp9WkI811R or email [email protected]

— Sam Altman (@sama) March 29, 2023
Even his ouster by OpenAI’s board of directors in November proved, in the grand scheme of things, a favourable development. Altman was fired on a Friday, sparking a 72-hour panic in Silicon Valley: multiple OpenAI leaders resigned in solidarity, roughly 95 percent of rank-and-file staff threatened to walk out unless he was reinstated, two interim CEOs were installed and removed within days, and Microsoft ultimately intervened in a roundabout way. In the end, Altman remains Chief Executive Officer of OpenAI, now with a far more cooperative and accommodating board, and the industry has tacitly absorbed the lesson: strike Sam Altman down and he becomes more powerful than you can possibly imagine.
OpenAI’s competition struggled to keep pace.
A crucial factor in ChatGPT’s quick and overwhelming success was that it was the first product of its kind on the market. The public had already grown accustomed to more basic machine learning applications such as language translation, and image generators like DALL-E and Midjourney were already popular entertainment, but OpenAI was the first to release a generative AI programme that could hold a natural conversation with its user. That novelty proved an invaluable advantage: even digital giants like Google and Amazon, with their enormous research and development resources, were caught off guard by the demand and were sluggish to respond with rival products of their own.
Google was this year’s most egregious imitator. In the wake of ChatGPT’s launch, Google devoted the majority of its I/O developer conference in May to introducing a raft of new generative AI models and platforms, among them its answer to ChatGPT: the Bard chatbot. Bard was not an especially trustworthy product to begin with; it made an embarrassing first impression even before its public release when, in February, it confidently cited inaccurate information about the James Webb Space Telescope in an advertisement posted to Twitter.
Google gradually added new features, capabilities and access to Bard over the course of the year, and in December it finally moved the entire platform onto its newly announced core model, Gemini, billed as Google’s “most capable and general model” yet. Google was then promptly caught misrepresenting the system’s capabilities in a demonstration video. Even where the company was not caught in another easily disproved falsehood, the Gemini demonstration did little to sway critics of Google’s awkward, rushed response to ChatGPT.
Gemini did outperform ChatGPT on most of the industry’s standard benchmarks, as a recent Bloomberg opinion piece pointed out. But Google earned those scores with its Gemini Ultra model, which has not yet been released to the public, and it surpassed GPT-4 only by extremely slim margins. GPT-4 is about a year old, and Google’s finest effort barely beat it at mathematics exercises suitable for middle school students. That is not an impressive showing from a company whose research spending rivals the gross domestic product of smaller countries.
Bing is doing quite well, thanks for asking. If any firm outperformed OpenAI in 2023, it was Microsoft, which in January invested ten billion dollars in OpenAI as part of an ongoing multiyear partnership. Bing, along with nearly everything else in the Microsoft ecosystem, is now being enhanced with algorithmic intelligence, and Microsoft will reportedly collect seventy-five percent of OpenAI’s profits until those billions are recouped.
Amazon placed its $4 billion generative AI bet on Anthropic, maker of the Claude LLM. In 2023, Amazon made great strides in deploying the technology across its expansive empire, from the Echo Frames smart glasses and Alexa with Generative AI to NFL Thursday Night Football broadcasts. The company introduced Bedrock, its platform for offering foundation models as a cloud service; launched a series of free AI Ready developer courses and an accelerator programme to fund genAI startups; rolled out generative tools for filling in backgrounds and product listings; and now offers a standalone image-generation AI to compete with DALL-E.
“Every one of our teams is working on building generative AI applications that reinvent and enhance their customers’ experience,” Amazon CEO Andy Jassy said on the company’s second-quarter earnings call in August. While Amazon will build some of those applications itself, he said, most will be developed by other businesses, and the company is optimistic the majority will be built on Amazon Web Services. Data, he added, is the foundation of artificial intelligence, and the goal is to bring generative AI models to the data rather than the other way around.
We are not yet prepared for the age of artificial intelligence
Even when it is not being put to plainly malicious ends, such as scamming the elderly or amplifying political misinformation, generative AI has proven deeply disruptive to a wide range of businesses and institutions, from education and healthcare to manufacturing and logistics. It has been promoted as a potential substitute for human workers in professions ranging from medical imaging and computer programming to accountancy, journalism and the digital visual arts, and its adoption has already eliminated many jobs.
Labour unions including the Writers Guild of America and the Screen Actors Guild went on strike this year, in large part to prevent their members’ works and likenesses from being used to train future AI models. Independent artists, whose intellectual property has been ruthlessly scraped for model training by disreputable corporations (looking at you, Stability AI), have had far less success protecting their creations, driving some creators to extreme, even dangerous, countermeasures.
Data privacy emerged as a contentious issue for AI companies in 2023. In March, a ChatGPT flaw was discovered to be leaking chat history titles, potentially along with personal payment information. In April, three Samsung employees who used ChatGPT to summarise a business meeting inadvertently disclosed confidential company information. In September, Google was found to be unwittingly leaking users’ Bard chats into its general search results, around the same time that Microsoft AI researchers mistakenly uploaded 38 terabytes of corporate data to an open-access Azure folder. And as recently as November, information security researchers found that even “silly” attacks, such as instructing ChatGPT to repeat the word “poem” indefinitely, could trick the system into divulging personally identifying information.
The institutional response to these emerging challenges was tepid early in the year, with many school districts, government organisations and Fortune 500 firms simply prohibiting their employees (and pupils) from using AI chatbots. Those first efforts proved largely ineffectual because they were so difficult to actually enforce. The federal government’s regulatory initiatives are expected to have considerably more teeth.
The Biden White House has made AI regulation a primary focus of its administration, through initiatives including a “blueprint” for an AI Bill of Rights in October of last year, millions of dollars invested in new National Science Foundation AI research and development centres, development-guardrail concessions wrung from leading AI companies, and the launch of an AI Cyber Challenge. The administration’s most ambitious step came in October, when the President issued a comprehensive executive order establishing extensive protections and best practices for user privacy, government openness and public safety in future AI development by federal contractors. The Senate and House have been active this year as well, organising legislative hearings on government oversight standards for the AI business, hosting two AI safety summits, and drafting legislation (which has not yet come to a vote).
Looking ahead: OpenAI in 2024 and beyond
Heading into the new year, the lead is OpenAI’s to lose. The board voices that counselled caution have been silenced, CEO Sam Altman wields more influence over the company than ever, and OpenAI is poised to expand its operations worldwide in 2024 as the technology continues to improve. Expect OpenAI’s rivals to put up a stronger fight in the coming year: Google, Meta and Amazon are all spending freely on AI research in an effort to catch up to, and surpass, the GPT platform.
And though the ChatGPT frenzy began with individual users, Paul Silverglate, vice chair of Deloitte LLP, believes enterprise applications will drive the greatest growth in 2024. “You can anticipate the incorporation of generative artificial intelligence into enterprise software, which will provide a greater number of knowledge workers with the tools they require to work with greater efficiency and to make better decisions,” he stated in a recent release.
A recent McKinsey & Company study found that the current generation of conversational AI systems “have the potential to automate work activities that absorb 60 to 70 percent of employees’ time.” Thanks to rapid advances in natural language processing, “half of today’s work activities” could be automated away from human hands “between 2030 and 2060” — and a correction: that estimate is roughly a decade earlier than what was previously projected.