Since we are at the beginning of a brand-new year, let’s look forward and embrace some of the opportunities and possibilities that may come our way. 2023 was a storm of advancements propelling artificial intelligence to new heights, from dynamic open-source contributions and new generative AI models to cutting-edge large language models, setting the stage for a transformative future.
It’s hard to deny that the influence of artificial intelligence is here to stay. 2023 set in motion a chain of events in technology that can only be described as groundbreaking, and, to be honest, a little frightening given the unprecedented, explosive growth of AI. Unsurprisingly, growth brings new challenges, and the ones surrounding AI, ranging from bias to copyright concerns, will undoubtedly steer the focus of researchers, regulators, and the wider public, not just in the upcoming year but well beyond it.
Yet, amidst the tech world’s fascination with AI, there are perceptible shifts in attitudes marked by a more discerning and mature approach. Organizations are transitioning from experimentation to pragmatic real-world applications. This year’s trends underscore a refined and cautious approach to AI development and deployment, emphasizing ethics, safety, and an evolving regulatory framework.
Although we can only speculate at the moment, one thing is certain: 2024 promises to unlock the transformative potential of AI technology.
Say hello to the rise of personalized chatbots! This year, tech giants heavily invested in generative AI are moving full steam ahead to demonstrate how their products can translate into profit. Google and OpenAI are leading the charge by focusing on a ‘small is powerful’ approach, unveiling user-friendly platforms. These platforms allow people, regardless of their coding skills, to create their own customized mini chatbots using robust language models.
In 2024, generative AI might finally become a reliable day-to-day companion accessible to the everyday individual. Expect a surge of people tinkering with numerous AI models. Cutting-edge models like GPT-4 and Gemini now possess multimodal capabilities, processing not only text but also images and video. This breakthrough could unlock a plethora of innovative applications. For example, an online retailer could use an AI model to generate text descriptions, alongside images and videos, for its entire product catalog with a single click.
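In practice, multimodal input of this kind is usually expressed as a structured message that mixes text and image parts. The sketch below assembles such a request as plain data, following the content-parts layout used by vision-capable chat APIs; the model name, image URL, and helper function here are illustrative assumptions, not any specific vendor’s API:

```python
# Sketch: assembling a multimodal "describe this product" request.
# The message mixes a text part with an image part, mirroring the
# content-parts format of vision-capable chat APIs. The model name
# and URL are placeholders; no network call is made.

def build_product_prompt(product_name: str, image_url: str) -> dict:
    """Return a request payload pairing an instruction with a product image."""
    return {
        "model": "some-multimodal-model",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"Write a short catalog description for "
                                f"'{product_name}' based on the attached photo.",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": image_url},
                    },
                ],
            }
        ],
    }

payload = build_product_prompt("Walnut Desk Lamp", "https://example.com/lamp.jpg")
```

Because the payload is just nested dictionaries, the same structure could be serialized to JSON and sent to whichever multimodal endpoint a retailer actually uses.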
However, the true success of this endeavor depends on the reliability of these models. Language models have a bad habit of fabricating content, and biases permeate their outputs. Furthermore, these models remain susceptible to hacking, particularly when allowed to browse the web. As the initial excitement fades, tech companies will need to address these issues to maintain customer trust and satisfaction.
Generative AI is entering its second wave, and this time the focus is on video. The initial explosion of generative models in 2022 brought forth photorealistic images, which quickly became commonplace. This new branch of the technology is already reshaping filmmaking, from seamlessly lip-syncing actors in multiple languages to creating groundbreaking special effects, as demonstrated in movies like “Indiana Jones and the Dial of Destiny,” featuring a de-aged deepfake Harrison Ford. Other high-profile actors who received the AI de-aging treatment include Samuel L. Jackson, Robert De Niro, and Robert Downey Jr.
Beyond Hollywood, generative AI’s impact extends to marketing and training. AI tools can transform a single actor’s performance into an array of deepfake avatars catering to the needs of companies and businesses. While this capability opens new possibilities, especially in content creation, it also raises ethical questions, particularly concerning the role of actors and the potential misuse of AI by studios.
Overall, generative AI signifies a fundamental shift in the craft of filmmaking, prompting discussions about the responsible and ethical application of this technology. As generative AI ventures into video, its societal implications and creative potential are only beginning to unfold.
Multimodal AI is revolutionizing data processing by integrating various input types, such as text, images, and sound, mimicking the human ability to process diverse sensory information. OpenAI’s GPT-4 model exemplifies multimodal capabilities, allowing it to respond to both visual and audio input. This innovation enables dynamic interactions, such as taking photos of a fridge’s contents and asking ChatGPT to suggest a recipe based on the visual input, with the potential to include an audio element.
While current generative AI initiatives are predominantly text-based, the true potential lies in merging text, conversation, images, and video, offering versatile applications across businesses. Multimodal AI finds diverse real-world applications, including healthcare, where it can enhance diagnostic accuracy by analyzing medical images in conjunction with patient history and genetic information. At a broader job function level, multimodal models empower individuals without formal backgrounds in design or coding, expanding their capabilities.
Introducing multimodal capabilities not only enhances AI models’ strength but also provides them with new data sources for learning beyond language limitations. As models advance, tapping into raw inputs from the world, such as video or audio data, becomes crucial for more advanced AI models to perceive and draw conclusions independently. The multifaceted applications of multimodal AI signify a transformative shift in how AI interacts with and learns from the world.
Agentic AI signifies a pivotal shift in AI capabilities, transitioning from reactive to proactive systems. Unlike traditional ones, AI Agents possess autonomy, proactivity, and the ability to act independently. These advanced agents understand their environment, set goals, and take action to achieve objectives without direct human intervention. For instance, in environmental monitoring, an AI agent could autonomously collect data, analyze patterns, and initiate preventive actions in response to hazards, such as detecting early signs of a forest fire. Similarly, a financial AI Agent could actively manage investment portfolios, adapting strategies in real time based on changing market conditions.
2024 will likely become the year where AI Agents go beyond basic conversation to actively accomplish tasks for users, such as making reservations, planning trips, and connecting with other services. This shift towards more task-oriented AI reflects the maturation of AI capabilities beyond chat-based interactions. Moreover, the combination of agentic and multimodal AI introduces new domains of possibility. For instance, the integration of multimodal capabilities with agentic models enables the development of computer vision applications through natural language prompting, simplifying processes that traditionally required complex training and deployment of image recognition models.
In essence, agentic AI, coupled with multimodal features, not only empowers AI Agents to proactively engage with their environment but also extends the accessibility of AI applications by streamlining development processes through natural language interfaces.
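The perceive-decide-act cycle behind such agents can be reduced to a small loop. The sketch below uses the forest-fire monitoring example from above; the sensor readings, thresholds, and action names are invented for illustration, not drawn from any real monitoring system:

```python
# Minimal sketch of an agentic perceive-decide-act loop, using the
# forest-fire monitoring example. Thresholds and readings are
# hypothetical; a real agent would pull live sensor data.

FIRE_RISK_TEMP_C = 45.0    # hypothetical temperature threshold
FIRE_RISK_SMOKE_PPM = 120  # hypothetical smoke-density threshold

def decide(reading: dict) -> str:
    """Map one sensor reading to an action, without human intervention."""
    if reading["smoke_ppm"] >= FIRE_RISK_SMOKE_PPM:
        return "dispatch_drone"     # likely fire: act immediately
    if reading["temp_c"] >= FIRE_RISK_TEMP_C:
        return "increase_sampling"  # elevated risk: gather more data
    return "log_only"               # normal conditions

def run_agent(readings: list) -> list:
    """Perceive each reading, decide, and record the chosen action."""
    return [decide(r) for r in readings]

actions = run_agent([
    {"temp_c": 22.0, "smoke_ppm": 5},
    {"temp_c": 47.5, "smoke_ppm": 40},
    {"temp_c": 50.0, "smoke_ppm": 300},
])
```

The point of the sketch is the autonomy: the agent moves from observation to action on its own, and a production system would simply swap in real sensors, richer state, and a model-driven `decide` step.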
Open-source AI offers a cost-effective and expansive avenue for developing Large Language Models and generative AI systems. The availability of open-source AI, often free to the public, facilitates collaboration and allows developers to build upon existing code, reducing costs and broadening access to AI technologies. GitHub data from the past year indicates a significant surge in developer engagement with generative AI projects, with these projects entering the top 10 most popular on the platform.
Throughout 2023, the open-source landscape for generative models evolved with projects like Meta’s Llama 2. The spread of open-source AI projects can reshape the tech landscape in 2024, democratizing access to sophisticated AI models for smaller entities with limited resources. While open-source AI encourages transparency and ethical development by allowing scrutiny for biases, bugs, and security vulnerabilities, concerns linger about potential misuse for creating harmful content. At the same time, maintaining open-source AI models is complex and resource-intensive, posing challenges beyond those of traditional software development.
The rise of open-source AI reflects a democratization of AI access, fostering collaboration, transparency, and ethical considerations, albeit with concerns about misuse and the challenges of maintaining complex models.
The emergence of Shadow AI, the clandestine use of AI within organizations without formal approval, is on the rise as AI becomes more accessible to a broader range of employees. This trend often involves the rapid adoption of easy-to-use AI tools, such as chatbots, without undergoing IT review processes. While indicative of an innovative spirit, Shadow AI poses risks related to security, data privacy, and compliance, as end-users may unknowingly expose sensitive information.
In 2024, organizations must address Shadow AI through governance frameworks that balance fostering innovation with safeguarding privacy and security. This entails establishing clear AI usage policies, endorsing approved platforms, and fostering collaboration between IT and business leaders to understand various departments’ AI needs. Despite the risks, a recent EY survey found that 90% of respondents are already using AI at work, highlighting the necessity for aligning employees with ethical and responsible AI use.
Additionally, the rise of hallucinations, false outputs generated by AI models, adds another layer of concern. The potential for these hallucinations is driving demand for AI hallucination insurance, an emerging market expected to grow rapidly in 2024, reflecting the increasing awareness of the risks associated with widespread AI use.
A spike in AI disinformation
AI-generated election disinformation and deepfakes are poised to become real issues in the 2024 elections, with politicians worldwide leveraging these tools to manipulate public opinion. Notable instances, like candidates in Argentina using AI-generated content to attack opponents and deepfakes spreading false narratives during Slovakia’s elections, underscore the growing threat. Even in the U.S., figures like Donald Trump endorse groups using AI to create memes with discriminatory themes.
The ease of creating deepfakes with generative AI has democratized disinformation, making it challenging to discern genuine content online, especially in a politically charged atmosphere. This rising trend raises concerns about its potential impact on election outcomes and the overall erosion of trust in digital information.
As generative AI continues to advance, the difficulty in distinguishing between authentic and manipulated content amplifies. The accessibility of AI-generated images poses a serious challenge for reliable information sources. With techniques to track and counter such content still in the early stages, initiatives like watermarks are voluntary and imperfect. Additionally, social media platforms’ sluggish response to misinformation adds to the urgency of addressing this issue. The upcoming year is critical for those combating the proliferation of AI-generated fake news, setting the stage for a real-time experiment in countering this growing threat.
Growing concerns surround AI ethics and security risks, particularly as deepfakes and advanced AI-generated content raise alarms about potential misinformation, manipulation, and identity theft. The enhanced efficacy of ransomware and phishing attacks, facilitated by AI, adds to the urgency of addressing these challenges. Detecting AI-generated content remains challenging, with existing watermarking techniques easily avoided and detection software prone to false positives.
The increased omnipresence of AI systems underscores the need for transparency and fairness, emphasizing careful vetting of training data and algorithms for biases. Ensuring ethics and compliance considerations are integral to the AI development process is crucial for effective planning and regulatory alignment. As enterprises implement AI, considerations for controls and ethical frameworks should be concurrent, avoiding post-experimentation realizations about the need for safeguards.
Moreover, prioritizing safety and ethics supports the exploration of smaller, more domain-specific models that are inherently less capable than their larger counterparts. This intentional limitation reduces the likelihood of undesirable outputs, aligning with a proactive approach to AI safety.
In 2024, AI regulation is entering a pivotal phase with evolving laws, policies, and frameworks globally. The EU’s groundbreaking AI Act, recently provisionally agreed upon, stands as the world’s first comprehensive AI law. If adopted, it would ban specific AI uses, impose obligations on developers of high-risk systems, and demand transparency from generative AI users, with potential multimillion-dollar fines for noncompliance. GDPR, in conjunction with the AI Act, might play a substantial role, especially concerning issues like the right to be forgotten in the context of Large Language Models learning from massive datasets.
The EU’s advancements position it as a potential global AI regulator, ahead of the U.S. in terms of AI regulatory perspectives. While the U.S. lacks comprehensive federal legislation akin to the EU’s AI Act, businesses are urged not to put off compliance planning. Recent U.S. executive branch activities, including President Biden’s October executive order, introduce mandates like sharing safety test results and restrictions to mitigate AI risks. However, the upcoming U.S. election year introduces uncertainty about the future approach to AI oversight, depending on the stance of the incoming administration.
As 2024 unfolds, organizations must stay informed and adaptable to navigate the shifting landscape of AI regulations, which could significantly impact global operations and AI development strategies.
A rising demand for AI talent
The demand for AI and machine learning talent is expected to intensify in 2024, reflecting the ongoing challenges in designing, training, and maintaining machine learning models. As AI becomes integral to business operations, there is a rising need for professionals who can bridge theoretical knowledge with practical implementation.
According to an O’Reilly report, the most sought-after skills for generative AI projects include AI programming, data analysis and statistics, and operations for AI and machine learning. However, these skills remain in short supply, posing a challenge for the AI industry.
In response to the talent scarcity, organizations are anticipated to actively seek individuals with expertise in deploying, monitoring, and maintaining AI systems in real-world contexts. This trend is not limited to major tech companies but extends across industries as IT and data become ubiquitous business functions, aligning with the growing popularity of AI initiatives.
The focus on building internal AI and machine learning capabilities is seen as a crucial phase in the broader digital transformation landscape. Additionally, there’s a growing recognition of the importance of diversity in AI initiatives, spanning from technical teams constructing models to board-level decisions. This emphasis on diversity aims to address concerns related to biases in training data, fostering a more inclusive and ethical development of AI technologies.
As the curtain rises in 2024, the stage is set for a transformative AI renaissance. The groundbreaking advancements of 2023, from the rise of customizable chatbots to the ethical considerations of generative AI in video, have laid the foundation for an era where AI seamlessly integrates into our daily lives. This year marks a pivotal shift from minimal experimentation to pragmatic, real-world applications.
In 2024, the world is not merely witnessing the evolution of AI; it is actively shaping it. From addressing the governance challenges posed by AI to navigating the complex landscape of regulations, organizations are ready to embrace AI’s potential while safeguarding against its risks.
As the global community grapples with the complexities of AI ethics, transparency, and security, 2024 emerges as the year when AI, with its immense capabilities and challenges, takes center stage, compelling us to craft a future where AI enriches human experiences responsibly and ethically.