Generative AI has already started changing the world we live in. ChatGPT, Dall-E, and other popular AI-powered tools wow users with their ability to design graphics, polish up a resume, plan a vacation budget, find errors in code, and more. From art generators to language models, AI tools are becoming ubiquitous and have the potential to revolutionize many industries. However, this technological advancement has also brought about concerns and challenges. From existential anxieties about what this means for humanity to the arms race to dominate the field, the future of generative AI is uncertain. But there are two things we're willing to bet on: the importance of a human-in-the-loop system for validation and oversight, and the potential for smaller, specialized models to break out as leaders of the pack.
Off to the Races
While generative AI is still a work in progress, its applications are expanding across industries. AI research started in the 1950s, but there hadn't been a significant upheaval in the industry since the 1980s. The public releases of powerful new tools over the last few years have triggered another mad scramble to build bigger and better versions of generative AI tools. Companies are applying generative AI to their existing products and services at different scales – from smaller endeavors like SlackGPT (which facilitates asking Slack-related questions) to colossal projects like Google’s Bard (which you can ask pretty much anything). The question of who will ultimately come out on top is still playing out, but it looks like the underdogs have a fighting chance. As one of those underdogs, Ringer Sciences has intentionally built our offerings to be nimble and specific. In this David vs. Goliath fight, the small and adaptable companies that can move quickly will be carrying the slingshot. As one Google executive shared in a recently leaked memo: “Open-source models are faster, more customizable, more private, and pound-for-pound more capable.”
Competition breeds innovation, but the popularity boom of tools like ChatGPT has also raised societal anxieties about technology and ethical questions about intellectual property. We live in a time of unprecedented surveillance and personal data collection, and now it seems that technology companies and AI are coming for our jobs. What was born in the imagination of sci-fi writers has now grown into our reality, but is there real cause for concern? Take comfort: As our CEO, Dr. Yash Gad, recently said, we believe human oversight is essential for the best outcome with these tools.
Human-in-the-Loop
Despite the remarkable capabilities of large language models (LLMs) like GPT, they have inherent limitations. To build these models, engineers train them on massive datasets ("data lakes") drawn from text-rich sources like books, articles, and web pages. The size of the dataset directly influences the quality of LLM outputs, with larger datasets typically yielding better results. While tools like GPT excel at answering logical prompts, they struggle with symbolic reasoning. Since generative AI can't understand what is "true," its outputs require external verification. Complicating AI's ability to assess accuracy is the inconsistency of the data in these internet-powered data lakes; the internet is full of misinformation, misstatements, and conflicting reports. Outputs can only be as good as the dataset.
Beyond simply fact-checking outputs, the training of LLMs for specialized purposes requires a human-in-the-loop system, where humans guide the AI by validating or negating its predictions. The process of building smaller generative AI models can bring in more opportunities for human review. Our data scientists take a more incremental approach to tool building, so that human intuition is never lost in the process. By bringing together smaller data sets (“data puddles”) and a deeper level of human review, Ringer Sciences can build tools and workflows for our clients that are ultimately more powerful and more accurate.
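The review loop described above can be sketched in a few lines of code. This is a minimal illustration, not Ringer Sciences' actual pipeline: the model call and the reviewer are hypothetical stand-in functions, and a real system would route rejected drafts back for human rework or retraining.

```python
# Human-in-the-loop sketch: a reviewer validates or negates each model
# output, and only approved drafts move forward. Both functions below
# are illustrative stubs standing in for a real model and a real person.

def generate_draft(prompt: str) -> str:
    """Stand-in for a call to a generative model."""
    return f"Draft response to: {prompt}"

def human_review(draft: str) -> bool:
    """Stand-in for a human reviewer's accept/reject decision.
    Here we simply auto-approve short drafts for demonstration."""
    return len(draft) < 200

def human_in_the_loop(prompts):
    approved, rejected = [], []
    for prompt in prompts:
        draft = generate_draft(prompt)
        if human_review(draft):
            approved.append(draft)            # validated output ships
        else:
            rejected.append((prompt, draft))  # flagged for human rework
    return approved, rejected

approved, rejected = human_in_the_loop(["Q1 campaign recap", "Hiring update"])
print(len(approved), len(rejected))
```

The key design point is that nothing leaves the loop without a human decision attached, which is what makes the smaller, incremental models described above auditable.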
Data Puddles > Data Lakes
Tools like ChatGPT, Bard, GPT-4, and Dall-E are powerful, but they're also big and fragile (that is, more susceptible to bias). Ultimately, they are good for ideation, not execution. Strategic companies will capitalize on what these AIs already do well, which essentially falls into the category of pattern recognition. Content creators can take a piecemeal approach, from initial brainstorming to formulating a CTA (verified externally, of course). Developers can quickly check for errors deep in complex code. Chatbots and similar tools have already begun transforming the digital customer experience, adding personalization and improving ROI. However, the most competitive generative AI in the immediate future will be fed with data puddles instead of data lakes.
Smaller and more specialized generative AI models that are faster to run, cost-effective, and serve a specific use case are growing in demand. Already we've seen a boom of new AI-powered SaaS offerings from companies that expand on their existing specialties. As more businesses catch on to the powerful new ways they can use their datasets, we expect a corresponding rise in demand to build, categorize, and clean data for their own puddles.
We recommend leaving it to the experts, though. We can’t overstate the need for proper data governance and best practices to ensure the desired results. Thus, one of our main areas of focus for AI services has become working with clients to curate their datasets and refine models. These data-puddle-fed models will give companies more accuracy and security, with personalized insights and proprietary rights to the outputs.
ChatGPT for Executives
The proliferation of specialized AI models opens up opportunities for companies to expand their offerings. At Ringer Sciences, we’re doing exactly that. This month we launched our first AI-powered offering: Executive Signals Platform (ESP) GPT Post Generation. This AI content generator was trained on our proprietary data puddle from ESP of over 800 executive LinkedIn accounts and their posts across all major industries and sectors. The ESP GPT tool works similarly to ChatGPT: you input your topic prompt and parameters, and the tool outputs and regenerates responses in the form of a LinkedIn post draft.
ESP has already empowered our B2B and executive clients with unique, actionable audience insights to inform how they build campaigns, perform outreach, and react to emerging industry trends. We built our data puddle upon this powerful platform, homing in on our snapshots of executives’ social post content, volume, frequency, and engagement to train the GPT to write posts that provide valuable insights and perform well with audiences. This service empowers executives to create customized social media posts on any desired topic based on their industry, job title, or post engagement. It's ideal for C-level executives who are new to building personal brands on social media and unsure how to begin or what will resonate with their desired audience.
Our goal is to continue rolling out more AI-powered tools that expand on what we already do best: Improving your brand resonance on social media and connecting with your target audiences. It’s clear that AI has ushered in a new era, altering our work dynamics and our relationship with technology, but we’re excited to see where these advancements, with proper human oversight, are able to take the future of business and beyond.