
Top 20 AI Buzzwords to Know for Your Next Nerdy Cocktail Party

Image: A group of rabbits wearing suits sit around a table drinking cocktails.

In the fast-changing world of technology, nothing is changing faster than artificial intelligence (AI), except perhaps the associated buzzwords! At NerdRabbit, we help clients find great talent with specific technology skill sets, so we get to hear all the latest terms. I have compiled a list of the top 20 we are hearing, with a brief description of each. After reading this blog, you will be prepared for your next nerdy cocktail party. Imagine the reaction you will get when you ask for opinions on the best LLM, or the ideal use cases for a random forest algorithm!

1. Artificial intelligence (AI)

Artificial intelligence, or AI, is the capability of computers to perform tasks that have typically required human intelligence. The term was coined in 1955 by computer scientist John McCarthy, who defined it as “the science and engineering of making intelligent machines.”

Examples include learning from experience, solving complex problems, understanding written or spoken language, writing blogs, and creating new content. It involves programming and training machines to think and act like humans – hopefully the smart ones, and not the ones in the typical TikTok video. AI technology can rapidly process vast amounts of data in ways that go far beyond human capabilities, with the goal of recognizing patterns, making decisions, and exhibiting human-like judgment. AI can even create pictures, like all the ones in this blog!

For the image below, I used DALL-E from OpenAI. This AI program creates images from a simple text prompt, but it often takes several attempts and edits to get what I am looking for. From my experimentation, ChatGPT does a much better job of producing a good verbal response on the first attempt than DALL-E does at producing a visual one. But some of the images, like the one below, turn out really well and take AI only a few seconds to create, where a human would need hours.

2. Natural Language Processing (NLP)

Amazon Web Services (AWS) defines NLP as “a machine learning technology that gives computers the ability to interpret, manipulate, and comprehend human language.” NLP enables computers to process human language in the form of text or voice data, to ‘understand’ its full meaning, and even to infer the intent behind it.

NLP drives programs that can easily translate text from one language to another, respond to spoken commands, and summarize large volumes of text in real time. It powers some of the consumer tech we are all aware of, like Siri, Alexa, and voice-activated GPS systems. The exciting newer use cases of NLP are in enterprise solutions that help streamline business operations and increase employee productivity.
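
To make this tangible, here is a minimal sketch of two NLP tasks using the open-source Hugging Face transformers library. The library choice is my own illustration; the blog does not name a specific tool, and each pipeline downloads a default pretrained model.

```python
# A minimal NLP sketch using the Hugging Face "transformers" library
# (pip install transformers torch); each pipeline pulls a default model.
from transformers import pipeline

# Translate text from one language to another.
translator = pipeline("translation_en_to_fr")
print(translator("The meeting starts at noon."))

# Summarize a longer passage.
summarizer = pipeline("summarization")
text = (
    "Natural language processing lets computers read and interpret text. "
    "Modern NLP systems translate languages, answer spoken commands, and "
    "condense long reports into short summaries for busy employees."
)
print(summarizer(text, max_length=30, min_length=10))
```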

3. Generative AI

Generative AI is exactly what it sounds like: artificial intelligence that generates new content. Generative AI learns from existing data (training data) to generate new, completely unique data that reflects the characteristics of the training data. It can produce a variety of content, such as text, images, music, or even computer code. Yes, the computers can now program themselves! GenAI platforms use prompts from the user to generate content. As an example, I wanted to show a frustrated robot with writer’s block, so I used OpenAI’s DALL-E 2 with the prompt: “Picture of a frustrated robot writing a book.”

This image is completely unique, “learned” from seeing millions of other images of robots, writers, books, and what it looks like to be frustrated. Other GenAI platforms can write poems, create your resume, summarize a paper, or suggest a meeting agenda. But, as McKinsey Digital points out, there are multiple risks around intellectual property, privacy, fairness, and security that businesses should take into account when integrating GenAI tools.
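
For the curious, here is roughly what that image prompt looks like in code. This is a hedged sketch using OpenAI’s Python SDK; model names and parameters evolve quickly, so treat it as illustrative rather than the exact workflow behind this blog’s images.

```python
# A hedged sketch of text-to-image generation with OpenAI's Python SDK
# (pip install openai); the client reads the OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()
response = client.images.generate(
    model="dall-e-2",
    prompt="Picture of a frustrated robot writing a book",
    n=1,
    size="512x512",
)
print(response.data[0].url)  # URL of the generated image
```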

4. Large Language Model (LLM)

A Large Language Model (LLM) is a type of generative artificial intelligence (GenAI) with incredible linguistic capabilities. Trained on massive amounts of data, it uses sophisticated algorithms to excel at understanding and generating human-like language. “Simply” by ingesting existing text and then finding patterns and connections, it comes to understand language styles, grammar, and context. As a result, it can perform many diverse tasks, such as text generation, completion, translation, sentiment analysis, and summarization. These models have widespread applications in virtual assistants, chatbots, content generation, and language translation.

In a very short period of time, hundreds of LLMs have hit the market, all with slight differences, not only in the data they were trained on but also in the complicated algorithms that analyze that data. As a result, different LLMs are ideal for some use cases, while others may have an advantage in a more specific domain. I won’t go into the specifics of each, but the five I hear the most about currently are as follows (a short sketch of prompting one of them follows the list):

  • OpenAI – GPT-3.5 and GPT-4: available through the ChatGPT chatbot and through their API for use in other applications.
  • Google – PaLM 2: available through the Bard chatbot and being integrated with multiple Google products.
  • Anthropic – Claude 2: available via chatbot and API.
  • AWS: the public cloud leader has a suite of AI and LLM-based tools with more specific use cases, including Comprehend, Kendra, Lex, Polly, Rekognition, SageMaker, Textract, Bedrock, and CodeWhisperer.
  • Meta – Llama 2: the most recent LLM from one of the tech giants; it is open source and released in partnership with Microsoft.
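
As promised, here is a minimal sketch of prompting one of these models through an API, using OpenAI’s Python SDK as the example; the model name and SDK details are assumptions that change often.

```python
# A minimal sketch of calling an LLM via OpenAI's Python SDK
# (pip install openai); the client reads the OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what is an LLM?"},
    ],
)
print(reply.choices[0].message.content)
```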

5. Big Data

Big Data is a somewhat subjective term used to describe lots of data. Typically, it refers to data that is too voluminous and generated too quickly for a company to process with its existing systems. If your company struggles to keep up with your growing data and the associated infrastructure needs, you have probably described this as Big Data. The data is often characterized by the four V’s: Volume, Velocity, Variety, and Veracity. You are dealing with Big Data if you have lots of data from different sources and structures, accumulating quickly, with varying degrees of accuracy.

Artificial intelligence can be used to gain insights from such huge datasets where traditional computing struggles. There is no minimum size in the definition: for some companies, 100 TB is “big data,” whereas for others it might be exponentially higher. The term is often used loosely to describe the data itself, the infrastructure to process it, and the tools and techniques companies need to turn all that data into business decisions.

6. Neural Networks

Neural networks are a type of machine learning model designed to mimic the workings of the human brain. Training data, often millions of examples, is fed into an input layer, where each node represents a specific feature of the data. The neurons in subsequent hidden layers process this information by assigning weights to their connections, allowing the network to capture patterns and relationships within the data. During the training process, the network’s weights are adjusted iteratively, improving the model’s ability to make accurate predictions.
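
To make the idea concrete, here is a from-scratch sketch of a single sigmoid “neuron” in numpy. The data, learning rate, and loop are invented for illustration, but the iterative weight adjustment is the same mechanism real networks apply across billions of connections.

```python
# A toy "neuron" trained by iteratively adjusting its weights (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))                   # 100 examples, 3 input features
y = (X.sum(axis=1) > 1.5).astype(float)    # made-up target to learn

w, b, lr = rng.random(3), 0.0, 0.1         # weights, bias, learning rate

for _ in range(1000):                      # training loop
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid activation
    error = pred - y                       # how wrong are we?
    w -= lr * X.T @ error / len(X)         # nudge weights to reduce error
    b -= lr * error.mean()

print("accuracy:", ((pred > 0.5) == y).mean())
```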

Large neural networks may contain billions of parameters, allowing these models to generate a wide range of answers to input questions. But this astonishing ability is often referred to as a “black box.” A user feeds in a question, and the model produces an answer. What happens in the middle, across billions of weighted connections, nodes, and neurons, is a complete mystery – much like the human brain!

7. Machine learning

Machine learning is a subset of artificial intelligence (AI) in which computers learn and improve their performance on a specific task as more data becomes available, without being explicitly programmed. In traditional programming, developers write code to produce consistent results. As a simple example, if you use the SUM function in Microsoft Excel, it will add up a series of numbers the same way, with the same answer, every time. In machine learning, by contrast, the computer learns patterns and rules from data, enabling it to make predictions and decisions or identify patterns in scenarios it was never explicitly coded to handle.

For example, if you train a model with thousands of pictures of cats, it will learn to identify cats in any new picture. This is how your iPhone lets you search for “cat” in your Photos app and then displays all the photos of cats on your phone. Unlike traditional programming, the results are predictions that increase in accuracy over time. Machine learning starts with identifying a large enough data set, which can be structured or unstructured, providing the basis for learning. Features are then extracted from the data, representing specific, measurable characteristics. The model, such as a decision tree or neural network, captures patterns and relationships within the data, continually adjusting as more training data is processed.
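
Here is a hedged sketch of the same learn-from-examples idea using scikit-learn. In place of cat photos, it uses a tiny, made-up table of (weight in kg, ear length in cm) features labeled by a human:

```python
# Learning to classify from labeled examples with scikit-learn
# (pip install scikit-learn); the numbers are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

X = [[4.0, 7.5], [4.5, 8.0], [30.0, 10.0], [25.0, 12.0]]  # weight, ear length
y = ["cat", "cat", "dog", "dog"]                          # human-supplied labels

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[5.0, 7.0]]))  # -> ['cat']
```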

8. GPT

GPT stands for “Generative Pre-trained Transformer” and is a type of Large Language Model (LLM). Developed by OpenAI, the current versions of the model are GPT-3.5 and GPT-4. ChatGPT is a chatbot, also developed by OpenAI, that leverages these models to generate text or even computer code. In this case, “Generative” means the model generates new content when given a prompt. As an example, I prompted ChatGPT to “write a haiku about machine learning.” Here was the response:

“Pre-trained” means that these models are initially trained on a large dataset before being fine-tuned for specific tasks. This gives GPT the ability to learn language patterns and then apply them to specific tasks, such as language translation, question answering, text completion, and more.
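
To see “pre-trained” in action without an API key, here is a hedged sketch that runs GPT-2, a small open predecessor of the models above, through the Hugging Face transformers library:

```python
# Text generation with a small pre-trained GPT model
# (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Machine learning is", max_new_tokens=30)
print(result[0]["generated_text"])
```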

9. Transformer

Finally, the “T” in GPT stands for Transformer, a specific type of neural network uniquely suited to language processing. Transformer models apply an evolving set of mathematical techniques, known as attention or self-attention, to detect how each piece of data in a sequence may depend on the others. A Transformer model is characterized by its ability to transform an input sequence into an output sequence. In ChatGPT, this is easy to visualize: type something into the prompt dialogue box, and it nearly instantly transforms your input into a unique output.
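
For the mathematically curious, here is a compact numpy sketch of scaled dot-product self-attention, the core Transformer operation: each token’s output becomes a weighted blend of every token’s information. The dimensions and random weights are purely illustrative.

```python
# Scaled dot-product self-attention in plain numpy.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # token-to-token affinities
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # blend values by attention

rng = np.random.default_rng(0)
X = rng.random((4, 8))                        # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.random((8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (4, 8)
```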

10. Unstructured, structured, and semi-structured data

Unstructured, structured, and semi-structured data are the most common classifications of data that companies may analyze using traditional tools or AI. AI is especially useful for unstructured data.

Structured data refers to data that is organized in a defined and specific format. Such data is typically stored in relational databases or spreadsheets in tabular form, and it can be easily queried, analyzed, and processed using database management systems or query languages such as SQL. Common sources of structured data include ERP systems, financial transactions, CRM data, and payroll data.

Unstructured data, on the other hand, does not follow a specific data model and lacks a predefined structure. It is essentially raw data that does not fit into any traditional tabular format. These data types pose challenges for traditional database management systems, but machine learning and AI allow companies to extract meaningful insights from information that would otherwise be impossible to analyze. Examples of unstructured data include:

  • Textual data such as documents, emails, social media posts, blogs, and internet content.
  • Multimedia data like images, audio recordings, and videos.
  • Sensor data from IoT devices.

Semi-structured data sits in between: it does not live in rigid tables, but it carries tags or keys that describe its own contents. JSON and XML files, and emails with their standard header fields, are common examples.
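
A tiny sketch in plain Python shows the three shapes side by side; all of the sample values are invented:

```python
# Structured vs. semi-structured vs. unstructured data, illustrated.
import json

# Structured: fixed columns, ready for SQL-style queries.
structured = [("INV-1001", "2024-05-01", 129.99),
              ("INV-1002", "2024-05-02", 54.00)]

# Semi-structured: self-describing keys, but flexible fields.
semi_structured = json.loads('{"user": "amy", "tags": ["vip"], "note": null}')

# Unstructured: raw text with no schema at all.
unstructured = "Loved the product, but shipping took three weeks..."

print(structured[0][2], semi_structured["user"], unstructured[:17])
```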

11. Chatbot

A chatbot is a type of computer program designed to simulate human conversation and provide solutions to customer queries. As an example, a customer service chatbot may welcome users to a website and assist them in resolving their issues. Chatbots can also facilitate tasks like logging service requests, sending emails, filling out applications, or giving product descriptions.

Chatbots have evolved from simple keyword-based programs to sophisticated systems utilizing artificial intelligence and natural language processing. There are two main types: declarative chatbots, which use scripted responses for structured conversations, and predictive chatbots, like virtual assistants, which use advanced technologies to understand user behavior and provide personalized answers and recommendations.
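
A toy example of the declarative, scripted kind might look like the sketch below; a predictive chatbot would replace the keyword lookup with an AI model. The canned responses are invented:

```python
# A minimal keyword-matching ("declarative") chatbot.
RESPONSES = {
    "shipping": "Orders ship within 2 business days. Need a tracking number?",
    "refund": "I can open a refund request. What is your order number?",
    "hours": "Support is available 9am-6pm ET, Monday through Friday.",
}

def reply(message: str) -> str:
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return answer
    return "Let me connect you with a human agent."

print(reply("Where is my shipping confirmation?"))
```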

12. Sentiment analysis

Sentiment analysis is a natural language processing (NLP) technique used to determine the emotional tone or sentiment of a piece of text. The intent is to determine the attitude, emotions, or feelings of the author. Commonly, the analysis classifies responses as positive, negative, or neutral, although more granular sentiment classifications are also possible. It can be applied to social media posts, product reviews, chatbot messages, or email replies. For example, if a customer enters, “WTF is wrong with your shipping department,” sentiment analysis would classify this as negative. This can help escalate issues to the correct department, gather real-time customer feedback on a new product launch, or gauge employee engagement and satisfaction.
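
Running that exact complaint through an off-the-shelf classifier takes a few lines; this hedged sketch uses the Hugging Face transformers library, which downloads a default pretrained sentiment model:

```python
# Sentiment analysis with a default pretrained classifier
# (pip install transformers torch).
from transformers import pipeline

analyzer = pipeline("sentiment-analysis")
print(analyzer("WTF is wrong with your shipping department"))
# -> [{'label': 'NEGATIVE', 'score': 0.99...}]
```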

13. Deep learning

Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. The “depth” refers to the number of node layers: a single-layer neural network would not typically be described as deep learning, whereas a neural network with at least three node layers is considered a deep learning algorithm. Deep learning enables the use of huge, petabyte-scale data sets of structured or unstructured data. In fact, a deep learning model generally improves in accuracy as it receives more data points, since little to no human intervention is required to engineer its features.

Each layer of a deep learning model passes data to the next layer, gradually refining the information and gaining a deeper understanding of the patterns and features present in the dataset. These stacked layers give deep learning algorithms a remarkable ability to recognize patterns that would often be difficult, if not impossible, for humans to identify, and let them handle complex problems and large datasets effectively. Current use cases include image and speech recognition, autonomous vehicles, and medical diagnosis.
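
In code, “depth” is simply stacked layers. A minimal PyTorch sketch, with layer sizes chosen arbitrarily for illustration:

```python
# A deep (multi-hidden-layer) network skeleton in PyTorch
# (pip install torch).
import torch.nn as nn

deep_model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # hidden layer 1
    nn.Linear(128, 128), nn.ReLU(),  # hidden layer 2
    nn.Linear(128, 64), nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 10),               # output layer, e.g. 10 classes
)
print(deep_model)
```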

14. Robotic Process Automation (RPA)

Robotic Process Automation (RPA) is a software technology that makes it easy to build, deploy, and manage software bots that mimic the actions humans take when interacting with digital systems and software programs. Like some other IT terminology, RPA is misleading. While it sounds like something you would find on an assembly line, with a fleet of robots building something, it is really just a collection of software bots that automate the mundane, repetitive tasks a human might otherwise perform in a computer program or on a website. If a digital task is rule-driven and repetitive, it is a good candidate for RPA. It is a form of business process automation that allows users to define instructions for bots to execute automatically when triggered. RPA bots can automate any rule-based task: they can copy and paste data, move files and folders, scrape web browsers, extract data from documents, and fill in forms or applications.
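
The sketch below captures the rule-driven flavor of such a bot in plain Python: it reads rows from a hypothetical spreadsheet export and “fills in” a form for each. Real RPA platforms drive the actual user interface; the print is a stand-in for those clicks and keystrokes, and the email convention is invented.

```python
# A toy rule-based bot: turn spreadsheet rows into filled-in forms.
import csv, io

# Hypothetical export a human would otherwise re-key by hand.
export = io.StringIO("first,last,dept\nAda,Lovelace,eng\nAlan,Turing,research\n")

for row in csv.DictReader(export):
    form = {
        "full_name": f'{row["first"]} {row["last"]}',
        "email": f'{row["first"].lower()}@example.com',   # invented convention
        "department": row["dept"].upper(),
    }
    print("submitting form:", form)  # stand-in for UI actions
```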

RPA typically deals with structured data, whereas AI is used to gather insights from semi-structured and unstructured data in text, documents, social media posts, and PDFs. AI can expand the capabilities of RPA by processing and converting such data into a structured form that RPA can understand, and by using natural language processing to let users communicate with the RPA software in more natural ways.

15. AI Models

AI models are software programs trained to perform specific tasks by analyzing large data sets. Typically, the model is trained on specific data to recognize certain patterns and relationships, and it then uses this information to make predictions. AI models are based on complex, predefined algorithms and are trained to continuously improve their accuracy on the available data. Think of each model as a different algorithm, such that an input to the model will produce a predictable output based on the training data and algorithm used. The top four models I hear about are listed below (with a small code sketch after the list), but there are many more.

  • Linear Regression
  • Neural Networks
  • Decision Trees
  • Random Forest
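
As promised, here is a hedged sketch that fits three of these models to the same tiny, invented dataset with scikit-learn (neural networks got their own sketch in section 6):

```python
# Three model families, one toy dataset (pip install scikit-learn).
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

X = [[1], [2], [3], [4], [5]]
y = [1.9, 4.1, 6.0, 8.1, 9.9]   # roughly y = 2x, invented

for Model in (LinearRegression, DecisionTreeRegressor, RandomForestRegressor):
    model = Model().fit(X, y)
    print(Model.__name__, model.predict([[6]]))
```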

16. Computer Vision

Computer vision is a branch of AI that enables computers to “see” images and videos, allowing them to understand visual data much like humans do, but quickly and at scale. It utilizes deep learning and neural networks to make predictions about what it “sees” based on past data from which it has learned. Prevalent use cases include facial and object recognition.

The development of self-driving vehicles might be the most ambitious computer vision project: it requires computers to rapidly interpret visual input from sensors and cameras to identify traffic elements, road features, pedestrians, and other potential safety issues. Another promising development in computer vision enables rapid detection of anomalies in an X-ray or MRI, supporting an accurate diagnosis based on past data from thousands or even millions of examples. These models can draw on exponentially more “experience” than any human doctor could hope to gain in a lifetime of work.
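
A hedged sketch of the recognition piece: classifying a hypothetical local photo with a pretrained network from torchvision. Exact weight names vary across library versions.

```python
# Image classification with a pretrained ResNet
# (pip install torch torchvision pillow).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical photo

with torch.no_grad():
    probs = model(img).softmax(dim=1)
print(weights.meta["categories"][probs.argmax()])     # top predicted label
```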

17. Explainable AI

Explainable AI refers to a branch of AI in which the methods a model uses to produce an output can be understood and explained by human experts. This contrasts with the “black box” nature of many generative AI models, which sometimes produce almost magical responses but also produce errors and hallucinations. And with millions or billions of parameters, it might be impossible to know how the AI arrived at its answer. Explainable AI tries to solve this problem, with the goal of building trust and transparency in AI and its predictions. Many AI models have been built around the workings of the human brain.

Like humans, the results can be unpredictable. No one has ever argued that we need to be able to explain exactly what made a person think in a certain way or respond in a peculiar fashion. Yet maybe we want to hold AI to a higher standard than ourselves – hence the rise of explainable AI. Certain use cases are the best candidates for explainable AI: medical diagnosis and criminal justice are two prime examples where confidence in the system and the model is critical. There are sure to be interesting debates on this topic: which is better, a human doctor who can explain their opinion, or an AI model that cannot?
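
One simple flavor of explainability in practice: tree-based models can report how much each input feature drove their decisions, unlike an opaque billion-parameter network. This sketch uses scikit-learn with invented medical-style features and labels:

```python
# Feature importances as a basic explainability signal
# (pip install scikit-learn).
from sklearn.ensemble import RandomForestClassifier

X = [[62, 1, 140], [45, 0, 120], [70, 1, 160], [38, 0, 115]]
y = [1, 0, 1, 0]                         # toy "diagnosis" labels
features = ["age", "smoker", "systolic_bp"]

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, score in zip(features, model.feature_importances_):
    print(f"{name}: {score:.2f}")        # how much each feature mattered
```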

18. Algorithms

Algorithms, in mathematics and computer science, are sets of instructions to be followed to reach an answer. Most people think of an algorithm as a static set of instructions that always processes in the same way and produces the same results, like any of the mathematical formulas you learned in high school. AI algorithms are much more complex and have the unique feature of producing different answers as they learn. AI algorithms are often grouped into the following three categories (a small clustering sketch follows the list):

  • Classification algorithms are used to categorize data into classes based on its features. The algorithm learns patterns from labeled examples in a training set, allowing it to classify future examples from new input data. These are also described as supervised learning algorithms.
  • Regression algorithms are another form of supervised learning, used to predict values based on rules or patterns found in a dataset. A good example would be fitting a line through some data points and using it to predict future values.
  • Clustering algorithms are unsupervised algorithms that learn patterns and useful insights from data without any guidance or labeled data sets. Generative AI uses such algorithms to find patterns in grammar, speech, or images, not only to predict the next word in a sentence or edit a picture, but to create entirely new content.
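
Here is the promised sketch of the clustering category, using scikit-learn’s KMeans on invented points: no labels are provided, yet the algorithm groups similar points on its own.

```python
# Unsupervised clustering with KMeans (pip install scikit-learn).
from sklearn.cluster import KMeans

points = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],   # one natural cluster
          [8.0, 8.0], [8.3, 7.9], [7.8, 8.2]]   # another
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)   # e.g., [0 0 0 1 1 1]
```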

19. Internet of Things (IoT)

The Internet of Things (IoT) refers to the vast network of connected devices and the technology that facilitates communication between those devices and other systems, the cloud, or each other. Thanks to the incredible advance of technology, we now have billions of connected devices. Your light switches, thermostats, toasters, cars, and even dog collars use sensors to collect data and report back to users via the internet. This massive amount of semi-structured data may yield a treasure trove of information to savvy marketers or users. Devices can log endless streams of data, some useful and some not.

Until recently, mining useful insights from all this data was extremely difficult and expensive. New AI and machine learning technologies, along with cloud computing, have revolutionized this space. As an example, consider the wearable devices now worn by nearly a third of US adults. These devices can not only connect to your phone or computer to monitor health data, but can also warn of the risk of heart attack, stroke, or other issues based on data transmitted over the internet and analyzed using AI and machine learning.
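
A toy sketch of that kind of monitoring: flag heart-rate readings that deviate sharply from a recent rolling average. The readings and threshold are invented; a real system would use trained models rather than a fixed rule.

```python
# Simple anomaly flagging on a (made-up) wearable sensor stream.
from statistics import mean

readings = [72, 74, 71, 73, 75, 118, 72, 70]   # heart rate in bpm

window = 4
for i in range(window, len(readings)):
    baseline = mean(readings[i - window:i])
    if abs(readings[i] - baseline) > 25:       # invented threshold
        print(f"alert: {readings[i]} bpm vs. baseline {baseline:.0f}")
```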

20. Supervised and unsupervised learning

Supervised and unsupervised learning are the two primary methods by which AI systems are trained on data sets, with the main difference being the need for labeled training data. With supervised learning, input and output data are labeled in order to train, or supervise, the AI algorithm in classifying new data or predicting outcomes. As more data is processed, the results become more accurate over time thanks to the feedback loop in the algorithm.

For example, if an algorithm is trained on millions of past financial transactions, some labeled as fraudulent, it can learn to identify new fraudulent activity. With unsupervised learning, the data is not tagged or labeled at all, and the algorithm uses various techniques to discover the structure of a dataset and group the data based on connections and similar characteristics. These algorithms may still require human validation or classification of the results. For example, if you train an image recognition algorithm on millions of pictures of animals, it can accurately group all of the zebras together but may still require a human to label the group as “zebra.”
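
A hedged sketch of the supervised fraud example: human-labeled transactions train a classifier that then flags a new one. The features (amount, hour of day, foreign-card flag) and values are invented for illustration.

```python
# Supervised learning on labeled transactions (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

X = [[25, 14, 0], [3200, 3, 1], [40, 9, 0], [2900, 2, 1]]  # amount, hour, foreign
y = [0, 1, 0, 1]                       # 1 = labeled fraudulent by humans

model = LogisticRegression().fit(X, y)
print(model.predict([[3100, 4, 1]]))   # -> [1], flagged as likely fraud
```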

About Mark Metz
Mark Metz is a serial entrepreneur and the founder of NerdRabbit. Mark is currently Chairman of Catalyst Tech Ventures, the parent company of NerdRabbit. During college at Furman University, Mark was twice named Southern Conference Swimmer of the Year and was an Olympic Trials qualifier in breaststroke. In his spare time, Mark enjoys fishing, tennis, golf, photography, and technology.
