Large Language Models - all you need to know
What are Large Language Models and Generative AI?
To give you an overview, it is first important to understand the context:
- Generative AI is a subfield of Artificial Intelligence (AI) that has seen massive growth since the release of ChatGPT in 2022. AI has many subfields, such as Machine Learning, Deep Learning, Natural Language Processing, Computer Vision, Robotics, and more.
- Generative AI refers to Deep Learning models capable of generating new content like text, images, video, audio, graphs and more.
- Large Language Models (LLMs) are "foundation models" that use Deep Learning for Natural Language Processing (NLP) and Natural Language Generation (NLG) tasks. They are the underlying models behind generative AI chatbots such as ChatGPT or Bard from Google, and are used for writing, answering questions, translating languages, and coding. LLMs are pre-trained on extensive amounts of text, for example Wikipedia and other data from the internet. After pre-training, they are typically adapted to specific tasks with techniques like fine-tuning, in-context learning, or zero/one/few-shot prompting.
- You can imagine an LLM as something like a huge brain of artificial neurons that learns from the inputs it receives and can then be shaped (so-called fine-tuning) to execute certain specific tasks better than generic ones.
- Important to remember:
- LLMs mimic human language use but don't truly understand it.
- They essentially predict what comes next in a sentence, one token at a time (see the short sketch below).
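
To make that concrete, here is a minimal sketch of next-token prediction, assuming the open GPT-2 model and the Hugging Face transformers library (the model choice is purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, openly available causal language model (GPT-2 as an example).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large Language Models are trained to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every token in the vocabulary

next_token_id = logits[0, -1].argmax()   # the single most likely next token
print(tokenizer.decode(next_token_id))   # prints the model's predicted continuation
```

Chatbots repeat exactly this step many times, feeding each predicted token back into the model, which is why LLMs can sound fluent without truly "understanding" the text.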

What are Foundation Models?
Imagine you're building a house. Before you can put up walls or install a roof, you need a solid foundation that will support everything else. Foundation Models in artificial intelligence work in a similar way. These models are designed to learn from a huge amount of data about the world, sort of like a base layer of understanding. They're trained on lots of unstructured information (like text, images, or sounds from the internet) without any specific task in mind. Think of it as the foundation learning a bit about everything, from cooking recipes to sports, from science facts to popular jokes, and so much more.
Then, when we want our AI to do something specific – like answer a question, translate a sentence, or identify an object in a photo – we can build upon that foundation. Because the foundation model has already learned a lot about the world, it can be adapted or "fine-tuned" to do many different tasks across various domains.
In short, Foundation Models serve as a general basis for learning, upon which more specific, task-oriented AI models can be built. Just like a good house foundation supports many different kinds of rooms and designs, a good foundation model can support many different AI applications.
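As a rough sketch of this "building on the foundation" step, assuming the Hugging Face transformers library and DistilBERT as the pre-trained foundation model (both choices are only examples):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Reuse a pre-trained foundation model and attach a small task-specific head,
# here for a two-class task such as sentiment analysis.
base_model = "distilbert-base-uncased"   # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# The general language knowledge from pre-training is kept; only a comparatively
# small labelled dataset and a short fine-tuning run (e.g. with the Trainer API
# or a plain PyTorch loop) are needed to adapt the model to this one task.
```

The same foundation model could just as well be adapted for translation, question answering, or named entity recognition.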
What are Large Language Model Use Cases?
LLMs can be used for process automation and to boost efficiency in many tasks humans execute, such as creating text, summarising, generating ideas, translating, and detecting patterns. Below are some use cases that LLMs can perform better than earlier approaches (a short code sketch for one of them follows the list):
- Text summarization
- Text generation
- Sentiment analysis
- Content creation
- Chatbots, virtual assistants, and conversational AI
- Named entity recognition
- Speech recognition and synthesis
- Image annotation
- Text-to-speech synthesis
- Spell correction
- Machine translation
- Recommendation systems
- Fraud detection
- Code generation (see GitHub Copilot)
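
As an example for the first item, here is a minimal text-summarization sketch, assuming the Hugging Face transformers pipeline API (which downloads a default summarization model if none is specified):

```python
from transformers import pipeline

# Build a ready-to-use summarization pipeline; a specific checkpoint could
# also be passed via the `model` argument instead of relying on the default.
summarizer = pipeline("summarization")

article = (
    "Large Language Models are foundation models trained on huge amounts of text. "
    "After pre-training they can be adapted to tasks such as summarization, "
    "translation, question answering, sentiment analysis, and code generation."
)

print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```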
Need support with your Generative AI Strategy and Implementation?
🚀 AI Strategy, business and tech support
🚀 ChatGPT, Generative AI & Conversational AI (Chatbot)
🚀 Support with AI product development
🚀 AI Tools and Automation

