A Development Approach to Generative AI and LLM-Based Software Applications' Deployment
Harin Raphael
EEE S5
Roll no.58
Reg no.2301033279
Contents
1. Abstract
2. Introduction
3. Generative AI
4. Domains of Generative AI
5. Limitations of Generative AI
6. Large Language Models
7. Comparison of LLMs
8. Three LLM Deployment Strategies
9. Comparison of Complexity and Cost for the Three Approaches
10. Criteria for Monitoring Generative AI
11. Generative AI Project Life Cycle
    Gen AI/LLM Testbed
12. LLMOps Workflow
13. The Proposed Forward and Back Systematic Approach
Abstract
Focus: Generative Artificial Intelligence
(GAI) and Large Language Models (LLMs) like
GPT-3 and BERT.
Approach: “Forward and Back Systematic
Approach” for executing GAI projects.
Strategies: Leveraging Private
Generalized LLM APIs, in-context learning,
and fine-tuning.
Goal: Optimize GAI models for specific
applications and improve project outcomes.
Introduction
Generative AI: Uses statistics and deep
learning to generate artificial content
(text, images, videos, audio).
Objectives: Introduce Generative AI,
discuss LLMs, and develop a systematic
approach for deploying Generative AI
projects.
Generative AI
Generative AI is a branch of artificial intelligence (AI) focused on creating models and algorithms capable of generating new or original content, including images, text, music, and even videos.
Domains of Generative AI
Text Generation: Generates human-like text.
- Models: GPT-3
- Applications: Content creation, chatbots, and code generation.
Image Generation: Constructs realistic images.
- Models: GANs
- Applications: Art and design.
Audio Generation: Creates music, sounds, or human-like voices.
- Models: WaveGAN
- Applications: Advertisements, videos, and background tracks.
Video Generation: AI creates videos by combining existing visual material.
Limitations of Generative AI
Output errors: Mistakes arising from the probabilistic nature of the underlying algorithms.
Potential misinformation: Generated content can be indistinguishable from authentic content, potentially spreading misinformation.
Hallucinations: Generated content that is plausible but incorrect or nonsensical.
Large Language Models (LLMs)
Large Language Models (LLMs) are a class of advanced artificial intelligence models specifically designed to process and interpret human language at an extensive scale.
Comparison of LLMs

LLM     | Description                                                                                        | Model size                  | Source type
GPT     | Generates human-like text; proficient in answering questions, creating poetry, and writing code.  | 175 billion parameters      | Closed source
BERT    | Captures context from both directions; adept at understanding language nuances and relationships. | 110-340 million parameters  | Open source
T5      | Approaches NLP tasks as text-to-text problems; outstanding performance in translation, summarization, and question answering. | Up to 11 billion parameters | Open source
RoBERTa | A robustly optimized variant of BERT with improved pretraining.                                    | Varied                      | Open source
Three LLM Deployment Strategies
Private Generalized LLM API
- Allows enterprises to access LLMs while keeping their data private.
- Benefits: customization, security, and scalability.
In-Context Learning (ICL)
- The LLM makes predictions from the context and a few training examples supplied in the prompt.
- Workflow: Data Preprocessing, Prompt Construction, Prompt Execution (see the sketch after this list).
- Advantages: Easier than fine-tuning; no need for a dedicated ML team or infrastructure.
Fine-Tuning
- Adjusts the model's weights on task-specific data for the deepest customization.
- Trade-off: the highest complexity and cost of the three strategies (see the comparison table that follows).
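To make the ICL workflow concrete, here is a minimal Python sketch of its three steps: data preprocessing, prompt construction, and prompt execution. It assumes an OpenAI-style chat-completion API; the model name, example data, and prompt format are illustrative placeholders, not part of the source material.

```python
# Minimal in-context learning (ICL) sketch covering the three workflow
# steps. Assumes the `openai` Python client (v1+) and a chat-completion
# endpoint; the model name and example data are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Data preprocessing: normalize a handful of labeled examples.
examples = [
    ("The battery died after one day.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]
examples = [(text.strip(), label) for text, label in examples]

# 2. Prompt construction: prepend the labeled examples to the query.
def build_prompt(query: str) -> str:
    shots = "\n".join(f"Review: {t}\nSentiment: {l}" for t, l in examples)
    return f"{shots}\nReview: {query.strip()}\nSentiment:"

# 3. Prompt execution: send the prompt and read back the prediction.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": build_prompt("Great value for the price.")}],
)
print(response.choices[0].message.content)
```

No model weights change here, which is why ICL avoids the dedicated ML infrastructure that fine-tuning requires.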
Comparison of Complexity and Cost for the Three LLM Approaches

Approach                                                  | Complexity | Cost
Private Generalized LLM API                               | Low        | Variable (Low to High)
ICL (design strategy to enable LLMs for different cases)  | Medium     | Medium
Fine-tuning                                               | High       | High
Criteria for Monitoring Generative AI Models
1. Correctness: Accuracy and alignment with
desired outcomes.
2. Performance: Fluency, coherence, and relevance.
3. Cost: Computational resources and expenses.
4. Robustness: Handling diverse inputs and adapting to context.
5. Prompt Monitoring: Aligning prompts with
ethical guidelines.
6. Latency: Response time and timely interactions.
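As a rough illustration of how some of these criteria could be tracked in practice, the Python sketch below wraps a model call and records latency, cost, and a simple correctness signal per request. The per-token price, the keyword-based correctness check, and the stub model are all assumptions for demonstration, not a prescribed monitoring stack.

```python
# Sketch: wrap an LLM call to record latency, cost, and a basic
# correctness signal per request. The per-token price and the
# keyword-based correctness check are illustrative assumptions.
import time

PRICE_PER_1K_TOKENS = 0.002  # placeholder rate, not a real price list

def monitored_call(llm_fn, prompt: str, expected_keyword: str | None = None) -> dict:
    start = time.perf_counter()
    reply, tokens_used = llm_fn(prompt)  # llm_fn returns (text, token count)
    latency_s = time.perf_counter() - start
    return {
        "latency_s": round(latency_s, 3),                      # criterion 6: Latency
        "cost_usd": tokens_used / 1000 * PRICE_PER_1K_TOKENS,  # criterion 3: Cost
        "correct": (expected_keyword.lower() in reply.lower()  # criterion 1: Correctness
                    if expected_keyword else None),
        "reply": reply,
    }

# Example with a stubbed model function standing in for a real LLM:
metrics = monitored_call(lambda p: ("Paris is the capital of France.", 12),
                         "What is the capital of France?",
                         expected_keyword="Paris")
print(metrics)
```

Fluency, coherence, and prompt-ethics checks (criteria 2 and 5) typically need model-based or human evaluation rather than a simple wrapper like this.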
Generative AI Project Life Cycle
Gen AI/LLM Testbed
A controlled environment for researching, testing, and
evaluating LLMs, focusing on innovation, ethics, safety,
and performance.
LLMOps Workflow
1. Data: Collect and preprocess training data.
2. Model Development: Utilize an open-source
foundation model and fine-tune it on
specific data.
3. Training/Fine-Tuning: Adjust model weights for
specific tasks.
4. Trained Model: Customized model for specific tasks.
5. Deployment & Usage:
- Deploy the model to its target environment (self-hosted or hosted); steps 1-4 are sketched in code below.
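A minimal sketch of steps 1-4 of this workflow using the Hugging Face transformers and datasets libraries. The base model (gpt2 stands in for any open-source foundation model), the corpus.txt data file, and the hyperparameters are placeholder assumptions.

```python
# Sketch of LLMOps steps 1-4: load and preprocess data, take an
# open-source foundation model, fine-tune it, and save the result.
# Model name, data file, and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

model_name = "gpt2"  # stand-in for any open-source foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# 1. Data: collect and preprocess (tokenize) the training corpus.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# 2-3. Model development and training/fine-tuning: adjust the weights.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# 4. Trained model: save the customized weights for deployment (step 5).
trainer.save_model("out/custom-model")
```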
The Proposed Forward and Back Systematic Approach to a Generative AI Project
1. Define Objectives: Identify key areas where LLMs can add
value and align with broader goals.
2. Select LLM: Evaluate and choose the most suitable LLM based
on capabilities, scalability, and
compatibility.
3. Collect Data: Gather relevant, high-quality data that aligns
with objectives and model capabilities.
4. Customize LLM: Tailor the LLM to specific requirements
through fine-tuning, parameter adjustment,
and domain-specific knowledge integration.
5. Deploy LLM: Launch the customized LLM into a real-world environment, ensuring thorough testing and integration (a pipeline sketch of all five steps follows).
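One possible way to express the five steps as a single pipeline skeleton in Python. Every function name and body here is an illustrative stub of my own, and the "back" part of the approach is modeled, as an assumption, as revisiting earlier steps when deployment testing fails.

```python
# Skeleton of the proposed five-step approach as one pipeline.
# All functions are illustrative stubs, not a fixed API; the "back"
# movement is assumed to mean re-running earlier steps on failure.
def define_objectives(): return {"task": "summarize support tickets"}
def select_llm(objectives): return "open-source-llm"       # placeholder choice
def collect_data(objectives): return ["example ticket text"]
def customize_llm(llm, data): return f"{llm}-fine-tuned"
def deploy_llm(model): return {"healthy": True, "model": model}

def run_project(max_iterations: int = 3):
    objectives = define_objectives()          # 1. Define Objectives
    for _ in range(max_iterations):
        llm = select_llm(objectives)          # 2. Select LLM
        data = collect_data(objectives)       # 3. Collect Data
        model = customize_llm(llm, data)      # 4. Customize LLM
        status = deploy_llm(model)            # 5. Deploy LLM
        if status["healthy"]:
            return status
        # "Back": deployment checks failed, so loop to earlier steps.
    raise RuntimeError("Deployment did not pass testing")

print(run_project())
```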
Conclusion
- Generative AI, particularly Large Language Models (LLMs), has reached a significant level of technological sophistication.
- The proposed “Forward and Back Systematic Approach”
enhances text generation accuracy,
especially with private or specific raw data.
- The approach tailors LLMs to specific, case-driven needs
through strategies like Private
Generalized LLM APIs, ICL, and fine-tuning.
Future Directions
- Experimental validation of proposed approaches.
Thank you