Quick Start Guide to Large Language Models (paperback, English)
559 kr (ordinary price 605 kr)
Out of stock
Product description
The Practical, Step-by-Step Guide to Using LLMs at Scale in Projects and Products. Large Language Models (LLMs) like Llama 3, Claude 2, and the GPT family are demonstrating breathtaking capabilities, but their size and complexity have deterred many practitioners from applying them.
In Quick Start Guide to Large Language Models, Second Edition, pioneering data scientist and AI entrepreneur Sinan Ozdemir clears away those obstacles and provides a guide to working with, integrating, and deploying LLMs to solve practical problems. Ozdemir brings together all you need to get started, even if you have no direct experience with LLMs: step-by-step instructions, best practices, real-world case studies, hands-on exercises, and more.
Along the way, he shares insights into LLMs' inner workings to help you optimize model choice, data formats, prompting, fine-tuning, performance, and much more. The resources on the companion website include sample datasets and up-to-date code for working with open- and closed-source LLMs such as those from OpenAI (GPT-4 and GPT-3.5), Google (BERT, T5, and Gemma), X (Grok), Anthropic (the Claude family), Cohere (the Command family), and Meta (BART and the LLaMA family).
- Learn key concepts: pre-training, transfer learning, fine-tuning, attention, embeddings, tokenization, and more
- Use APIs and Python to fine-tune and customize LLMs for your requirements
- Build a complete neural/semantic information retrieval system and attach it to conversational LLMs to create retrieval-augmented generation (RAG) chatbots and AI agents
- Master advanced prompt engineering techniques like output structuring, chain-of-thought prompting, and semantic few-shot prompting
- Customize LLM embeddings to build a complete recommendation engine from scratch with user data that outperforms out-of-the-box embeddings from OpenAI
- Construct and fine-tune multimodal Transformer architectures from scratch using open-source LLMs and large visual datasets
- Align LLMs using Reinforcement Learning from Human and AI Feedback (RLHF/RLAIF) to build conversational agents from open-source models like Llama 3
- Deploy prompts and custom fine-tuned LLMs to the cloud with scalability and evaluation pipelines in mind
- Diagnose and optimize LLMs for speed, memory, and performance with quantization, probing, benchmarking, and evaluation frameworks
Format | Paperback |
Pages | 352 |
Language | English |
Publisher | Pearson Education (US) |
Publication date | 2024-06-18 |
ISBN | 9780135346563 |
Delivery
We offer several convenient delivery options depending on your postal code, such as Budbee Box, Early Bird, Instabox, and DB Schenker. Delivery is free on orders over 299 kr; otherwise a shipping fee from 29 kr applies. Choose the option that suits you best.
Payment
You can pay safely and easily via Avarda with several options: Swish for quick payment, card payment with VISA or MasterCard, invoice with 30 days' payment terms, or an account for flexible installment payments.