AWS AI Practitioner Training Course


AWS Certified AI Practitioner Internationally Recognised Certificate Course
Course short name: AWS AI Practitioner Training Course

  • Course schedule
  • Course introduction
  • Exam information
  • Course content
  • Detailed content

Featured service: watch the class recordings at any time (at-home viewing = 0%, in-centre viewing = 100%)
Students may enrol via WhatsApp, phone, or this web page. Once the centre confirms that a place has been reserved, the course fee can be paid by FPS (Faster Payment System), keeping the process simple.
Code | Location | Available days and times | Fee (as low as 10% off)
GS2506AV | Any location | See individual locations | $3,980 | Click here to enrol
GS2506MV | Mong Kok | Mon to Fri: 14:30 - 22:15; Sat: 13:45 - 21:30; Sun: 10:15 - 18:00 (closed on public holidays) | $3,781 after 5% discount | Click here to enrol
GS2506OV | Kwun Tong | Mon to Fri: 14:15 - 22:00; Sat and Sun: 12:15 - 20:00 (closed on Wednesdays and public holidays) | $3,582 after 10% discount | Click here to enrol
GS2506PV | North Point | Mon to Fri: 14:15 - 22:00; Sat and Sun: 12:15 - 20:00 (closed on Wednesdays and public holidays) | $3,582 after 10% discount | Click here to enrol
GS2506SV | Sha Tin | Mon to Fri: 14:15 - 22:00; Sat and Sun: 12:15 - 20:00 (closed on Wednesdays and public holidays) | $3,582 after 10% discount | Click here to enrol
GS2506YV | Tuen Mun | Mon to Fri: 14:15 - 22:00; Sat and Sun: 12:15 - 20:00 (closed on Mondays, Wednesdays and public holidays) | $3,582 after 10% discount | Click here to enrol
* Government departments may pay by P Card.
If the exam fee is paid by P Card, a 2.5% administration fee is added to the exam fee.
Free in-centre trial viewing: the first hour is free; please phone the centre staff to book. Phone numbers by location:
Mong Kok 2332-6544
Kwun Tong 3563-8425
North Point 3580-1893
Sha Tin 2151-9360
Tuen Mun 3523-1560
Free in-centre re-viewing: during the access period, students may rewatch the class recordings an unlimited number of times at the location where they enrolled, so the whole course can be revised again and again!
Tutor Q&A: after watching a recorded lesson, students may ask questions directly related to that lesson, and the course tutor will be happy to answer them one-on-one!
Course duration: 18 hours
Access period: 6 weeks. You control the pace, fast or slow.
Recorded-lesson tutor: Franco (list of courses taught)
In-centre viewing: details and demo clips


District | Address | Phone | EDB registration no.
Mong Kok | Rooms 1802-1807, 18/F, 皆旺商業大廈, 109 Argyle Street, Mong Kok, Kowloon | 2332-6544 | 533459
Kwun Tong | Unit G2, 12/F, 寧晉中心, 7 Shing Yip Street, Kwun Tong, Kowloon | 3563-8425 | 588571
North Point | Shops 01-02, 3/F, 華寶商業大廈, 41-47 Marble Road, North Point, Hong Kong | 3580-1893 | 591262
Sha Tin | Room M, 10/F, Kings Wing Plaza 1, 3 On Kwan Street, Shek Mun, Sha Tin, New Territories | 2151-9360 | 604488
Tuen Mun | Room 1708, 17/F, Parklane Square, 2 Tuen Hi Road, Tuen Mun, New Territories | 3523-1560 | 592552
Note: customers should check the Education Bureau registration number of the school with which they enrol, to confirm that it is a registered school and avoid unnecessary losses.


The AWS Certified AI Practitioner course is designed for anyone who wants an in-depth understanding of artificial intelligence (AI), machine learning (ML), and generative AI. It covers not only the fundamental concepts behind these technologies but also the related AWS services and tools, so that students can apply them flexibly in real-world scenarios. The course covers:

  • Fundamentals: a comprehensive introduction to the core concepts, methods, and strategies of AI, ML, and generative AI, helping students build a solid theoretical foundation.
  • Practical application: learn how to ask the right questions within an organisation and understand when AI/ML and generative AI are applicable, so that real business challenges can be solved more effectively.
  • Technology selection: guidance on choosing the appropriate AI/ML technology for a specific use case, strengthening decision-making and problem-solving skills.
  • Responsible use: emphasis on the importance of using AI and ML responsibly, so that students apply these technologies with ethical awareness.
Our AWS Certified AI Practitioner internationally recognised certificate course has been carefully prepared by Franco over a long period. From lessons, revision, and exam study to practice questions and the exam itself, everything is tailored and systematically arranged for you, with the twin aims of genuinely teaching you the material and getting you through the exam.

Course title: AWS Certified AI Practitioner Internationally Recognised Certificate Course
- Short name: AWS AI Practitioner Training Course
Course duration: 18 hours in total (6 lessons)
Suitable for: anyone interested in cloud AI technology
Teaching language: mainly Cantonese, supplemented with English
Course notes: written by the centre's own tutor, mainly in English, with Chinese glosses for some English terms.
Mock exam questions: the centre provides students with about 100 mock exam questions, each with a model answer.

Pass the following exam and AWS will award you the internationally recognised AWS Certified AI Practitioner certificate:

Exam code | Exam title
AIF-C01 | AWS Certified AI Practitioner

The centre is a VUE-designated exam centre for the AWS Certified AI Practitioner exam, and the tutor will explain the exam procedure in class. The exam fee is USD $100.




Domain 1: Fundamentals of AI and ML

  • Explain basic AI concepts and terminologies.
    • Define basic AI terms (for example, AI, ML, deep learning, neural networks, computer vision, natural language processing [NLP], model, algorithm, training and inferencing, bias, fairness, fit, large language model [LLM]).
    • Describe the similarities and differences between AI, ML, and deep learning.
    • Describe various types of inferencing (for example, batch, real-time).
    • Describe the different types of data in AI models (for example, labeled and unlabeled, tabular, time-series, image, text, structured and unstructured).
    • Describe supervised learning, unsupervised learning, and reinforcement learning.
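
Since the outline above distinguishes supervised from unsupervised learning, a toy, dependency-free sketch may make the distinction concrete. The data and thresholds here are invented for illustration; real projects would use a library such as scikit-learn or a managed service such as Amazon SageMaker.

```python
# Toy illustration (not an AWS API): supervised learning predicts labels
# from labeled examples; unsupervised learning finds structure without labels.
import math

# --- Supervised: 1-nearest-neighbour classification on labeled points ---
labeled = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((8.0, 9.0), "dog")]

def classify(point):
    # Predict the label of the closest training example.
    return min(labeled, key=lambda ex: math.dist(point, ex[0]))[1]

# --- Unsupervised: group unlabeled points by a distance threshold ---
def cluster(points, threshold=2.0):
    clusters = []
    for p in points:
        for c in clusters:
            if math.dist(p, c[0]) < threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

print(classify((1.1, 0.9)))                      # near the "cat" examples
print(len(cluster([(0, 0), (0.5, 0), (9, 9)])))  # two natural groups
```

Note the asymmetry: the classifier needs labels at training time, while the clustering step only needs the raw points.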

  • Identify practical use cases for AI.
    • Recognize applications where AI/ML can provide value (for example, assist human decision making, solution scalability, automation).
    • Determine when AI/ML solutions are not appropriate (for example, cost-benefit analyses, situations when a specific outcome is needed instead of a prediction).
    • Select the appropriate ML techniques for specific use cases (for example, regression, classification, clustering).
    • Identify examples of real-world AI applications (for example, computer vision, NLP, speech recognition, recommendation systems, fraud detection, forecasting).
    • Explain the capabilities of AWS managed AI/ML services (for example, SageMaker, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Lex, Amazon Polly).

  • Describe the ML development lifecycle.
    • Describe components of an ML pipeline (for example, data collection, exploratory data analysis [EDA], data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring).
    • Understand sources of ML models (for example, open source pre-trained models, training custom models).
    • Describe methods to use a model in production (for example, managed API service, self-hosted API).

    • Identify relevant AWS services and features for each stage of an ML pipeline (for example, SageMaker, Amazon SageMaker Data Wrangler, Amazon SageMaker Feature Store, Amazon SageMaker Model Monitor).
    • Understand fundamental concepts of ML operations (MLOps) (for example, experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training).
    • Understand model performance metrics (for example, accuracy, Area Under the ROC Curve [AUC], F1 score) and business metrics (for example, cost per user, development costs, customer feedback, return on investment [ROI]) to evaluate ML models.
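
The model performance metrics named above (accuracy, precision, recall, F1) can be computed by hand, which is a useful exam-revision exercise. A minimal from-scratch sketch with invented predictions for a binary classifier:

```python
# Compute basic classification metrics for a binary task (1 = positive).
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

m = evaluate(y_true=[1, 1, 0, 0, 1], y_pred=[1, 0, 0, 1, 1])
print(m)  # accuracy 0.6; precision, recall, and F1 all 2/3
```

AUC, by contrast, needs the model's scores rather than hard predictions, which is why it appears separately in the outline.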

Domain 2: Fundamentals of Generative AI

  • Explain the basic concepts of generative AI.
    • Understand foundational generative AI concepts (for example, tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models, multi-modal models, diffusion models).
    • Identify potential use cases for generative AI models (for example, image, video, and audio generation; summarization; chatbots; translation; code generation; customer service agents; search; recommendation engines).
    • Describe the foundation model lifecycle (for example, data selection, model selection, pre-training, fine-tuning, evaluation, deployment, feedback).
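
As a concrete illustration of the "chunking" concept listed above, here is a minimal word-window chunker of the kind used to prepare documents before computing embeddings. The window and overlap sizes are arbitrary choices for illustration, not AWS defaults.

```python
# Split a document into overlapping word-window chunks.
def chunk(text, size=8, overlap=2):
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

doc = "foundation models are pre-trained on broad data and then adapted to downstream tasks"
for c in chunk(doc):
    print(c)  # consecutive chunks share `overlap` words of context
```

The overlap preserves context across chunk boundaries, so a sentence cut in half still appears whole in at least one chunk.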

  • Understand the capabilities and limitations of generative AI for solving business problems.
    • Describe the advantages of generative AI (for example, adaptability, responsiveness, simplicity).
    • Identify disadvantages of generative AI solutions (for example, hallucinations, interpretability, inaccuracy, nondeterminism).
    • Understand various factors to select appropriate generative AI models (for example, model types, performance requirements, capabilities, constraints, compliance).
    • Determine business value and metrics for generative AI applications (for example, cross-domain performance, efficiency, conversion rate, average revenue per user, accuracy, customer lifetime value).

  • Describe AWS infrastructure and technologies for building generative AI applications.
    • Identify AWS services and features to develop generative AI applications (for example, Amazon SageMaker JumpStart; Amazon Bedrock; PartyRock, an Amazon Bedrock Playground; Amazon Q).
    • Describe the advantages of using AWS generative AI services to build applications (for example, accessibility, lower barrier to entry, efficiency, cost-effectiveness, speed to market, ability to meet business objectives).
    • Understand the benefits of AWS infrastructure for generative AI applications (for example, security, compliance, responsibility, safety).
    • Understand cost tradeoffs of AWS generative AI services (for example, responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models).

Domain 3: Applications of Foundation Models

  • Describe design considerations for applications that use foundation models.
    • Identify selection criteria to choose pre-trained models (for example, cost, modality, latency, multi-lingual, model size, model complexity, customization, input/output length).
    • Understand the effect of inference parameters on model responses (for example, temperature, input/output length).
    • Define Retrieval Augmented Generation (RAG) and describe its business applications (for example, Amazon Bedrock, knowledge base).
    • Identify AWS services that help store embeddings within vector databases (for example, Amazon OpenSearch Service, Amazon Aurora, Amazon Neptune, Amazon DocumentDB [with MongoDB compatibility], Amazon RDS for PostgreSQL).
    • Explain the cost tradeoffs of various approaches to foundation model customization (for example, pre-training, fine-tuning, in-context learning, RAG).
    • Understand the role of agents in multi-step tasks (for example, Agents for Amazon Bedrock).

  • Choose effective prompt engineering techniques.
    • Describe the concepts and constructs of prompt engineering (for example, context, instruction, negative prompts, model latent space).
    • Understand techniques for prompt engineering (for example, chain-of-thought, zero-shot, single-shot, few-shot, prompt templates).
    • Understand the benefits and best practices for prompt engineering (for example, response quality improvement, experimentation, guardrails, discovery, specificity and concision, using multiple comments).
    • Define potential risks and limitations of prompt engineering (for example, exposure, poisoning, hijacking, jailbreaking).
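
Of the techniques listed above, few-shot prompting is the easiest to show in code: the prompt embeds a handful of worked examples before the real input. The template, example reviews, and labels below are invented for illustration.

```python
# A few-shot prompt template: two worked examples, then the real input.
FEW_SHOT_TEMPLATE = """Classify the sentiment of the review as Positive or Negative.

Review: "Great battery life" -> Positive
Review: "Broke after two days" -> Negative
Review: "{review}" ->"""

def few_shot_prompt(review):
    return FEW_SHOT_TEMPLATE.format(review=review)

print(few_shot_prompt("Fast delivery and works well"))
```

Zero-shot prompting is the same template with the two worked examples removed; single-shot keeps exactly one.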

  • Describe the training and fine-tuning process for foundation models.
    • Describe the key elements of training a foundation model (for example, pre-training, fine-tuning, continuous pre-training).
    • Define methods for fine-tuning a foundation model (for example, instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training).
    • Describe how to prepare data to fine-tune a foundation model (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF]).

  • Describe methods to evaluate foundation model performance.
    • Understand approaches to evaluate foundation model performance (for example, human evaluation, benchmark datasets).
    • Identify relevant metrics to assess foundation model performance (for example, Recall-Oriented Understudy for Gisting Evaluation [ROUGE], Bilingual Evaluation Understudy [BLEU], BERTScore).
    • Determine whether a foundation model effectively meets business objectives (for example, productivity, user engagement, task engineering).
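
ROUGE, listed above, is essentially an n-gram overlap score. A from-scratch sketch of ROUGE-1 recall (the fraction of reference unigrams that also appear in the candidate) makes the idea concrete; production evaluation would use an established metrics library rather than this toy.

```python
# Toy ROUGE-1 recall: reference-unigram overlap with the candidate summary.
from collections import Counter

def rouge1_recall(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

score = rouge1_recall(
    candidate="the model summarizes the report",
    reference="the model summarizes the quarterly report",
)
print(round(score, 3))  # 5 of 6 reference words recovered
```

BLEU flips the direction (precision of candidate n-grams against the reference), and BERTScore replaces exact word matches with embedding similarity.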

Domain 4: Guidelines for Responsible AI

  • Explain the development of AI systems that are responsible.
    • Identify features of responsible AI (for example, bias, fairness, inclusivity, robustness, safety, veracity).
    • Understand how to use tools to identify features of responsible AI (for example, Guardrails for Amazon Bedrock).
    • Understand responsible practices to select a model (for example, environmental considerations, sustainability).
    • Identify legal risks of working with generative AI (for example, intellectual property infringement claims, biased model outputs, loss of customer trust, end user risk, hallucinations).
    • Identify characteristics of datasets (for example, inclusivity, diversity, curated data sources, balanced datasets).
    • Understand effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting).
    • Describe tools to detect and monitor bias, trustworthiness, and truthfulness (for example, analyzing label quality, human audits, subgroup analysis, Amazon SageMaker Clarify, SageMaker Model Monitor, Amazon Augmented AI [Amazon A2I]).
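
The "subgroup analysis" idea above reduces to a simple computation: compare a model's accuracy across demographic groups and flag large gaps. The records below are invented illustration data; services such as Amazon SageMaker Clarify automate this kind of analysis at scale.

```python
# Per-group accuracy as a minimal bias check.
records = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 0, "pred": 0},
]

def accuracy_by_group(rows):
    groups = {}
    for r in rows:
        groups.setdefault(r["group"], []).append(r["label"] == r["pred"])
    return {g: sum(hits) / len(hits) for g, hits in groups.items()}

print(accuracy_by_group(records))  # a large gap between groups warrants investigation
```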

  • Recognize the importance of transparent and explainable models.
    • Understand the differences between models that are transparent and explainable and models that are not transparent and explainable.
    • Understand the tools to identify transparent and explainable models (for example, Amazon SageMaker Model Cards, open source models, data, licensing).
    • Identify tradeoffs between model safety and transparency (for example, measure interpretability and performance).
    • Understand principles of human-centered design for explainable AI.

Domain 5: Security, Compliance, and Governance for AI Solutions

  • Explain methods to secure AI systems.
    • Identify AWS services and features to secure AI systems (for example, IAM roles, policies, and permissions; encryption; Amazon Macie; AWS PrivateLink; AWS shared responsibility model).
    • Understand the concept of source citation and documenting data origins (for example, data lineage, data cataloging, SageMaker Model Cards).
    • Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity).
    • Understand security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit).

  • Recognize governance and compliance regulations for AI systems.
    • Identify regulatory compliance standards for AI systems (for example, International Organization for Standardization [ISO], System and Organization Controls [SOC], algorithm accountability laws).
    • Identify AWS services and features to assist with governance and regulation compliance (for example, AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor).
    • Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention).
    • Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements).

The course content above may change at any time without notice in order to better reflect the content of the examination.



1 Introduction
1.1 Goals
1.2 Out of scope for the exam

2 Fundamentals of AI and ML
2.1 Explain basic AI concepts and terminologies
2.1.1 Define basic AI terms (for example, AI, ML, deep learning, neural networks, computer vision, natural language processing [NLP], model, algorithm, training and inferencing, bias, fairness, fit, large language model [LLM])
2.1.1.1 AI (Artificial Intelligence)
2.1.1.2 ML (Machine Learning)
2.1.1.3 Deep Learning
2.1.1.4 Neural Networks
2.1.1.5 Computer Vision
2.1.1.6 Natural Language Processing (NLP)
2.1.1.7 Model
2.1.1.8 Algorithm
2.1.1.9 Training
2.1.1.10 Inferencing
2.1.1.11 Bias
2.1.1.12 Fairness
2.1.1.13 Fit
2.1.1.14 Large Language Model (LLM)
2.1.2 Describe the similarities and differences between AI, ML, and deep learning
2.1.2.1 Similarities
2.1.2.2 Differences
2.1.3 Describe various types of inferencing (for example, batch, real-time)
2.1.3.1 Inferencing
2.1.3.2 Real-Time Inferencing
2.1.3.3 Batch Inferencing
2.1.3.4 Asynchronous Inferencing
2.1.3.5 Serverless Inferencing
2.1.3.6 Additional Points for Exam
2.1.4 Describe the different types of data in AI models (for example, labeled and unlabeled, tabular, time-series, image, text, structured and unstructured)
2.1.4.1 Labeled Data
2.1.4.2 Unlabeled Data
2.1.4.3 Tabular Data
2.1.4.4 Time-Series Data
2.1.4.5 Image Data
2.1.4.6 Text Data
2.1.4.7 Structured Data
2.1.4.8 Unstructured Data
2.1.5 Describe supervised learning, unsupervised learning, and reinforcement learning
2.1.5.1 Supervised Learning
2.1.5.2 Unsupervised Learning
2.1.5.3 Reinforcement Learning
2.2 Identify practical use cases for AI
2.2.1 Recognize applications where AI/ML can provide value (for example, assist human decision making, solution scalability, automation)
2.2.1.1 Assisting Human Decision Making
2.2.1.2 Solution Scalability
2.2.1.3 Automation
2.2.2 Determine when AI/ML solutions are not appropriate (for example, cost-benefit analyses, situations when a specific outcome is needed instead of a prediction)
2.2.2.1 High Cost vs. Expected Benefit
2.2.2.2 Need for Deterministic, Specific Outcomes
2.2.2.3 Availability and Quality of Data
2.2.3 Select the appropriate ML techniques for specific use cases (for example, regression, classification, clustering)
2.2.3.1 Regression
2.2.3.2 Classification
2.2.3.3 Clustering
2.2.4 Identify examples of real-world AI applications (for example, computer vision, NLP, speech recognition, recommendation systems, fraud detection, forecasting)
2.2.4.1 Computer Vision
2.2.4.2 Natural Language Processing (NLP)
2.2.4.3 Speech Recognition
2.2.4.4 Recommendation Systems
2.2.4.5 Fraud Detection / Anomaly Detection
2.2.4.6 Forecasting & Demand Prediction
2.2.5 Explain the capabilities of AWS managed AI/ML services (for example, SageMaker, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Lex, Amazon Polly)
2.2.5.1 Amazon SageMaker
2.2.5.2 Amazon Transcribe
2.2.5.3 Amazon Translate
2.2.5.4 Amazon Comprehend
2.2.5.5 Amazon Lex
2.2.5.6 Amazon Polly
2.3 Describe the ML development lifecycle
2.3.1 Describe components of an ML pipeline (for example, data collection, exploratory data analysis [EDA], data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring)
2.3.1.1 Components of an ML Pipeline
2.3.1.1.1 Data Collection
2.3.1.1.2 Exploratory Data Analysis (EDA)
2.3.1.1.3 Data Pre-Processing
2.3.1.1.4 Feature Engineering
2.3.1.1.5 Model Training
2.3.1.1.6 Hyperparameter Tuning
2.3.1.1.7 Evaluation
2.3.1.1.8 Deployment
2.3.1.1.9 Monitoring
2.3.2 Understand sources of ML models (for example, open source pre-trained models, training custom models)
2.3.2.1 Open Source Pre-Trained Models
2.3.2.2 Training Custom Models
2.3.2.3 Hybrid Approaches / Transfer Learning
2.3.2.4 AWS-Specific Resources and Options
2.3.3 Describe methods to use a model in production (for example, managed API service, self-hosted API)
2.3.3.1 Managed API Services
2.3.3.2 Self-Hosted API Services
2.3.4 Identify relevant AWS services and features for each stage of an ML pipeline (for example, SageMaker, Amazon SageMaker Data Wrangler, Amazon SageMaker Feature Store, Amazon SageMaker Model Monitor)
2.3.4.1 Data Collection & Ingestion
2.3.4.1.1 Amazon S3
2.3.4.1.2 AWS Glue / AWS Data Pipeline
2.3.4.2 Data Preparation & Exploration
2.3.4.2.1 Amazon SageMaker Data Wrangler
2.3.4.2.2 Amazon SageMaker Studio Notebooks
2.3.4.3 Feature Engineering & Feature Store
2.3.4.3.1 Amazon SageMaker Feature Store
2.3.4.4 Model Training & Tuning
2.3.4.4.1 Amazon SageMaker Training
2.3.4.5 Model Monitoring & Governance
2.3.4.5.1 Amazon SageMaker Model Monitor
2.3.4.5.2 Amazon SageMaker Model Cards
2.3.4.6 Additional Supporting Services
2.3.5 Understand fundamental concepts of ML operations (MLOps) (for example, experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training)
2.3.5.1 Experimentation
2.3.5.2 Repeatable Processes
2.3.5.3 Scalable Systems
2.3.5.4 Managing Technical Debt
2.3.5.5 Achieving Production Readiness
2.3.5.6 Model Monitoring
2.3.5.7 Model Re-Training
2.3.6 Understand model performance metrics (for example, accuracy, Area Under the ROC Curve [AUC], F1 score) and business metrics (for example, cost per user, development costs, customer feedback, return on investment [ROI]) to evaluate ML models
2.3.6.1 Model performance metrics
2.3.6.1.1 Accuracy
2.3.6.1.2 F1 Score, Precision, Recall
2.3.6.1.3 Area Under the ROC Curve (AUC)
2.3.6.2 Business Metrics to Evaluate ML Models
2.3.6.2.1 Cost per User/Inference
2.3.6.2.2 Development Costs
2.3.6.2.3 Customer Feedback
2.3.6.2.4 Return on Investment (ROI)

3 Fundamentals of Generative AI
3.1 Explain the basic concepts of generative AI
3.1.1 Understand foundational generative AI concepts (for example, tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models, multi-modal models, diffusion models)
3.1.1.1 Tokens
3.1.1.2 Chunking
3.1.1.3 Embeddings
3.1.1.4 Prompt Engineering
3.1.1.5 Transformer-Based LLMs (Large Language Models)
3.1.1.6 Foundation Models
3.1.1.7 Multi-Modal Models
3.1.1.8 Diffusion Models
3.1.2 Identify potential use cases for generative AI models (for example, image, video, and audio generation; summarization; chatbots; translation; code generation; customer service agents; search; recommendation engines)
3.1.2.1 Image Generation
3.1.2.2 Video Generation
3.1.2.3 Audio Generation
3.1.2.4 Summarization
3.1.2.5 Chatbots and Conversational Agents
3.1.2.6 Translation
3.1.2.7 Code Generation
3.1.2.8 Customer Service Agents
3.1.2.9 Search
3.1.2.10 Recommendation Engines
3.1.3 Describe the foundation model lifecycle (for example, data selection, model selection, pre-training, fine-tuning, evaluation, deployment, feedback)
3.1.3.1 Data Selection (Data Collection & Preparation)
3.1.3.2 Model Selection
3.1.3.3 Pre-Training
3.1.3.4 Fine-Tuning (Domain Adaptation)
3.1.3.5 Evaluation
3.1.3.6 Deployment
3.1.3.7 Feedback (Monitoring and Continuous Improvement)
3.2 Understand the capabilities and limitations of generative AI for solving business problems
3.2.1 Describe the advantages of generative AI (for example, adaptability, responsiveness, simplicity)
3.2.1.1 Adaptability
3.2.1.2 Responsiveness
3.2.1.3 Simplicity
3.2.1.4 Creativity and Innovation
3.2.1.5 Scalability and Cost-Efficiency
3.2.1.6 Domain Adaptation
3.2.2 Identify disadvantages of generative AI solutions (for example, hallucinations, interpretability, inaccuracy, nondeterminism)
3.2.2.1 Hallucinations
3.2.2.2 Interpretability Issues
3.2.2.3 Inaccuracy
3.2.2.4 Nondeterminism
3.2.3 Understand various factors to select appropriate generative AI models (for example, model types, performance requirements, capabilities, constraints, compliance)
3.2.3.1 Model Types and Architectures
3.2.3.2 Performance Requirements
3.2.3.3 Capabilities and Functionalities
3.2.3.4 Constraints and Resource Considerations
3.2.3.5 Compliance, Security, and Regulatory Constraints
3.2.3.6 Integration and Interoperability
3.2.4 Determine business value and metrics for generative AI applications (for example, cross-domain performance, efficiency, conversion rate, average revenue per user, accuracy, customer lifetime value)
3.2.4.1 Cross-Domain Performance
3.2.4.2 Efficiency
3.2.4.3 Conversion Rate
3.2.4.4 Average Revenue Per User (ARPU)
3.2.4.5 Accuracy
3.2.4.6 Customer Lifetime Value (CLV)
3.2.4.7 Integration with Business Processes
3.3 Describe AWS infrastructure and technologies for building generative AI applications
3.3.1 Identify AWS services and features to develop generative AI applications (for example, Amazon SageMaker JumpStart; Amazon Bedrock; PartyRock, an Amazon Bedrock Playground; Amazon Q)
3.3.1.1 Amazon SageMaker JumpStart
3.3.1.2 Amazon Bedrock
3.3.1.3 PartyRock, an Amazon Bedrock Playground
3.3.1.4 Amazon Q (including Amazon Q in QuickSight and Amazon Q Developer)
3.3.2 Describe the advantages of using AWS generative AI services to build applications (for example, accessibility, lower barrier to entry, efficiency, cost-effectiveness, speed to market, ability to meet business objectives)
3.3.2.1 Accessibility and Lower Barrier to Entry
3.3.2.2 Efficiency and Speed-to-Market
3.3.2.3 Cost-Effectiveness
3.3.2.4 Ability to Meet Business Objectives
3.3.3 Understand the benefits of AWS infrastructure for generative AI applications (for example, security, compliance, responsibility, safety)
3.3.3.1 Security
3.3.3.2 Compliance
3.3.3.3 Shared Responsibility & Data Protection
3.3.3.4 Safety
3.3.4 Understand cost tradeoffs of AWS generative AI services (for example, responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models)
3.3.4.1 Responsiveness vs. Latency
3.3.4.2 Availability and Redundancy
3.3.4.3 Regional Coverage
3.3.4.4 Performance Tradeoffs
3.3.4.5 Custom Models vs. Pre-trained Models
3.3.4.6 Token-Based Pricing

4 Applications of Foundation Models
4.1 Describe design considerations for applications that use foundation models.
4.1.1 Identify selection criteria to choose pre-trained models (for example, cost, modality, latency, multi-lingual, model size, model complexity, customization, input/output length)
4.1.1.1 Cost
4.1.1.2 Modality
4.1.1.3 Latency
4.1.1.4 Multi-Lingual Support
4.1.1.5 Model Size
4.1.1.6 Model Complexity
4.1.1.7 Customization / Fine-Tuning Capabilities
4.1.1.8 Input/Output (I/O) Length
4.1.2 Understand the effect of inference parameters on model responses (for example, temperature, input/output length)
4.1.2.1 Temperature Parameter
4.1.2.2 Input/Output Length (Token Limit)

4.1.2.3 Other Considerations
4.1.3 Define Retrieval Augmented Generation (RAG) and describe its business applications (for example, Amazon Bedrock, knowledge base)
4.1.4 Identify AWS services that help store embeddings within vector databases (for example, Amazon OpenSearch Service, Amazon Aurora, Amazon Neptune, Amazon DocumentDB [with MongoDB compatibility], Amazon RDS for PostgreSQL).
4.1.4.1 Embeddings
4.1.4.2 Vector Databases
4.1.4.3 AWS Services for Storing Embeddings in Vector Databases
4.1.4.3.1 Amazon OpenSearch Service
4.1.4.3.2 Amazon Aurora
4.1.4.3.3 Amazon Neptune
4.1.4.3.4 Amazon DocumentDB (with MongoDB Compatibility)
4.1.4.3.5 Amazon RDS for PostgreSQL
4.1.4.3.6 Conclusions
4.1.5 Explain the cost tradeoffs of various approaches to foundation model customization (for example, pre-training, fine-tuning, in-context learning, RAG)
4.1.5.1 Pre-Training
4.1.5.2 Fine-Tuning
4.1.5.3 In-Context Learning
4.1.5.4 Retrieval Augmented Generation (RAG)
4.1.6 Understand the role of agents in multi-step tasks (for example, Agents for Amazon Bedrock)
4.1.6.1 Role of Agents in Multi-Step Tasks
4.1.6.2 Agents for Amazon Bedrock
4.2 Choose effective prompt engineering techniques
4.2.1 Describe the concepts and constructs of prompt engineering (for example, context, instruction, negative prompts, model latent space)
4.2.1.1 Introduction
4.2.1.2 Instruction
4.2.1.3 Negative Prompts (or Negative Instructions)
4.2.1.4 Model Latent Space
4.2.1.5 Applications in Prompt Engineering
4.2.2 Understand techniques for prompt engineering (for example, chain-of-thought, zero-shot, single-shot, few-shot, prompt templates)
4.2.2.1 Zero-Shot Prompting
4.2.2.2 Single-Shot Prompting
4.2.2.3 Few-Shot Prompting
4.2.2.4 Chain-of-Thought Prompting
4.2.2.5 Prompt Templates
4.2.2.6 Additional Considerations (e.g., Adversarial Prompting)
4.2.3 Understand the benefits and best practices for prompt engineering (for example, response quality improvement, experimentation, guardrails, discovery, specificity and concision, using multiple comments)
4.2.3.1 Benefits of Prompt Engineering
4.2.3.2 Best Practices for Prompt Engineering
4.2.3.3 Additional Points for the Exam
4.2.4 Define potential risks and limitations of prompt engineering (for example, exposure, poisoning, hijacking, jailbreaking)
4.2.4.1 Exposure (Data Leakage)
4.2.4.2 Poisoning
4.2.4.3 Hijacking (Prompt Injection)
4.2.4.4 Jailbreaking
4.2.4.5 General Limitations and Inconsistencies
4.2.4.5.1 Dependence on Context
4.2.4.5.2 Non-deterministic Behavior
4.2.4.6 Mitigation Strategies
4.3 Describe the training and fine-tuning process for foundation models
4.3.1 Describe the key elements of training a foundation model (for example, pre-training, fine-tuning, continuous pre-training)
4.3.1.1 Pre-Training
4.3.1.2 Fine-Tuning
4.3.1.3 Continuous Pre-Training
4.3.2 Define methods for fine-tuning a foundation model (for example, instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training)
4.3.2.1 Instruction Tuning
4.3.2.2 Adapting Models for Specific Domains (Domain Adaptation Fine-Tuning)
4.3.2.3 Transfer Learning
4.3.2.4 Continuous Pre-Training
4.3.2.5 Summary
4.3.2.6 Additional Points for Exam
4.3.3 Describe how to prepare data to fine-tune a foundation model (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF])
4.3.3.1 Data Curation
4.3.3.2 Data Governance
4.3.3.3 Data Size and Representativeness
4.3.3.3.1 Size
4.3.3.3.2 Representativeness
4.3.3.3.3 Note for Exam
4.3.3.4 Data Labeling
4.3.3.5 Reinforcement Learning from Human Feedback (RLHF)
4.3.3.6 Summary Example: Fine-Tuning a Medical Chatbot
4.3.3.6.1 Curation
4.3.3.6.2 Governance
4.3.3.6.3 Size & Representativeness
4.3.3.6.4 Labeling
4.3.3.6.5 RLHF
4.4 Describe methods to evaluate foundation model performance
4.4.1 Understand approaches to evaluate foundation model performance (for example, human evaluation, benchmark datasets)
4.4.1.1 Quantitative Evaluation via Benchmark Datasets
4.4.1.2 Human Evaluation
4.4.1.3 Hybrid Approaches
4.4.1.4 Summary Example
4.4.1.4.1 Benchmark Dataset Evaluation
4.4.1.4.2 Human Evaluation
4.4.1.4.3 Hybrid Strategy
4.4.2 Identify relevant metrics to assess foundation model performance (for example, Recall-Oriented Understudy for Gisting Evaluation [ROUGE], Bilingual Evaluation Understudy [BLEU], BERTScore)
4.4.2.1 Bilingual Evaluation Understudy (BLEU), n-grams
4.4.2.2 Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
4.4.2.3 BERTScore
4.4.2.4 Summary of Metric Use Cases
4.4.3 Determine whether a foundation model effectively meets business objectives (for example, productivity, user engagement, task engineering)
4.4.3.1 Define Business Goals & Success Criteria
4.4.3.2 Measure Productivity Improvements
4.4.3.3 Assess User Engagement
4.4.3.4 Evaluate Task Engineering Effectiveness
4.4.3.5 Summary Example
4.4.3.5.1 Productivity:
4.4.3.5.2 User Engagement:
4.4.3.5.3 Task Engineering:

5 Guidelines for Responsible AI
5.1 Explain the development of AI systems that are responsible.
5.1.1 Identify features of responsible AI (for example, bias, fairness, inclusivity, robustness, safety, veracity)
5.1.1.1 Bias
5.1.1.2 Fairness
5.1.1.3 Inclusivity
5.1.1.4 Robustness
5.1.1.5 Safety
5.1.1.6 Veracity
5.1.1.7 Additional Responsible AI Concepts for Exam
5.1.2 Understand how to use tools to identify features of responsible AI (for example, Guardrails for Amazon Bedrock)
5.1.2.1 Guardrails for Amazon Bedrock
5.1.2.2 Amazon SageMaker Clarify
5.1.2.3 Logging and Monitoring Tools (e.g., AWS CloudTrail, Amazon CloudWatch, AWS Config)
5.1.2.4 Human-in-the-Loop (HITL) Approaches
5.1.3 Understand responsible practices to select a model (for example, environmental considerations, sustainability)
5.1.3.1 Evaluate Environmental Impact
5.1.3.2 Assess Infrastructure Efficiency
5.1.3.3 Weigh Model Complexity vs. Performance
5.1.3.4 Long-Term Sustainability Considerations
5.1.3.5 Exam
5.1.4 Identify legal risks of working with generative AI (for example, intellectual property infringement claims, biased model outputs, loss of customer trust, end user risk, hallucinations)
5.1.4.1 Intellectual Property Infringement Claims
5.1.4.2 Biased Model Outputs
5.1.4.3 Loss of Customer Trust
5.1.4.4 End User Risk
5.1.4.5 Hallucinations
5.1.4.6 Plagiarism and Attribution Concerns
5.1.4.7 Regulatory and Compliance Risks
5.1.4.8 Data Privacy Concerns
5.1.5 Identify characteristics of datasets (for example, inclusivity, diversity, curated data sources, balanced datasets)
5.1.5.1 Inclusivity
5.1.5.2 Diversity
5.1.5.3 Curated Data Sources
5.1.5.4 Balanced Datasets
5.1.5.5 Exam
5.1.6 Understand effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting)
5.1.6.1 Bias, Underfitting
5.1.6.2 Variance, Overfitting
5.1.6.3 Tradeoff Between Bias and Variance
5.1.7 Describe tools to detect and monitor bias, trustworthiness, and truthfulness (for example, analyzing label quality, human audits, subgroup analysis, Amazon SageMaker Clarify, SageMaker Model Monitor, Amazon Augmented AI [Amazon A2I])
5.1.7.1 Analyzing Label Quality
5.1.7.2 Subgroup Analysis
5.1.7.3 Human Audits
5.1.7.4 Amazon SageMaker Clarify
5.1.7.5 Amazon SageMaker Model Monitor
5.1.7.6 Amazon Augmented AI (Amazon A2I)
5.2 Recognize the importance of transparent and explainable models
5.2.1 Understand the differences between models that are transparent and explainable and models that are not transparent and explainable
5.2.1.1 Transparent and Explainable Models
5.2.1.2 Non-Transparent (Black-Box) Models
5.2.1.3 Comparison
5.2.2 Understand the tools to identify transparent and explainable models (for example, Amazon SageMaker Model Cards, open source models, data, licensing)
5.2.2.1 Amazon SageMaker Model Cards
5.2.2.2 Open Source Models
5.2.2.3 Data Documentation and Transparency
5.2.2.4 Licensing Considerations
5.2.2.5 Exam
5.2.3 Identify tradeoffs between model safety and transparency (for example, measure interpretability and performance)
5.2.3.1 Interpretability vs. Performance
5.2.3.2 Safety Measures vs. Output Diversity
5.2.3.3 Model Explainability Tools
5.2.3.4 Balancing Safety and Transparency in Deployment
5.2.4 Understand principles of human-centered design for explainable AI
5.2.4.1 User-Centric Approach
5.2.4.2 Transparency
5.2.4.3 Interactivity and Iterative Feedback
5.2.4.4 Interpretability Over Complexity
5.2.4.5 Design for Fairness and Trust
5.2.4.6 Usability and Accessibility
5.2.4.7 Contextual Grounding
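As a concrete illustration of the subgroup analysis idea in 5.1.7.2, the sketch below computes per-group positive-prediction rates and a demographic parity gap over hypothetical model outputs. The data and function names are invented for illustration; tools such as Amazon SageMaker Clarify compute this family of metrics (and many others) automatically.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Per-subgroup rate of positive (1) predictions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += pred
        counts[grp][1] += 1
    return {g: pos / tot for g, (pos, tot) in counts.items()}

def demographic_parity_diff(predictions, groups):
    """Largest gap in positive-prediction rate across subgroups;
    0.0 means every subgroup is approved at the same rate."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two subgroups
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(positive_rates(preds, groups))           # A: 0.75, B: 0.25
print(demographic_parity_diff(preds, groups))  # 0.5
```

A large gap like this would flag the model for a human audit (5.1.7.3) before deployment.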

6 Security, Compliance, and Governance for AI Solutions
6.1 Explain methods to secure AI systems.
6.1.1 Identify AWS services and features to secure AI systems (for example, IAM roles, policies, and permissions; encryption; Amazon Macie; AWS PrivateLink; AWS shared responsibility model)
6.1.1.1 AWS Identity and Access Management (IAM)
6.1.1.2 AWS Shared Responsibility Model
6.1.1.2.1 AWS's Responsibilities
6.1.1.2.2 Customer Responsibilities
6.1.1.3 Encryption
6.1.1.4 AWS PrivateLink
6.1.1.5 Amazon Macie
6.1.1.6 AWS CloudTrail
6.1.1.7 AWS Artifact & Audit Manager
6.1.1.7.1 AWS Artifact
6.1.1.7.2 AWS Audit Manager
6.1.1.7.3 Exam
6.1.2 Understand the concept of source citation and documenting data origins (for example, data lineage, data cataloging, SageMaker Model Cards)
6.1.2.1 Data Lineage
6.1.2.2 Data Cataloging
6.1.2.3 SageMaker Model Cards
6.1.3 Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity)
6.1.3.1 Assessing Data Quality
6.1.3.2 Implementing Privacy-Enhancing Technologies (PETs)
6.1.3.3 Data Access Control
6.1.3.4 Data Integrity
6.1.4 Understand security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit)
6.1.4.1 Application Security
6.1.4.2 Threat Detection and Vulnerability Management
6.1.4.3 Infrastructure Protection
6.1.4.4 Prompt Injection and Input Filtering
6.1.4.5 Encryption at Rest and In Transit
6.2 Recognize governance and compliance regulations for AI systems.
6.2.1 Identify regulatory compliance standards for AI systems (for example, International Organization for Standardization [ISO], System and Organization Controls [SOC], algorithm accountability laws)
6.2.1.1 Importance of Regulatory Compliance
6.2.1.2 International Organization for Standardization (ISO)
6.2.1.3 System and Organization Controls (SOC)
6.2.1.4 Algorithm Accountability Laws
6.2.2 Identify AWS services and features to assist with governance and regulation compliance (for example, AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor)
6.2.2.1 AWS Config
6.2.2.2 Amazon Inspector
6.2.2.3 AWS Audit Manager
6.2.2.4 AWS Artifact
6.2.2.5 AWS CloudTrail
6.2.2.6 AWS Trusted Advisor
6.2.3 Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention)
6.2.3.1 Data Lifecycles
6.2.3.2 Logging
6.2.3.3 Data Residency
6.2.3.4 Monitoring
6.2.3.5 Observation
6.2.3.6 Data Retention
6.2.4 Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements)
6.2.4.1 Policies
6.2.4.2 Review Cadence
6.2.4.3 Review Strategies
6.2.4.4 Adopt Governance Frameworks
6.2.4.4.1 Generative AI Security Scoping Matrix
6.2.4.5 Transparency Standards
6.2.4.6 Mandate Team Training and Awareness
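The data-integrity practice in 6.1.3.4 is commonly implemented with cryptographic checksums: record a digest when data is ingested, then re-verify it before training. Below is a minimal standard-library sketch; it is illustrative only and not a substitute for the managed checksum features of services like Amazon S3.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte payload."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """True if the payload still matches the recorded digest."""
    return sha256_digest(data) == expected_digest

payload = b"training-record-001,age=42,label=1"
recorded = sha256_digest(payload)  # stored alongside the data at ingestion

print(verify_integrity(payload, recorded))         # True: unchanged
print(verify_integrity(payload + b"x", recorded))  # False: tampered
```

The same digest can also feed a data-lineage record (6.1.2.1), tying each model version to the exact bytes it was trained on.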

7 AWS Certified AI Practitioner Examination
7.1 Examination details
7.2 Examination registration

8 Further Reading (for information only; outside the syllabus)
8.1 JupyterLab Notebook
8.2 Creating a JupyterLab Notebook in Amazon SageMaker AI
8.3 Code (with simple explanations)
8.4 Code (with detailed explanations)
8.4.1 Importing Libraries
8.4.2 Enabling Inline Plotting
8.4.3 Data Collection (Synthetic Data)
8.4.4 Displaying the DataFrame
8.4.5 Exploratory Data Analysis (EDA)
8.4.6 Data Pre-processing
8.4.7 Feature Engineering
8.4.8 Model Training
8.4.9 Hyperparameter Tuning
8.4.10 Evaluation
8.4.11 Deployment
8.4.12 Prediction Function
8.4.13 Example Prediction
8.4.14 Monitoring
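The workflow outlined in 8.4 (synthetic data, training, evaluation, prediction) can be condensed into a standard-library-only sketch. It fits a one-variable linear model with the closed-form least-squares formulas rather than the notebook's actual libraries, so treat it as an illustration of the pipeline shape, not the course code itself.

```python
import random

# 8.4.3 Data collection: synthetic data with known slope 2 and intercept 1
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.1) for x in xs]

# 8.4.8 Model training: closed-form simple linear regression
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# 8.4.10 Evaluation: mean squared error on the training data
mse = sum((slope * x + intercept - y) ** 2 for x, y in zip(xs, ys)) / n

# 8.4.12-8.4.13 Prediction function and example prediction
def predict(x):
    return slope * x + intercept

print(round(slope, 2), round(intercept, 2))  # recovered close to 2 and 1
print(predict(5.0))                          # close to 11
```

The notebook version replaces each step with library calls (pandas for the DataFrame, matplotlib for EDA plots, scikit-learn for training and tuning), but the stages map one-to-one onto the 8.4 headings above.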
