




Prompt Engineering

Author: Lee Boonstra

Acknowledgements

Content contributors
Michael Sherman
Yuan Cao
Erick Armbrust
Anant Nawalgaria
Antonio Gulli
Simone Cammel

Curators and Editors
Antonio Gulli
Anant Nawalgaria
Grace Mollison

Technical Writer
Joey Haymaker

Designer
Michael Lanning

February 2025
Table of contents

Introduction
Prompt engineering
LLM output configuration
Output length
Sampling controls
Temperature
Top-K and top-P
Putting it all together
Prompting techniques
General prompting / zero shot
One-shot & few-shot
System, contextual and role prompting
System prompting
Role prompting
Contextual prompting
Step-back prompting
Chain of Thought (CoT)
Self-consistency
Tree of Thoughts (ToT)
ReAct (reason & act)
Automatic Prompt Engineering
Code prompting
Prompts for writing code
Prompts for explaining code
Prompts for translating code
Prompts for debugging and reviewing code
What about multimodal prompting?
Best Practices
Provide examples
Design with simplicity
Be specific about the output
Use Instructions over Constraints
Control the max token length
Use variables in prompts
Experiment with input formats and writing styles
For few-shot prompting with classification tasks, mix up the classes
Adapt to model updates
Experiment with output formats
JSON Repair
Working with Schemas
Experiment together with other prompt engineers
CoT Best practices
Document the various prompt attempts
Summary
Endnotes
Introduction
When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don't need to be a data scientist or a machine learning engineer – everyone can write a prompt. However, crafting the most effective prompt can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's training data, the model configurations, your word choice, style and tone, structure, and context all matter. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model's ability to provide meaningful output.
When you chat with the Gemini chatbot,1 you are basically writing prompts; however, this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you have access to configuration settings such as temperature.

This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you get started and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts.
Prompt engineering
Remember how an LLM works; it's a prediction engine. The model takes sequential text as an input and then predicts what the following token should be, based on the data it was trained on. The LLM is operationalized to do this over and over again, adding the previously predicted token to the end of the sequential text for predicting the following token. The next token prediction is based on the relationship between what's in the previous tokens and what the LLM has seen during its training.

When you write a prompt, you are attempting to set up the LLM to predict the right sequence of tokens. Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate outputs. This process involves tinkering to find the best prompt, optimizing prompt length, and evaluating a prompt's writing style and structure in relation to the task. In the context of natural language processing and LLMs, a prompt is an input provided to the model to generate a response or prediction.

These prompts can be used to achieve various kinds of understanding and generation tasks such as text summarization, information extraction, question answering, text classification, language or code translation, code generation, and code documentation or reasoning.
Please feel free to refer to Google's prompting guides2,3 with simple and effective prompting examples.

When prompt engineering, you will start by choosing a model. Prompts might need to be optimized for your specific model, regardless of whether you use Gemini language models in Vertex AI, GPT, Claude, or an open source model like Gemma or LLaMA.

Besides the prompt, you will also need to tinker with the various configurations of an LLM.
LLM output configuration

Once you choose your model you will need to figure out the model configuration. Most LLMs come with various configuration options that control the LLM's output. Effective prompt engineering requires setting these configurations optimally for your task.

Output length

An important configuration setting is the number of tokens to generate in a response. Generating more tokens requires more computation from the LLM, leading to higher energy consumption, potentially slower response times, and higher costs.

Reducing the output length of the LLM doesn't cause the LLM to become more stylistically or textually succinct in the output it creates; it just causes the LLM to stop predicting more tokens once the limit is reached. If your needs require a short output length, you'll also possibly need to engineer your prompt to accommodate.

Output length restriction is especially important for some LLM prompting techniques, like ReAct, where the LLM will keep emitting useless tokens after the response you want.

Be aware, generating more tokens requires more computation from the LLM, leading to higher energy consumption and potentially slower response times, which leads to higher costs.
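As a rough sketch of where this setting lives in code (assuming the Vertex AI Python SDK; the project ID and model name are placeholders for your own setup), the token limit is part of the generation config, while the prompt itself should also ask for brevity so the answer is genuinely short rather than merely cut off:

```
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

# Placeholder project and location -- replace with your own Google Cloud setup.
vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-pro")

response = model.generate_content(
    # The prompt asks for a short answer; the config enforces a hard cap on tokens.
    "Explain in one short paragraph how an LLM predicts the next token.",
    generation_config=GenerationConfig(max_output_tokens=100, temperature=0.2),
)
print(response.text)
```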
Sampling controls

LLMs do not formally predict a single token. Rather, LLMs predict probabilities for what the next token could be, with each token in the LLM's vocabulary getting a probability. Those token probabilities are then sampled to determine what the next produced token will be.

Temperature, top-K, and top-P are the most common configuration settings that determine how predicted token probabilities are processed to choose a single output token.
Temperature

Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that expect a more deterministic response, while higher temperatures can lead to more diverse or unexpected results. A temperature of 0 (greedy decoding) is deterministic: the highest probability token is always selected (though note that if two tokens have the same highest predicted probability, depending on how tiebreaking is implemented you may not always get the same output with temperature 0).

Temperatures close to the max tend to create more random output. And as temperature gets higher and higher, all tokens become equally likely to be the next predicted token.

The Gemini temperature control can be understood in a similar way to the softmax function used in machine learning. A low temperature setting mirrors a low softmax temperature (T), concentrating probability on a single, preferred token with high certainty. A higher Gemini temperature setting is like a high softmax temperature, making a wider range of tokens acceptable candidates. This increased uncertainty accommodates scenarios where a rigid, precise response is not essential, for example when experimenting with creative outputs.
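To make the softmax analogy concrete, here is a small, self-contained sketch (plain NumPy, not the actual Gemini implementation) showing how a temperature value reshapes the probabilities assigned to three hypothetical candidate tokens:

```
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw token scores into probabilities, scaled by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exps / exps.sum()

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

print(softmax_with_temperature(logits, 0.1))   # ~[1.00, 0.00, 0.00] -- near-greedy
print(softmax_with_temperature(logits, 1.0))   # ~[0.63, 0.23, 0.14]
print(softmax_with_temperature(logits, 10.0))  # ~[0.36, 0.33, 0.31] -- close to uniform
```

Lower temperatures sharpen the distribution toward the single most likely token; higher temperatures flatten it toward an even spread over all candidates.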
Top-K and top-P

Top-K and top-P (also known as nucleus sampling)4 are two sampling settings used in LLMs to restrict the predicted next token to come from tokens with the top predicted probabilities. Like temperature, these sampling settings control the randomness and diversity of generated text.

Top-K sampling selects the top K most likely tokens from the model's predicted distribution. The higher top-K, the more creative and varied the model's output; the lower top-K, the more restrictive and factual the model's output. A top-K of 1 is equivalent to greedy decoding.

Top-P sampling selects the top tokens whose cumulative probability does not exceed a certain value (P). Values for P range from 0 (greedy decoding) to 1 (all tokens in the LLM's vocabulary).

The best way to choose between top-K and top-P is to experiment with both methods (or both together) and see which one produces the results you are looking for.
Putting it all together

Choosing between top-K, top-P, temperature, and the number of tokens to generate depends on the specific application and desired outcome, and the settings all impact one another. It's also important to make sure you understand how your chosen model combines the different sampling settings together.

If temperature, top-K, and top-P are all available (as in Vertex Studio), tokens that meet both the top-K and top-P criteria are candidates for the next predicted token, and then temperature is applied to sample from the tokens that passed the top-K and top-P criteria. If only top-K or top-P is available, the behavior is the same but only the one top-K or P setting is used.

If temperature is not available, whatever tokens meet the top-K and/or top-P criteria are then randomly selected from to produce a single next predicted token.

At extreme settings of one sampling configuration value, that one sampling setting either cancels out other configuration settings or becomes irrelevant.

If you set temperature to 0, top-K and top-P become irrelevant – the most probable token becomes the next token predicted. If you set temperature extremely high (above 1 – generally into the 10s), temperature becomes irrelevant and whatever tokens make it through the top-K and/or top-P criteria are then randomly sampled to choose a next predicted token.

If you set top-K to 1, temperature and top-P become irrelevant. Only one token passes the top-K criteria, and that token is the next predicted token. If you set top-K extremely high, like to the size of the LLM's vocabulary, any token with a nonzero probability of being the next token will meet the top-K criteria and none are selected out.

If you set top-P to 0 (or a very small value), most LLM sampling implementations will then only consider the most probable token to meet the top-P criteria, making temperature and top-K irrelevant. If you set top-P to 1, any token with a nonzero probability of being the next token will meet the top-P criteria, and none are selected out.
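The following sketch illustrates one common way these settings can interact (plain NumPy; the exact order of operations differs between implementations, so treat this as illustrative rather than a description of any specific model): top-K and top-P prune the candidate set, and temperature then controls how the surviving candidates are sampled.

```
import numpy as np

def sample_next_token(probs, top_k, top_p, temperature, seed=0):
    """Illustrative sampling: restrict to top-K, then top-P, then sample with temperature."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]        # token ids, most probable first

    candidates = order[:top_k]             # top-K: keep the K most likely tokens

    cumulative = np.cumsum(probs[candidates])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    candidates = candidates[:cutoff]       # top-P: smallest prefix reaching probability P

    # Temperature: re-weight the survivors and sample one token id.
    weights = np.exp(np.log(probs[candidates]) / temperature)
    weights /= weights.sum()
    return int(np.random.default_rng(seed).choice(candidates, p=weights))

vocab_probs = [0.40, 0.25, 0.15, 0.10, 0.05, 0.05]  # toy distribution over a 6-token vocabulary
print(sample_next_token(vocab_probs, top_k=3, top_p=0.90, temperature=0.2))
```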
As a general starting point, a temperature of .2, top-P of .95, and top-K of 30 will give you relatively coherent results that can be creative but not excessively so. If you want especially creative results, try starting with a temperature of .9, top-P of .99, and top-K of 40. And if you want less creative results, try starting with a temperature of .1, top-P of .9, and top-K of 20.

Finally, if your task always has a single correct answer (e.g., answering a math problem), start with a temperature of 0.
NOTE: With more freedom (higher temperature, top-K, top-P, and output tokens), the LLM might generate text that is less relevant.
WARNING: Have you ever seen a response ending with a large amount of filler words? This is also known as the "repetition loop bug", which is a common issue in large language models where the model gets stuck in a cycle, repeatedly generating the same (filler) word, phrase, or sentence structure, often exacerbated by inappropriate temperature and top-K/top-P settings. This can occur at both low and high temperature settings, though for different reasons. At low temperatures, the model becomes overly deterministic, sticking rigidly to the highest probability path, which can lead to a loop if that path revisits previously generated text. Conversely, at high temperatures, the model's output becomes excessively random, increasing the probability that a randomly chosen word or phrase will, by chance, lead back to a prior state, creating a loop due to the vast number of available options. In both cases, the model's sampling process gets "stuck," resulting in monotonous and unhelpful output until the output window is filled. Solving this often requires careful tinkering with temperature and top-K/top-P values to find the optimal balance between determinism and randomness.
Prompting techniques

LLMs are tuned to follow instructions and are trained on large amounts of data so they can understand a prompt and generate an answer. But LLMs aren't perfect; the clearer your prompt text, the better it is for the LLM to predict the next likely text. Additionally, specific techniques that take advantage of how LLMs are trained and how LLMs work will help you get relevant results from LLMs.

Now that we understand what prompt engineering is and what it takes, let's dive into some examples of the most important prompting techniques.
General prompting / zero shot

A zero-shot5 prompt is the simplest type of prompt. It only provides a description of a task and some text for the LLM to get started with. This input could be anything: a question, a start of a story, or instructions. The name zero-shot stands for 'no examples'.

Let's use Vertex AI Studio (for Language) in Vertex AI,6 which provides a playground to test prompts. In Table 1, you will see an example zero-shot prompt to classify movie reviews.

The table format as used below is a great way of documenting prompts. Your prompts will likely go through many iterations before they end up in a codebase, so it's important to keep track of your prompt engineering work in a disciplined, structured way. More on this table format, the importance of tracking prompt engineering work, and the prompt development process is in the Best Practices section later in this chapter ("Document the various prompt attempts").

The model temperature should be set to a low number, since no creativity is needed, and we use the gemini-pro default top-K and top-P values, which effectively disable both settings (see 'LLM Output Configuration' above). Pay attention to the generated output. The words disturbing and masterpiece should make the prediction a little more complicated, as both words are used in the same sentence.
Name: 1_1_movie_classification
Goal: Classify movie reviews as positive, neutral or negative.
Model: gemini-pro
Temperature: 0.1    Token Limit: 5
Top-K: N/A    Top-P: 1
Prompt:
Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. I wish there were more movies like this masterpiece.
Sentiment:
Output: POSITIVE

Table 1. An example of zero-shot prompting
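A minimal sketch of sending the Table 1 prompt through the Vertex AI Python SDK might look like this (assuming the SDK has already been initialized for your project, as in the earlier snippet; the expected reply is the single label POSITIVE):

```
from vertexai.generative_models import GenerationConfig, GenerativeModel

prompt = (
    "Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.\n"
    'Review: "Her" is a disturbing study revealing the direction humanity '
    "is headed if AI is allowed to keep evolving, unchecked. I wish there "
    "were more movies like this masterpiece.\n"
    "Sentiment:"
)

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    prompt,
    # Settings from Table 1: low temperature, a 5-token limit, default top-P.
    generation_config=GenerationConfig(temperature=0.1, max_output_tokens=5, top_p=1.0),
)
print(response.text)  # expected: POSITIVE
```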
When zero-shot doesn't work, you can provide demonstrations or examples in the prompt, which leads to "one-shot" and "few-shot" prompting.
One-shot & few-shot

When creating prompts for AI models, it is helpful to provide examples. These examples can help the model understand what you are asking for. Examples are especially useful when you want to steer the model to a certain output structure or pattern.

A one-shot prompt provides a single example, hence the name one-shot. The idea is the model has an example it can imitate to best complete the task.

A few-shot prompt7 provides multiple examples to the model. This approach shows the model a pattern that it needs to follow. The idea is similar to one-shot, but multiple examples of the desired pattern increase the chance the model follows the pattern.

The number of examples you need for few-shot prompting depends on a few factors, including the complexity of the task, the quality of the examples, and the capabilities of the generative AI (gen AI) model you are using. As a general rule of thumb, you should use at least three to five examples for few-shot prompting. However, you may need to use more examples for more complex tasks, or you may need to use fewer due to the input length limitation of your model.

Table 2 shows a few-shot prompt example; let's use the same gemini-pro model configuration settings as before, other than increasing the token limit to accommodate the need for a longer response.
Goal: Parse pizza orders to JSON
Model: gemini-pro
Temperature: 0.1    Token Limit: 250
Top-K: N/A    Top-P: 1
Prompt:
Parse a customer's pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
```
{
"size": "small",
"type": "normal",
"ingredients": [["cheese", "tomato sauce", "pepperoni"]]
}
```

EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella
JSON Response:
```
{
"size": "large",
"type": "normal",
"ingredients": [["tomato sauce", "basil", "mozzarella"]]
}
```

Now, I would like a large pizza, with the first half cheese and mozzarella. And the other tomato sauce, ham and pineapple.
JSON Response:
Output:
```
{
"size": "large",
"type": "half-half",
"ingredients": [["cheese", "mozzarella"], ["tomato sauce", "ham", "pineapple"]]
}
```

Table 2. An example of few-shot prompting
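Few-shot prompts like the one in Table 2 are easier to maintain when the examples live in a data structure and the prompt text is assembled from them. The sketch below is illustrative (the helper and the example list are hypothetical, not part of any SDK); the assembled text matches the structure of the prompt above:

```
import json

FENCE = "`" * 3  # Markdown code fence placed around each JSON example

# (customer order, expected JSON) pairs -- the "shots" shown to the model.
EXAMPLES = [
    ("I want a small pizza with cheese, tomato sauce, and pepperoni.",
     {"size": "small", "type": "normal",
      "ingredients": [["cheese", "tomato sauce", "pepperoni"]]}),
    ("Can I get a large pizza with tomato sauce, basil and mozzarella",
     {"size": "large", "type": "normal",
      "ingredients": [["tomato sauce", "basil", "mozzarella"]]}),
]

def build_few_shot_prompt(order):
    """Assemble the pizza-parsing prompt from the examples plus the new order."""
    parts = ["Parse a customer's pizza order into valid JSON:\n"]
    for text, expected in EXAMPLES:
        parts.append("EXAMPLE:\n" + text + "\nJSON Response:\n" + FENCE + "\n"
                     + json.dumps(expected, indent=2) + "\n" + FENCE + "\n")
    parts.append("Now, " + order + "\nJSON Response:\n")
    return "\n".join(parts)

print(build_few_shot_prompt(
    "I would like a large pizza, with the first half cheese and mozzarella. "
    "And the other tomato sauce, ham and pineapple."))
```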
When you choose examples for your prompt, use examples that are relevant to the task you want to perform. The examples should be diverse, of high quality, and well written. One small mistake can confuse the model and will result in undesired output.

If you are trying to generate output that is robust to a variety of inputs, then it is important to include edge cases in your examples. Edge cases are inputs that are unusual or unexpected, but that the model should still be able to handle.
System, contextual and role prompting

System, contextual and role prompting are all techniques used to guide how LLMs generate text, but they focus on different aspects:

- System prompting sets the overall context and purpose for the language model. It defines the 'big picture' of what the model should be doing, like translating a language, classifying a review etc.
- Contextual prompting provides specific details or background information relevant to the current conversation or task. It helps the model to understand the nuances of what's being asked and tailor the response accordingly.
- Role prompting assigns a specific character or identity for the language model to adopt. This helps the model generate responses that are consistent with the assigned role and its associated knowledge and behavior.

There can be considerable overlap between system, contextual, and role prompting. E.g. a prompt that assigns a role to the system can also have a context.

However, each type of prompt serves a slightly different primary purpose:

- System prompt: Defines the model's fundamental capabilities and overarching purpose.
- Contextual prompt: Provides immediate, task-specific information to guide the response. It's highly specific to the current task or input, which is dynamic.
- Role prompt: Frames the model's output style and voice. It adds a layer of specificity and personality.

Distinguishing between system, contextual, and role prompts provides a framework for designing prompts with clear intent, allowing for flexible combinations and making it easier to analyze how each prompt type influences the language model's output.

Let's dive into these three different kinds of prompts.
System prompting

Table 3 contains a system prompt, where I specify additional information on how to return the output. I increased the temperature to get a higher creativity level, and I specified a higher token limit. However, because of my clear instruction on how to return the output the model didn't return extra text.
Goal: Classify movie reviews as positive, neutral or negative.
Model: gemini-pro
Temperature: 1    Token Limit: 5
Top-K: 40    Top-P: 0.8
Prompt:
Classify movie reviews as positive, neutral or negative. Only return the label in uppercase.
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. It's so disturbing I couldn't watch it.
Sentiment:
Output: NEGATIVE

Table 3. An example of system prompting
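In code, it helps to keep the system-style instruction separate from the content it operates on and join them when the prompt is built. The sketch below is a plain-text illustration with hypothetical names (some SDKs also expose a dedicated system-instruction parameter, which serves the same purpose):

```
# Fixed system-style instruction: defines the task and the required output format.
SYSTEM_PROMPT = (
    "Classify movie reviews as positive, neutral or negative. "
    "Only return the label in uppercase."
)

def classify_review_prompt(review):
    """Combine the fixed instruction with the per-request review text."""
    return SYSTEM_PROMPT + "\nReview: " + review + "\nSentiment:"

prompt = classify_review_prompt(
    '"Her" is a disturbing study revealing the direction humanity is headed '
    "if AI is allowed to keep evolving, unchecked. It's so disturbing I "
    "couldn't watch it."
)
print(prompt)  # send this to the model with a small token limit; expected reply: NEGATIVE
```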
System prompts can be useful for generating output that meets specific requirements. The name 'system prompt' actually stands for 'providing an additional task to the system'. For example, you could use a system prompt to generate a code snippet that is compatible with a specific programming language, or you could use a system prompt to return a certain structure. Have a look into Table 4, where I return the output in JSON format.
Goal: Classify movie reviews as positive, neutral or negative, return JSON.
Model: gemini-pro
Temperature: 1    Token Limit: 1024
Top-K: 40    Top-P: 0.8
Prompt:
Classify movie reviews as positive, neutral or negative. Return valid JSON:
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. It's so disturbing I couldn't watch it.
Schema:
```
MOVIE:
{
"sentiment": String "POSITIVE" | "NEGATIVE" | "NEUTRAL",
"name": String
}
MOVIE REVIEWS:
{
"movie_reviews": [MOVIE]
}
```
JSON Response:
Output:
```
{
"movie_reviews": [
{
"sentiment": "NEGATIVE",
"name": "Her"
}
]
}
```

Table 4. An example of system prompting with JSON format
There are some benefits in returning JSON objects from a prompt that extracts data. In a real-world application I don't need to manually create this JSON format, I can already return the data in a sorted order (very handy when working with datetime objects), but most importantly, prompting for a JSON format forces the model to create a structure and limits hallucinations.
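The model's reply is still plain text, so in an application you would parse it and check it against the expected structure before trusting it. A minimal sketch (the expected keys follow the schema from Table 4; response_text stands in for whatever the model actually returned):

```
import json

ALLOWED_SENTIMENTS = {"POSITIVE", "NEGATIVE", "NEUTRAL"}

def parse_movie_reviews(response_text):
    """Parse the model's JSON reply and verify it matches the Table 4 schema."""
    text = response_text.strip()
    if text.startswith("`" * 3):                     # tolerate Markdown code fences
        text = text.strip("`").removeprefix("json").strip()
    data = json.loads(text)
    reviews = data["movie_reviews"]
    for review in reviews:
        if review["sentiment"] not in ALLOWED_SENTIMENTS:
            raise ValueError("Unexpected sentiment: " + str(review["sentiment"]))
        if not isinstance(review["name"], str):
            raise ValueError("Review name must be a string")
    return reviews

print(parse_movie_reviews('{"movie_reviews": [{"sentiment": "NEGATIVE", "name": "Her"}]}'))
```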
System prompts can also be really useful for safety and toxicity. To control the output, simply add an additional line to your prompt like: 'You should be respectful in your answer.'.
Role prompting

Role prompting is a technique in prompt engineering that involves assigning a specific role to the gen AI model. This can help the model to generate more relevant and informative output, as the model can craft its responses to the specific role that it has been assigned.

For example, you could role prompt a gen AI model to be a book editor, a kindergarten teacher, or a motivational speaker. Once the model has been assigned a role, you can then give it prompts that are specific to that role. For example, you could prompt a teacher to create a lesson plan that they can then review. Have a look into Table 5, where the model takes the role of a travel guide.
Goal: Act as travel guide and provide 3 travel suggestions
Model: gemini-pro
Temperature: 1    Token Limit: 1024
Top-K: 40    Top-P: 0.8
Prompt:
I want you to act as a travel guide. I will write to you about my location and you will suggest 3 places to visit near me. In some cases, I will also give you the type of places I will visit.
My suggestion: "I am in Amsterdam and I want to visit only museums."
Travel Suggestions:
Output:
1. Rijksmuseum: World-renowned museum, known for Dutch masterpieces like Rembrandt's "The Night Watch." Don't miss the incredible collection of paintings and artifacts showcasing Dutch art and history.
2. Van Gogh Museum: Immerse yourself in the captivating works of Vincent van Gogh. The museum houses the largest collection of his paintings and drawings, including "The Starry Night" and "Sunflowers."
3. Stedelijk Museum Amsterdam: Discover modern and contemporary art from around the world. The museum's collection includes works by Picasso, Kandinsky, and Rietveld, housed in a striking modern building.

Table 5. An example of role prompting
The above example shows an example of taking the role of a travel agent. If you change the role to a teacher of geography, you will notice that you receive a different response.

Defining a role perspective for an AI model gives it a blueprint of the tone, style, and focused expertise you're looking for to improve the quality, relevance, and effectiveness of your output.

Here are some styles you can choose from which I find effective:

Confrontational, Descriptive, Direct, Formal, Humorous, Influential, Informal, Inspirational, Persuasive

Let's change our prompt in Table 6 to include a humorous and inspirational style.
Prompt:
I want you to act as a travel guide. I will write to you about my location and you will suggest 3 places to visit near me in a humorous style.
My suggestion: "I am in Manhattan."
Travel Suggestions:
Output:
1. Behold the Empire State of Mind: Ascend to the dizzying heights of the Empire State Building and bask in the glory of Manhattan's skyline. Prepare to feel like King Kong atop the Big Apple, minus the giant ape-sized banana.
2. Get Artsy-Fartsy at MoMA: Unleash your inner art aficionado at the Museum of Modern Art (MoMA). Gaze upon masterpieces that will boggle your mind and make you question whether your stick-figure drawings have any artistic merit.
3. Shop 'Til You Drop on Fifth Avenue: Indulge in some retail therapy on the iconic Fifth Avenue. Brace yourself for sticker shock as you window-shop (or actually shop) at designer boutiques that will make your wallet cry. But hey, you're in Manhattan, so you might as well embrace the fabulousness!

Table 6. An example of role prompting with a humorous tone and style
Contextual prompting

By providing contextual prompts, you can help ensure that your AI interactions are as seamless and efficient as possible. The model will be able to more quickly understand your request and be able to generate more accurate and relevant responses, as you can see in the example of Table 7.
Goal: Suggest articles for a blog about retro games
Model: gemini-pro
Temperature: 1    Token Limit: 1024
Top-K: 40    Top-P: 0.8
Prompt:
Context: You are writing for a blog about retro 80's arcade video games.
Suggest 3 topics to write an article about with a few lines of description of what this article should contain.
Output:
1. **The Evolution of Arcade Cabinet Design:**
This article would explore the evolution of arcade cabinet designs, from the early wood and metal cabinets of the 1970s to the sleek, neon-lit designs of the 1980s and beyond.
2. **Blast From The Past: Iconic Arcade Games of The 80's:**
This article would feature a glimpse into the history, impact, and legacy of some of the most iconic and influential arcade games of the 1980s, highlighting their innovations, popular mechanics, and enduring charm.
3. **The Rise and Retro Revival of Pixel Art:**
This article would delve into the evolution of pixel art as a defining visual style of early arcade games, tracing its roots, exploring its techniques, and discussing the resurgence of pixel art in modern games and digital art.

Table 7. An example of contextual prompting
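In code, the context is typically a separate variable that is prepended to the task, so the same task can be reused with different contexts (an illustrative sketch; the helper name is hypothetical):

```
def contextual_prompt(context, task):
    """Prepend background context to the actual task."""
    return "Context: " + context + "\n\n" + task

prompt = contextual_prompt(
    context="You are writing for a blog about retro 80's arcade video games.",
    task=("Suggest 3 topics to write an article about with a few lines of "
          "description of what this article should contain."),
)
print(prompt)  # matches the prompt shown in Table 7
```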