




Data Mining for Transportation (交通數據挖掘技術), Southeast University, China University MOOC: quiz answers, Tests 1-7

Test 1

1. Which one is not a description of data mining?
A. Extraction of interesting patterns or knowledge
B. Exploration and analysis by automatic or semi-automatic means
C. Discovery of meaningful patterns from large quantities of data
D. Appropriate statistical analysis methods to analyze the data collected
Answer: D

2. Which one describes the right process of knowledge discovery?
A. Selection - Preprocessing - Transformation - Data mining - Interpretation/Evaluation
B. Preprocessing - Transformation - Data mining - Selection - Interpretation/Evaluation
C. Data mining - Selection - Interpretation/Evaluation - Preprocessing - Transformation
D. Transformation - Data mining - Selection - Preprocessing - Interpretation/Evaluation
Answer: A

3. Which one does not belong to the process of KDD?
A. Data mining
B. Data description
C. Data cleaning
D. Data selection
Answer: B

4. Which one is not a right alternative name for data mining?
A. Knowledge extraction
B. Data archeology
C. Data dredging
D. Data harvesting
Answer: D

5. Which one is not a nominal variable?
A. Occupation
B. Education
C. Age
D. Color
Answer: C

6. Which one is wrong about classification and regression?
A. Regression analysis is a statistical methodology that is most often used for numeric prediction.
B. We can construct classification models (functions) without some training examples.
C. Classification predicts categorical (discrete, unordered) labels.
D. Regression models predict continuous-valued functions.
Answer: B

7. Which one is wrong about clustering and outliers?
A. Clustering belongs to supervised learning.
B. Principles of clustering include maximizing intra-class similarity and minimizing inter-class similarity.
C. Outlier analysis can be useful in fraud detection and rare-event analysis.
D. An outlier is a data object that does not comply with the general behavior of the data.
Answer: A

8. About data processes, which one is wrong?
A. When making data discrimination, we compare the target class with one or a set of comparative classes (the contrasting classes).
B. When making data classification, we predict categorical labels excluding unordered ones.
C. When making data characterization, we summarize the data of the class under study (the target class) in general terms.
D. When making data clustering, we group data to form new categories.
Answer: B

9. Outlier mining such as the density-based method belongs to supervised learning. (True/False)
Answer: False

10. Support vector machines can be used for classification and regression. (True/False)
Answer: True
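For reference, a minimal scikit-learn sketch of the point behind Test 1, Question 10: the same support vector machinery handles classification (SVC) and regression (SVR). The toy arrays and the rbf kernel choice are illustrative assumptions, not course material.

import numpy as np
from sklearn.svm import SVC, SVR

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y_class = np.array([0, 0, 0, 1, 1, 1])              # categorical labels -> classification
y_value = np.array([0.1, 0.9, 2.1, 2.9, 4.2, 5.1])  # continuous values -> regression

clf = SVC(kernel="rbf").fit(X, y_class)   # support vector classification
reg = SVR(kernel="rbf").fit(X, y_value)   # support vector regression

print(clf.predict([[2.5]]))  # predicted class label
print(reg.predict([[2.5]]))  # predicted continuous value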
Test 2

1. Which is not a reason we need to preprocess the data?
A. To save time
B. To make the result meet our hypothesis
C. To avoid unreliable output
D. To eliminate noise
Answer: B

2. Which is not one of the major tasks in data preprocessing?
A. Cleaning
B. Integration
C. Transition
D. Reduction
Answer: C

3. How is a new feature space constructed by PCA?
A. The new feature space is constructed by choosing the features you think are most important.
B. The new feature space is constructed by normalizing input data.
C. The new feature space is constructed by selecting features randomly.
D. The new feature space is constructed by eliminating the weak components to reduce the size of the data.
Answer: D

4. Which one is wrong about methods for discretization?
A. Histogram analysis and binning are both unsupervised methods.
B. Clustering analysis only belongs to top-down split.
C. Interval merging by chi-square (χ²) analysis can be applied recursively.
D. Decision-tree analysis is entropy-based discretization.
Answer: B

5. Which one is wrong about equal-width (distance) partitioning and equal-depth (frequency) partitioning?
A. Equal-width partitioning is the most straightforward, but outliers may dominate the presentation.
B. Equal-depth partitioning divides the range into N intervals, each containing approximately the same number of samples.
C. The interval of the former is not equal.
D. The number of tuples is the same when using the latter.
Answer: C

6. Which one is a wrong way to normalize data?
A. Min-max normalization
B. Simple scaling
C. Z-score normalization
D. Normalization by decimal scaling
Answer: B

7. Which are the right ways to fill in missing values?
A. Smart mean
B. Probable value
C. Ignore
D. Falsify
Answer: A, B, C

8. Which are the right ways to handle noisy data?
A. Regression
B. Clustering
C. WT (wavelet transform)
D. Manual
Answer: A, B, C, D

9. Which ones are right about wavelet transforms?
A. Wavelet transforms store large fractions of the strongest wavelet coefficients.
B. The DWT decomposes each segment of a time series via the successive use of low-pass and high-pass filtering at appropriate levels.
C. Wavelet transforms can be used for reducing data and smoothing data.
D. Wavelet transforms are applied to pairs of data, resulting in two sets of data of the same length.
Answer: B, C

10. Which are the commonly used ways of sampling?
A. Simple random sample without replacement
B. Simple random sample with replacement
C. Stratified sample
D. Cluster sample
Answer: A, B, C, D

11. Discretization means dividing the range of a continuous attribute into intervals. (True/False)
Answer: True
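The three normalization methods accepted in Test 2, Question 6 can be written in a few lines. This is a minimal NumPy sketch with made-up values; the function names are assumptions for illustration, not from the course.

import numpy as np

def min_max(x, new_min=0.0, new_max=1.0):
    # Linearly maps x into [new_min, new_max].
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

def z_score(x):
    # Centers on the mean and scales by the standard deviation.
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    # Divides by 10^j, where j is the smallest integer with max(|x'|) < 1.
    j = int(np.floor(np.log10(np.abs(x).max()))) + 1
    return x / (10 ** j)

x = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])
print(min_max(x))          # values rescaled into [0, 1]
print(z_score(x))          # zero mean, unit variance
print(decimal_scaling(x))  # [0.02, 0.03, 0.04, 0.06, 0.1]

Normalizing like this before distance-based methods (Test 3) keeps attributes with large ranges from dominating the distance.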
Test 3

1. What is the difference between an eager learner and a lazy learner?
A. Eager learners would generate a model for classification while lazy learners would not.
B. Eager learners classify the tuple based on its similarity to the stored training tuples while lazy learners do not.
C. Eager learners simply store data (or do only a little minor processing) while lazy learners do not.
D. Lazy learners would generate a model for classification while eager learners would not.
Answer: A

2. How to choose the optimal value for K?
A. Cross-validation can be used to determine a good value by using an independent dataset to validate the K values.
B. Low values for K (like k=1 or k=2) can be noisy and subject to the effect of outliers.
C. A large k value can reduce the overall noise, so the value for k can be as big as possible.
D. Historically, the optimal K for most datasets has been between 3 and 10.
Answer: A, B, D

3. What are the major components in KNN?
A. How to measure similarity?
B. How to choose k?
C. How are class labels assigned?
D. How to decide the distance?
Answer: A, B, C

4. Which of the following ways can be used to obtain attribute weights for attribute-weighted KNN?
A. Prior knowledge / experience.
B. PCA, FA (factor analysis).
C. Information gain.
D. Gradient descent, simplex methods and genetic algorithms.
Answer: A, B, C, D

5. At the learning stage, KNN would find the K closest neighbors and then decide the class from the K identified nearest labels. (True/False)
Answer: False

6. At the classification stage, KNN would store all instances or some typical ones among them. (True/False)
Answer: False

7. Normalizing the data can solve the problem that different attributes have different value ranges. (True/False)
Answer: True

8. By Euclidean distance or Manhattan distance, we can calculate the distance between two instances. (True/False)
Answer: True

9. Data normalization before measuring distance can avoid errors caused by different dimensions, self-variations, or large numerical differences. (True/False)
Answer: True

10. The way to obtain the regression value for a new instance from the k nearest neighbors is to calculate the average value of the k neighbors. (True/False)
Answer: True

11. The way to obtain the classification for a new instance from the k nearest neighbors is to calculate the majority class of the k neighbors. (True/False)
Answer: True

12. The way to obtain instance weights for distance-weighted KNN is to calculate the reciprocal of the squared distance between the object and its neighbors. (True/False)
Answer: True
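Questions 8, 11 and 12 of Test 3 together describe a complete distance-weighted KNN classifier: Euclidean distance, a weighted majority vote, and instance weights equal to the reciprocal of the squared distance. A minimal NumPy sketch with hypothetical training data follows; the small epsilon guard is an implementation assumption to avoid division by zero when a neighbor coincides with the query.

import numpy as np
from collections import defaultdict

def knn_predict(X_train, y_train, query, k=3):
    # Lazy learning: all work happens at classification time.
    d = np.sqrt(((X_train - query) ** 2).sum(axis=1))   # Euclidean distances
    nearest = np.argsort(d)[:k]                          # k closest training instances
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (d[i] ** 2 + 1e-12)   # weight = 1 / distance^2
    return max(votes, key=votes.get)                     # weighted majority class

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [6.0, 6.0], [5.8, 6.2]])
y_train = np.array(["A", "A", "B", "B"])
print(knn_predict(X_train, y_train, np.array([1.1, 0.9]), k=3))  # -> A

For regression (Question 10), the same k neighbors would simply be averaged instead of voted on.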
Test 4

1. Which descriptions are right about nodes in a decision tree?
A. Internal nodes test the value of particular features
B. Leaf nodes specify the class
C. Branch nodes decide the result
D. Root nodes decide the start point
Answer: A, B

2. Computing information gain for a continuous-valued attribute when using ID3 consists of the following procedure:
A. Sort the values of A in increasing order.
B. Consider the midpoint between each pair of adjacent values as a possible split point.
C. Select the minimum expected information requirement as the split point.
D. Split.
Answer: A, B, C, D

3. Which are the typical algorithms to generate trees?
A. ID3
B. C4.5
C. CART
D. PCA
Answer: A, B, C

4. Which ones are right about underfitting and overfitting?
A. Underfitting means poor accuracy both for training data and unseen samples.
B. Overfitting means high accuracy for training data but poor accuracy for unseen samples.
C. Underfitting implies the model is too simple, so we need to increase the model complexity.
D. Overfitting occurs with too many branches, so we need to decrease the model complexity.
Answer: A, B, C, D

5. Which ones are right about pre-pruning and post-pruning?
A. Both of them are methods to deal with the overfitting problem.
B. Pre-pruning does not split a node if this would result in the goodness measure falling below a threshold.
C. Post-pruning removes branches from a "fully grown" tree.
D. There is no need to choose an appropriate threshold when doing pre-pruning.
Answer: A, B, C

6. Post-pruning in CART consists of the following procedure:
A. First, consider the cost complexity of a tree.
B. Then, for each internal node N, compute the cost complexity of the subtree at N.
C. Also compute the cost complexity of the subtree at N if it were to be pruned.
D. At last, compare the two values. If pruning the subtree at node N would result in a smaller cost complexity, the subtree is pruned; otherwise, the subtree is kept.
Answer: A, B, C, D

7. The cost complexity pruning algorithm used in CART evaluates cost complexity by the number of leaves in the tree and the error rate. (True/False)
Answer: True

8. Gain ratio is used as the attribute selection measure in C4.5, and the formula is GainRatio(A) = Gain(A) / SplitInfo(A). (True/False)
Answer: True

9. A rule is created for each path from the root to a leaf node. (True/False)
Answer: True

10. ID3 uses information gain as its attribute selection measure, and the attribute with the lowest information gain is chosen as the splitting attribute for node N. (True/False)
Answer: False

Test 5

1. What are the features of SVM?
A. Extremely slow, but highly accurate.
B. Much less prone to overfitting than other methods.
C. Black box model.
D. Provide a compact description of the learned model.
Answer: A, B, D

2. Which are the typical common kernels?
A. Linear
B. Polynomial
C. Radial basis function (Gaussian kernel)
D. Sigmoid kernel
Answer: A, B, C, D

3. What adaptations can be made to allow SVM to deal with the multiclass classification problem?
A. One versus rest (OVR).
B. One versus one (OVO).
C. Error correcting input codes (ECIC).
D. Error correcting output codes (ECOC).
Answer: A, B, D

4. What is the problem with OVR?
A. Sensitive to the accuracy of the confidence figures produced by the classifiers.
B. The scale of the confidence values may differ between the binary classifiers.
C. The binary classification learners see unbalanced distributions.
D. Only when the class distribution is balanced can balanced distributions be attained.
Answer: A, B, C

5. Which ones are right about the advantages of SVM?
A. They are accurate in high-dimensional spaces.
B. They are memory efficient.
C. The algorithm is not prone to overfitting compared to other classification methods.
D. The support vectors are the essential or critical training tuples.
Answer: A, B, C, D

6. The kernel trick is used to avoid costly computation and deal with mapping problems. (True/False)
Answer: True

7. There is no structured way and no golden rule for setting the parameters in SVM. (True/False)
Answer: True

8. Error correcting output codes (ECOC) is a kind of problem transformation technique. (True/False)
Answer: False

9. Regression formulas include three types: linear, nonlinear and general form. (True/False)
Answer: True

10. If you have a big dataset, SVM is suitable for efficient computation. (True/False)
Answer: False
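Test 4 leans on the entropy-based measures behind ID3 and C4.5: Info(D), Gain(A) = Info(D) - Info_A(D), and GainRatio(A) = Gain(A) / SplitInfo(A) from Question 8, with ID3 picking the attribute with the highest gain (Question 10). Below is a minimal sketch on a made-up attribute/label pair; the toy columns and function names are assumptions for illustration only.

import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_and_ratio(attribute, labels):
    info_d = entropy(labels)                 # Info(D): entropy before the split
    n = len(labels)
    info_a, split_info = 0.0, 0.0
    for value in set(attribute):
        subset = [lab for a, lab in zip(attribute, labels) if a == value]
        w = len(subset) / n
        info_a += w * entropy(subset)        # Info_A(D): weighted entropy after splitting on A
        split_info -= w * np.log2(w)         # SplitInfo(A): penalizes many-valued attributes
    gain = info_d - info_a                   # Gain(A), used by ID3
    ratio = gain / split_info if split_info > 0 else 0.0   # GainRatio(A), used by C4.5
    return gain, ratio

weather = ["sunny", "sunny", "rain", "rain", "overcast", "overcast"]
play    = ["no",    "no",    "yes",  "no",   "yes",      "yes"]
print(gain_and_ratio(weather, play))   # roughly (0.667, 0.421)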
Test 6

1. Which descriptions are right about outliers?
A. Outliers caused by measurement error
B. Outliers reflecting ground truth
C. Outliers caused by equipment failure
D. Outliers always need to be dropped out
Answer: A, B, C

2. What are application cases of outlier mining?
A. Traffic incident detection
B. Credit card fraud detection
C. Network intrusion detection
D. Medical analysis
Answer: A, B, C, D

3. Which ones are methods to detect outliers?
A. Statistics-based approach
B. Distance-based approach
C. Bulk-based approach
D. Density-based approach
Answer: A, B, D

4. How to pick the right k by a heuristic method for the density-based outlier mining method?
A. K should be at least 10 to remove unwanted statistical fluctuations.
B. Picking 10 to 20 appears to work well in general.
C. Pick the upper bound value for k as the maximum number of "close by" objects that can potentially be global outliers.
D. Pick the upper bound value for k as the maximum number of "close by" objects that can potentially be local outliers.
Answer: A, B, D

5. Which ones are right about the three methods of outlier mining?
A. The statistics-based approach is simple and fast but has difficulty dealing with periodic data and categorical data.
B. The efficiency of the distance-based approach is low for large datasets in high-dimensional space.
C. The distance-based approach cannot be used on multidimensional datasets.
D. The density-based approach spends little cost on searching the neighborhood.
Answer: A, B

6. Distance-based outlier mining is not suitable for datasets that do not fit any standard distribution model. (True/False)
Answer: False

7. The statistics-based method requires knowing the distribution of the data and the distribution parameters in advance. (True/False)
Answer: True

8. When identifying outliers with a discordancy test, a data point is considered an outlier if it falls within the confidence interval. (True/False)
Answer: False

9. Mahalanobis distance accounts for the relative dispersions and inherent correlations among vector elements, which is different from Euclidean distance. (True/False)
Answer: True

10. An outlier is a data object that deviates significantly from the rest of the objects, as if it were generated by a different mechanism. (True/False)
Answer: True

Test 7

1. How to deal with imbalanced data in 2-class classification?
A. Oversampling
B. Undersampling
C. Threshold-moving
D. Ensemble techniques
Answer: A, B, C, D

2. Which ones are right when dealing with the class-imbalance problem?
A. Oversampling works by decreasing the number of minority positive tuples.
B. Undersampling works by increasing the number of majority negative tuples.
C. The SMOTE algorithm adds synthetic tuples that are close to the minority tuples in tuple space.
D. Threshold-moving and ensemble methods were empirically observed to outperform oversampling and undersampling.
Answer: C, D

3. Which steps are necessary when constructing an ensemble model?
A. Creating multiple datasets
B. Constructing a set of classifiers from the training data
C. Combining predictions made by multiple classifiers to obtain the final class label
D. Finding the best-performing predictions to obtain the final class label
Answer: A, B, C
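To make Test 7, Question 1 concrete, here is a minimal sketch of two of the listed remedies, random oversampling and threshold-moving, on a made-up imbalanced dataset. scikit-learn's LogisticRegression stands in for any probabilistic classifier, and the 0.3 threshold is an arbitrary illustrative choice, not a recommendation from the course.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Imbalanced toy set: 95 negative tuples, 5 positive (minority) tuples.
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(2, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)

# Remedy 1: oversampling - resample minority tuples with replacement until the classes balance.
minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=90, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
clf_over = LogisticRegression().fit(X_bal, y_bal)

# Remedy 2: threshold-moving - train on the original data, but call a tuple
# positive when P(y=1|x) exceeds a threshold lower than the default 0.5.
clf_raw = LogisticRegression().fit(X, y)
y_pred = (clf_raw.predict_proba(X)[:, 1] >= 0.3).astype(int)
print(y_pred.sum(), "tuples predicted positive")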