Expert Systems with Applications 33 (2007) 86–95

www.elsevier.com/locate/eswa

Semantic-based facial expression recognition using analytical hierarchy process

Shyi-Chyi Cheng a,*, Ming-Yao Chen b, Hong-Yi Chang b, Tzu-Chuan Chou c

a Department of Computer Science, National Taiwan Ocean University, Taiwan
b Department of Computer and Communication Engineering, National Kaohsiung First University of Science and Technology, Taiwan
c Department of Information Management, National Taiwan University of Science and Technology, Taiwan

Abstract

In this paper we present an automatic facial expression recognition system that utilizes a semantic-based learning algorithm using the analytical hierarchy process (AHP). All automatic facial expression recognition methods are similar in that they first extract some low-level features from the images or video, then these features are used as inputs into a classification system, and the outcome is one of the preselected emotion categories. Although the effectiveness of low-level features in automatic facial expression recognition systems has been widely studied, the success is shadowed by the innate discrepancy between machine and human perception of the image. The gap between low-level visual features and high-level semantics should be bridged in a proper way in order to construct a seamless automatic facial expression system satisfying the user perception. For this purpose, we use the AHP to provide a systematic way to evaluate the fitness of a semantic description for interpreting the emotion of a face image. A semantic-based learning algorithm is also proposed to adapt the weights of low-level visual features for automatic facial expression recognition. The weights are chosen such that the discrepancy between the facial expression recognition results obtained in terms of low-level features and the high-level semantic description is small. In the recognition phase, only the low-level features are used to classify the emotion of an input face image. The proposed semantic learning scheme provides a way to bridge the gap between the high-level semantic concept and the low-level features for automatic facial expression recognition. Experimental results show that the performance of the proposed method is excellent when compared with that of traditional facial expression recognition methods.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Facial expression recognition; Low-level visual feature; High-level semantic concept; Analytical hierarchy process; Semantic learning

1. Introduction

The common methods for most current human–computer interaction (HCI) techniques are modalities such as key press, mouse movement, or speech input. These HCI techniques do not provide natural, human-to-human-like communication. The information about emotions and the mental state of a person contained in human faces is usually ignored. Due to the advances of artificial intelligence techniques in the past decades, it is possible to enable communication with computers in a natural way, similar to everyday interaction between people, using an automatic facial expression recognition system (Fasel & Luettin, 2003).

* Corresponding author. Tel.: +886 2 2462 2192; fax: +886 2 2462 3249. E-mail address: csc@mail.ntou.edu.tw (S.-C. Cheng).

Since the early 1970s, Ekman and Friesen (Ekman & Friesen, 1978) have performed extensive studies of human facial expressions and defined six basic emotions (happiness, sadness, fear, disgust, surprise, and anger). Each of these six basic emotions corresponds to a unique facial expression. They also defined the Facial Action Coding System (FACS), a system that provides a systematic way to analyze facial expressions through standardized coding of changes in facial motion. FACS consists of 46 Action Units (AUs), which describe basic facial movements. Ekman's work inspired many researchers to analyze facial features

0957-4174/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2006.04.019


using image and video processing. By tracking facial features and measuring the amount of facial movement, they attempt to categorize different facial expressions. Based on these basic expressions or a subset of them, Suwa, Sugie, and Fujimora (1978) and Mase and Pentland (1991) performed early work on automatic facial expression analysis. Detailed reviews of much of the recent work on facial expression analysis can be found in Fasel and Luettin (2003) and Pantic and Rothkrantz (2000). All these methods are similar in that they first extract some features from the image or video, then these features are used as inputs into a classification system, and the outcome is one of the preselected emotion categories. They differ mainly in the features extracted and in the classifiers used to answer an input face image.

Facial features used for automatic facial expression analysis can be obtained using image processing techniques. In general, the dimensionality of the low-level visual features used to describe a facial expression is high. Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), the Discrete Cosine Transform (DCT), the Wavelet Transform, etc., are the commonly used techniques for data reduction and feature extraction (Calder, Burton, Miller, & Young, 2001; Draper, Baek, Bartlett, & Beveridge, 2003; Jeng, Yao, Han, Chern, & Liu, 1993; Lyons, Budynek, & Akamatsu, 1999; Martinez & Kak, 2001; Saxena, Anand, & Mukerjee, 2004). Such visual features contain the most discriminative information and provide more reliable training of classification systems. It is important to normalize the values that correspond to facial feature changes using the facial features extracted from the person's neutral face in order to construct a person-independent automatic facial expression recognition system. FACS has been used to describe visual features in facial expression recognition systems (Tian, Kanade, & Cohn, 2001). Furthermore, the low-level facial features known as Facial Animation Parameters (FAPs), supported by the MPEG-4 standard, are also widely used in automatic facial expression recognition (Aleksic & Katsaggelos, 2004; Donato, Hager, Bartlett, Ekman, & Sejnowski, 1999; Essa & Pentland, 1997; Pardas & Bonafonte, 2002). Fig. 1 shows the FAPs that contain significant information about facial expressions, controlling eyebrow (group 4) and mouth movement (group 8) (Text for ISO/IEC FDIS 14496-2 Visual, 1998).

In recent work, the approaches to automatic facial expression recognition can be classified into three categories (Fasel & Luettin, 2003). In the image-based approach, the whole face image, or images of parts of the face, are processed in order to obtain visual features. The weightings of different parts of the face should differ to improve the performance. For example, nose movement obviously contains less information about facial expressions than eyebrow and mouth movement. Hence, the weighting of nose movement should be decreased in order to improve the recognition accuracy. In the deformation-extraction approach, the facial expression recognition process is conducted through the deformation information of each part of the face. The models used to extract deformation information include the Active Shape Model and the Point Distribution Model. The common process for these models is to estimate the motion vectors of the feature points. The motion vectors are then used to recognize facial expressions. The disadvantages of this approach include: (1) the feature points are usually sensitive to noise (e.g., lighting condition changes) and hence unstable; (2) the computational complexity of motion estimation is high. In the geometric-analysis approach, the shape and position of each part of the face are used to represent the face for expression classification and recognition.

Fig. 1. Outer-lip and eyebrow FAPs (Tian et al., 2001).

Facial expression recognition is performed by a classifier, which often consists of models of pattern distribution, coupled to a decision procedure. A wide range of classifiers, covering parametric as well as non-parametric techniques, have been applied to the automatic facial expression recognition problem (Fasel & Luettin, 2003; Pantic & Rothkrantz, 2000). Neural networks (Tian et al., 2001), hidden Markov models (Aleksic & Katsaggelos, 2004; Pardas & Bonafonte, 2002), k-nearest neighbor classifiers (Bourel, Chibelushi, & Low, 2002), etc. are commonly used to perform classification.

Although the rapid advance of face image processing techniques, such as face detection and face recognition, provides a good starting point for facial expression analysis, the semantic gap between low-level visual features and high-level user perception remains a challenge in constructing an effective automatic facial expression recognition system. Facial features suffer a high degree of variability due to a number of factors, such as differences across people (arising from age, illness, gender, or race, for example), growth or shaving of beards or facial hair, make-up, blending of several expressions, and superposition of speech-related facial deformation onto affective deformation (Bourel et al., 2002). Low-level visual features are usually unstable due to the variation of imaging conditions. It is very important to introduce semantic knowledge into automatic facial expression recognition systems in order to improve the recognition rate. However, research into automatic facial expression recognition systems capable of adapting their knowledge periodically or continuously has not received much attention. Incorporating adaptation into the recognition framework is a feasible approach to improve the robustness of the system under adverse conditions.


In this paper we present an automatic facial expression recognition system that utilizes a semantic-based learning algorithm using the analytical hierarchy process (AHP) (Min, 1994; Saaty, 1980). In general, human emotions are hard to represent using only low-level visual features, due to the lack of facial image understanding models. Although the effectiveness of low-level features in automatic facial expression recognition systems has been widely studied, the success is shadowed by the innate discrepancy between machine and human perception of the image. The gap between low-level visual features and high-level semantics should be bridged in a proper way in order to construct a seamless automatic facial expression system satisfying the user perception. For this purpose, we use the AHP to provide a systematic way to evaluate the fitness of a semantic description for interpreting the emotion of a face image. A semantic-based learning algorithm is also proposed to adapt the weights of low-level visual features for automatic facial expression recognition. The weights are chosen such that the discrepancy between the facial expression recognition results obtained in terms of low-level features and the high-level semantic description is small. In the recognition phase, only the low-level features are used to classify the emotion of an input face image. The proposed semantic learning scheme provides a way to bridge the gap between the high-level semantic concept and the low-level features for automatic facial expression recognition. Experimental results show that the performance of the proposed method is excellent when compared with that of traditional facial expression recognition methods.

The remainder of this paper is organized as follows: Section 2 describes the proposed semantic-based facial representation using AHP in detail. The adaptation scheme for choosing the weights of low-level visual features by utilizing semantic clustering results is then presented in Section 3. In Section 4, some experimental results are shown. Finally, conclusions are given in Section 5.

2. Semantic-based face representation using analytic hierarchy process

AHP, proposed by Saaty (1980), provides a systematic way to solve multi-criteria preference problems involving qualitative data, and has been widely applied to a great diversity of areas (Cheng, Chou, Yang, & Chang, 2005; Lai, Trueblood, & Wong, 1999; Min, 1994). Pairwise comparisons are used in this decision-making process to form a reciprocal matrix by transforming qualitative data into crisp ratios, and this makes the process simple and easy to handle. The reciprocal matrix is then solved by a weight-finding method for determining the criteria importance and alternative performance. The rationale for choosing AHP, despite the controversy over its rigidity, is that the problem of assigning semantic descriptions to the objects of an image can be formulated as a multi-criteria preference problem. As shown in Fig. 2, the two face images should both be classified as the "happiness" emotion according to human assessment; however, the outer-lip movement of the two face images is much different. Semantic knowledge plays an important role in an automatic facial expression recognition system such that the system fairly meets user perception. It was shown in our previous work that the AHP provides a good way to evaluate the fitness of a semantic description used to represent an image object (Cheng et al., 2005).

Fig. 2. Two face images of the "happiness" emotion with different low-level visual features.

2.1. A brief review of AHP

The process of AHP includes three stages of problem-solving: decomposition, comparative judgments, and synthesis of priority. The decomposition stage aims at the construction of a hierarchical network to represent a decision problem, with the top level representing overall objectives and the lower levels representing criteria, sub-criteria, and alternatives. With comparative judgments, users are requested to set up a comparison matrix at each hierarchy by comparing pairs of criteria or sub-criteria. A scale of values ranging from 1 (indifference) to 9 (extreme preference) is used to express the user's preference. Finally, in the synthesis of priority stage, each comparison matrix is solved by an eigenvector method for determining the criteria importance and alternative performance.

The following list provides a brief summary of all steps involved in AHP applications:

1. Specify a concept hierarchy of interrelated decision criteria to form the decision hierarchy.

2. For each hierarchy, collect input data by performing a pairwise comparison of the decision criteria.

3. Estimate the relative weightings of the decision criteria by using an eigenvector method.

4. Aggregate the relative weights up the hierarchy to obtain a composite weight which represents the relative importance of each alternative according to the decision-maker's assessment.

One major advantage of AHP is that it is applicable to the problem of group decision-making. In a group decision setting, each participant is required to set up the preference of each alternative by following the AHP method, and all the views of the participants are used to obtain an average weighting of each alternative.
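Steps 2–4 above can be sketched in code. The sketch below is illustrative only (the comparison matrix values and the function name are mine, not from the paper); it approximates the principal eigenvector of a reciprocal comparison matrix by power iteration, which is one common realization of the eigenvector method:

```python
import numpy as np

def ahp_priorities(pairwise, iters=100):
    """Approximate the principal eigenvector of an AHP reciprocal
    comparison matrix by power iteration, normalized so the resulting
    priority weights sum to 1."""
    m = np.asarray(pairwise, dtype=float)
    w = np.ones(m.shape[0]) / m.shape[0]  # uniform starting vector
    for _ in range(iters):
        w = m @ w
        w /= w.sum()  # keep the iterate a distribution
    return w

# Entry (i, j) answers "how strongly is criterion i preferred over
# criterion j?" on the 1-9 scale; entry (j, i) holds the reciprocal.
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]
priorities = ahp_priorities(M)  # the strongest criterion gets the largest weight
```

The same routine applies unchanged at every level of the hierarchy, since each comparison matrix is solved independently.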

2.2. Semantic facial expression representation using AHP

We view a face image as a compound object containing multiple component objects, which are then described by several semantic descriptions according to a three-level concept hierarchy. The concept hierarchy, shown in Fig. 3, is used to assign the semantics of a facial expression to an input face image. According to the hierarchy, a method for applying semantic facial expression recognition to a face database via AHP is proposed in this study. For convenience of illustration, the classification hierarchy is abbreviated as the FEC hierarchy.

There are seven subjects in the top level of the FEC hierarchy. Each top-level subject, corresponding to a facial expression category, is divided into several sub-subjects corresponding to the parts of the face image controlling the human emotion, and each sub-subject is again decomposed into several Level 3 subjects corresponding to the facial animation parameters of MPEG-4 that are used to describe a facial expression. A path from the root to each leaf node forms a semantic description, and multiple semantic descriptions are possible for interpreting a facial object according to different aspects of the user's notion. A question arises naturally: is the weight of each path code of an image object equivalent? The answer is of course no. Some semantic descriptions are obviously more important than others for a specific image object. For example, the semantic description "happiness" is more important than the semantic description "sadness" for the image object in Fig. 2(b) according to the authors' opinion.

Fig. 3. The concept hierarchy of the facial expression for interpreting an input face image. (Level 1 holds the seven expression categories: Neutral, Happiness, Sadness, Anger, Fear, Surprise, Disgust; Level 2 holds the facial parts: eyebrows, eyes, mouth; Level 3 holds the movement states of each part, e.g., higher/lower eyebrows, wide-open mouth, higher/lower lip corners.)

2.3. Semantic-based facial expression representation

Assume the path codes of the semantic classification hierarchy are numbered from 1 to n. Given a face image I, the content of the image is represented by a semantic vector, which is defined as

    I = (s_1, s_2, ..., s_n),  sum_{i=1}^{n} s_i = 1,    (1)

where s_i denotes the weighting of the ith path code. Although the value of n is large, in any vector representing an image the vast majority of the components will be zero. The reason is that the number of objects perceived in an image is generally small.

Assigning weights to the path codes in a semantic vector is a complex process. Weights could be assigned automatically using object recognition techniques; however, this problem is far from being totally solved. Instead, in this paper, weights are assigned using the analytical hierarchy process. Note that the numerical character of a weight limits the possibility of assigning it directly through human assessment. One major advantage of using AHP to assign weights to the path codes is that users are only required to set the relative importance of several pairs of semantic descriptions, and then the values of the weights are calculated automatically.

The judgment of the importance of one semantic description over another can be made subjectively and converted into a numerical value using a scale of 1–9, where 1 denotes equal importance and 9 denotes the highest degree of favoritism. Table 1 lists the possible judgments and their representative numerical values.

Table 1
Pairwise comparison judgments between semantic descriptions A and B

Judgment                                            Value
A is equally preferred to B                         1
A is equally to moderately preferred over B         2
A is moderately preferred over B                    3
A is moderately to strongly preferred over B        4
A is strongly preferred to B                        5
A is strongly to very strongly preferred over B     6
A is very strongly preferred over B                 7
A is very strongly to extremely preferred over B    8
A is extremely preferred to B                       9

Fig. 4. An example image to be classified: (a) the face image; (b) the corresponding reciprocal matrix with respect to (a) for calculating the local weightings of the Level 1 semantic descriptions in interpreting the expressions of the image.

The numerical values representing the judgments of the pairwise comparisons are arranged to form a reciprocal matrix for further calculations. The main diagonal of the matrix is always 1. Users are required to adopt a top-down approach in their pairwise comparisons. Given an image, the first step of the classification process using AHP is to choose the large classification codes and evaluate their relative importance by performing pairwise comparisons. For example, Fig. 4(a), containing a face image, is the target of

classification. In this case, the image can be classified into three Level 1 expression categories: "Neutral" N, "Happiness" H, and "Surprise" S. Fig. 4(b) is the corresponding Level 1 reciprocal matrix M1 for judging the relative importance of the three semantic descriptions. The entries of M1 can be denoted as

              N          H          S
    N   [ w_N/w_N    w_H/w_N    w_S/w_N ]
M1 = H  [ w_N/w_H    w_H/w_H    w_S/w_H ]    (2)
    S   [ w_N/w_S    w_H/w_S    w_S/w_S ]

where w_N, w_H, and w_S are the relative importance values (defined in Table 1) for the three semantic descriptions N, H, and S, respectively. The Level 1 weightings of the three semantic descriptions are then obtained from M1.

Without loss of generality, let l, m, and n be the number of Level 1 semantic descriptions, the number of Level 2 semantic descriptions for each Level 1 description, and the number of Level 3 semantic descriptions for each Level 2 description, respectively. For each row of the Level 1 reciprocal matrix M1, we can define a weighting measurement as

    r_i = ( (a_i^1/a_1^1) x (a_i^1/a_2^1) x ... x (a_i^1/a_l^1) )^(1/l),  i = 1, ..., l,    (3)

where a_i^1 is the relative importance value of the ith Level 1 semantic description. The Level 1 weightings are then determined by

    w_i^1 = r_i / sum_{j=1}^{l} r_j,  i = 1, ..., l.    (4)

Similarly, we can compute the Level 2 weightings w_{i,j}^2, j = 1, ..., m, for the ith Level 1 semantic description, and the Level 3 weightings w_{i,j,k}^3, k = 1, ..., n, for the ith Level 1 and jth Level 2 semantic descriptions. Finally, the entry p of the semantic vector defined in Eq. (1) is computed as

    w_p = w_i^1 x w_{i,j}^2 x w_{i,j,k}^3,  where p = (i - 1) x l + (j - 1) x m + k.    (5)

Note that the number of reciprocal matrices for the image is l x m x n, which is actually equal to the number of path codes used to describe the face image. It would be too cumbersome to classify an image using AHP if the value of l x m x n were very large. Fortunately, this problem does not occur, because most face images do not need a large number of path codes to describe them; most need at most 4–10 path codes according to our experience. Obviously, most of the weightings corresponding to semantic descriptions in the semantic vector are zero.

3. Proposed semantic-based automatic facial expression recognition

The proposed automatic facial expression recognition system can be divided into two phases. In the learning phase, a training database is used to obtain the correct structure using the semantic vectors in learning the classifier. The semantic vectors of the training samples obtained from AHP are first clustered in order to choose the proper weightings for the extracted low-level visual features, which are used to compute the similarity value between two face images in the recognition phase. Fig. 5 shows the block diagram of the proposed method and will be discussed in detail later.

Fig. 5. Block diagram of the proposed semantic-based automatic facial expression recognition system. (Learning phase: semantic vector extraction using AHP, low-level feature extraction, semantic vector clustering, k-NN searching using semantic vectors and using low-level features, and weighting adaptation for low-level features. Recognition phase: low-level feature extraction, k-NN searching using low-level features, and facial expression recognition.)
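Under the row-geometric-mean reading of Eqs. (3)–(5), the weighting computation can be sketched as follows. This is an illustrative sketch only; the function names are mine, and the relative importance values in the example are invented:

```python
import math

def level_weights(a):
    """Eqs. (3)-(4): for relative importance values a[0..l-1], the row
    measure r_i is the l-th root of the product of the ratios a_i/a_j,
    and the level weightings are the r_i normalized to sum to 1."""
    l = len(a)
    r = [math.prod(a[i] / a[j] for j in range(l)) ** (1.0 / l)
         for i in range(l)]
    total = sum(r)
    return [ri / total for ri in r]

def path_weight(w1, w2, w3):
    """Eq. (5): the weight of a path code is the product of the
    Level 1, Level 2, and Level 3 weightings along its path."""
    return w1 * w2 * w3

# e.g. three Level 1 descriptions with importance values 5, 3, and 1
w_level1 = level_weights([5.0, 3.0, 1.0])
```

Because each level is normalized separately, the path weights obtained from `path_weight` inherit the property that the full semantic vector sums to 1.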

3.1. Low-level visual feature extraction

As mentioned above, the concept hierarchy shown in Fig. 3 plays the role of bridging the gap between high-level user perception and low-level visual features in the proposed facial expression recognition system. Actually, given an input face image, the possibility of the image being classified into a Level 3 subject of the concept hierarchy can be measured using a set of low-level visual features. For example, one can judge whether the positions of the eyebrows of an input face image are higher than those of the corresponding neutral face image by measuring the position changes of the eyebrows from the input image to the neutral image.

In the stage of feature extraction, 14 characteristic points in a face image are first detected, and then the relative feature distances (a–l, n) among these points, shown in Fig. 6, are calculated. Note that the sizes of two output images of a camera for the same face are different if two different focal lengths are used; hence, the feature distances should be normalized in order to eliminate the effects of camera operations. The distance n between the inner corners of the eyes is used to normalize the feature distances, due to the fact that the inner corners of the eyes are relatively stable to detect using image processing techniques. The normalized feature distances a'–l' are computed by

    a' = a/n, b' = b/n, c' = c/n, d' = d/n, e' = e/n, f' = f/n,
    g' = g/n, h' = h/n, i' = i/n, j' = j/n, k' = k/n, l' = l/n.    (6)

Finally, the 12 normalized feature distances, as low-level visual features, are further subtracted from the corresponding normalized feature distances of the common base image of the neutral expression.

Fig. 6. The extracted low-level visual features: (a) the characteristic points in the face image; (b) the distances among the characteristic points for describing the muscle activities.
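The normalization of Eq. (6) and the subtraction of the neutral baseline can be sketched with a hypothetical helper (the 14-point detection step itself is not shown):

```python
def expression_features(distances, n, neutral_normalized):
    """Divide the 12 raw feature distances a..l by the inner-eye
    distance n (Eq. (6)) to cancel camera scaling, then subtract the
    corresponding normalized distances of the neutral base image to
    obtain the final low-level feature vector."""
    normalized = [d / n for d in distances]
    return [x - base for x, base in zip(normalized, neutral_normalized)]

# a face identical to the neutral baseline yields an all-zero vector
fv = expression_features([2.0] * 12, 2.0, [1.0] * 12)
```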

3.2. Semantic clustering for facial expressions

As mentioned above, the semantic information of each training face image is represented as a semantic vector. However, the entries of the semantic vectors are mostly zero. The dimensionality of the semantic vector should be reduced in a proper way in order to compact the semantic information. In this work, we use the widely used K-means clustering to cluster the semantic vectors of the training database into K semantic clusters, each of which carries different semantic information. The value of K would be 7 (corresponding to the "neutral", "happiness", "sadness", "fear", "anger", "surprise", and "disgust" expression categories) for automatic facial expression recognition if the number of sample faces, covering all types of facial expressions, is large enough. On the other hand, the value of K could be less than 7 to reduce the effect of the small sample size problem, in which, for small training data, the within-class scatter matrix becomes singular and its inverse does not exist.

The semantic distance d(I_A, I_B) between two face images with semantic vectors I_A = (s_1^(A), s_2^(A), ..., s_N^(A)) and I_B = (s_1^(B), s_2^(B), ..., s_N^(B)) is defined as

    d(I_A, I_B) = sum_{i=1}^{N} [ s_i^(A) (1 - s_i^(B)) + s_i^(B) (1 - s_i^(A)) ],    (7)

where N is the total number of semantic descriptions. The term s_i^(A) (1 - s_i^(B)) + s_i^(B) (1 - s_i^(A)) is actually the probability that objects I_A and I_B disagree with each other on the ith semantic description.

For the sake of easy reference, the semantic clustering using the K-means algorithm is described below:

Step 1: For each semantic cluster S_k, k = 1, ..., K, a random initial semantic vector is chosen as the cluster representative I_k.

Step 2: For every semantic vector I, the difference between I and I_k, k = 1, ..., K, is evaluated. If d(I, I_i) <= d(I, I_j) for all j = 1, ..., K, then I is assigned to cluster S_i.

Step 3: According to the new classification, I_k is recalculated. If M_k elements are assigned to S_k, then

    I_k = (1/M_k) sum_{m=1}^{M_k} I_m,    (8)

where I_m, m = 1, ..., M_k, are the semantic vectors belonging to cluster S_k.

Step 4: If the new I_k is equal to the old one, then stop; otherwise go to Step 2.
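Steps 1–4 can be sketched as a small K-means over semantic vectors. This is a generic sketch rather than the authors' implementation; any distance function, such as the semantic distance of Eq. (7), can be passed in:

```python
import random

def kmeans_semantic(vectors, K, dist, iters=100, seed=0):
    """Step 1: pick K random vectors as cluster representatives.
    Step 2: assign every vector to its nearest representative.
    Step 3: recompute each representative as its cluster mean.
    Step 4: stop when the representatives no longer change."""
    rng = random.Random(seed)
    reps = [list(v) for v in rng.sample(vectors, K)]
    for _ in range(iters):
        clusters = [[] for _ in range(K)]
        for v in vectors:
            nearest = min(range(K), key=lambda i: dist(v, reps[i]))
            clusters[nearest].append(v)
        new_reps = [[sum(col) / len(c) for col in zip(*c)] if c else reps[i]
                    for i, c in enumerate(clusters)]
        if new_reps == reps:
            break
        reps = new_reps
    return reps, clusters

# toy usage: two well-separated groups of semantic vectors
vecs = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
reps, clusters = kmeans_semantic(
    vecs, 2, lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)))
```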

3.3. Weighting adaptation for low-level visual features

Once the semantic clusters are obtained, the 12 low-level visual features (cf. Fig. 6) are extracted from the content of each database image. Given a query face image, users should not be required to set the weighting of each feature type in order to recognize a semantically relevant expression.


Unfortunately, the low-level features and the high-level semantic concepts do not have an intuitive relationship; hence, the problem of weight setting is not a trivial job to solve. This limits the recognition accuracy of an automatic facial expression recognition system. In this paper, we propose a method for automatically determining the weightings of the low-level visual features for each semantic cluster.

A winnow-like, mistake-driven learning algorithm is used to learn the discriminant functions g(x) defining the decision boundaries of the semantic clusters in terms of low-level visual features. The paradigm followed in the literature for learning from labeled and unlabeled data is based on inducing classifiers from the semantic vectors of the training samples. The induced classifiers are then used to predict the classifiers for patterns in terms of low-level visual features, such that the classification results of the classifiers in terms of low-level visual features agree with those of the classifiers in terms of high-level semantic vectors.

Let F_q = (f_1^(q), f_2^(q), ..., f_m^(q)) be the low-level visual features of an input image q. Given a semantic cluster S_i containing n images, the n images can be ranked with respect to q in terms of semantic information using (7) as the ordered set S_i = (I_1, I_2, ..., I_n). The proposed weighting adaptation algorithm aims at choosing weightings for the low-level visual features such that S_i can be obtained when answering q in terms of low-level visual features. More concretely, we can define a cost function J(a) to be minimized as

    J(a) = sum_{j=1}^{n} | t_j^(S) - t_j^(L) |,    (9)

where a is the weighting vector for the low-level visual features, and t_j^(S) and t_j^(L) are the ranks of the jth image with respect to a query image in terms of high-level semantic vectors and low-level visual features, respectively. In addition, the distance between q and an image I in S_i is defined as

    D(q, I) = sum_{j=1}^{m} a_j ( f_j^(q) - f_j^(I) )^2.    (10)

The proposed learning algorithm uses a set of weak learners, each working on a single feature at a time. The value of a_k corresponding to the kth feature should decrease if J_k(a_k) > J(a), where J_k(a_k) is the cost function using only the kth feature of q. The proposed learning algorithm is briefly described as follows:

Algorithm. Weighting adaptation

Input: a semantic cluster S_i containing n images and the number of iterations T.
Output: a weighting vector a for S_i.
Method:

(1) Initialize the weights a_k^(0) = 1/m, k = 1, ..., m.
(2) Do for t = 1, ..., T:
(3) For each image q in S_i do:
    (3.1) Answer q and rank the n images in S_i using (7).
    (3.2) Answer q and rank the n images in S_i using (10), and compute the value of J(a) using (9).
    (3.3) Do for k = 1, ..., m:
        (3.3.1) Answer q and rank the n images in S_i using the kth low-level feature only, and compute the value of J_k(a_k^(t)).
        (3.3.2) Update the weighting factor a_k^(t) as follows:

            a_k^(t+1) = a_k^(t) / c    if J_k(a_k^(t)) > J(a^(t)),
            a_k^(t+1) = c a_k^(t)      otherwise,    (11)

        where c is the regulation factor; in this implementation, c = 1.05.
    (3.4) Normalize the weights so that they form a distribution:

            a_k^(t+1) = a_k^(t+1) / sum_{j=1}^{m} a_j^(t+1).    (12)

For each semantic cluster, we perform the weighting adaptation algorithm to set the values of the weights for the low-level visual features. Hence, the weighting vectors differ among semantic clusters. For each semantic cluster, the system learns the decision boundary in the low-level feature space supervised by the decision boundary in the high-level semantic space. The goal of our scheme is to adaptively learn boundaries to filter the images for late-stage facial expression recognition.
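A minimal sketch of the weighting adaptation loop (Eqs. (9)–(12)) is given below. It is illustrative only: the helper names are mine, `semantic_ranks_fn` stands in for the AHP-based ranking of Step (3.1), and the toy data assume feature 0 carries the semantic ordering while feature 1 is noise:

```python
def rank_order(query, items, dist):
    """Return, for every item, its rank position when the items are
    sorted by increasing distance from the query."""
    order = sorted(range(len(items)), key=lambda j: dist(query, items[j]))
    ranks = [0] * len(items)
    for pos, j in enumerate(order):
        ranks[j] = pos
    return ranks

def adapt_weights(features, semantic_ranks_fn, T=10, c=1.05):
    """Winnow-like update of Eqs. (9)-(12) for one semantic cluster:
    a single feature whose ranking disagrees with the semantic ranking
    more than the full weighted ranking does is demoted by the
    regulation factor c, otherwise promoted; the weights are then
    renormalized into a distribution (Eq. (12))."""
    m = len(features[0])
    a = [1.0 / m] * m
    for _ in range(T):
        for qi, q in enumerate(features):
            target = semantic_ranks_fn(qi)
            wdist = lambda x, y: sum(a[k] * (x[k] - y[k]) ** 2 for k in range(m))
            J = sum(abs(t - r) for t, r in
                    zip(target, rank_order(q, features, wdist)))
            for k in range(m):
                kdist = lambda x, y, k=k: (x[k] - y[k]) ** 2
                Jk = sum(abs(t - r) for t, r in
                         zip(target, rank_order(q, features, kdist)))
                a[k] = a[k] / c if Jk > J else a[k] * c
            total = sum(a)
            a = [ak / total for ak in a]
    return a

# feature 0 reproduces the semantic ordering; feature 1 is noise
feats = [[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]]
sem = lambda qi: rank_order(feats[qi], feats, lambda x, y: (x[0] - y[0]) ** 2)
weights = adapt_weights(feats, sem, T=3)  # feature 0 ends up weighted higher
```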

3.4. Facial expression recognition constrained by the boundary

For an input face image, after the weighting vector of the low-level visual features for each semantic cluster is learned, the database images with large weighted Euclidean distances (see (10)) with respect to the input image are filtered. More concretely, a k-nearest-neighbor (k-NN) searching algorithm in terms of low-level visual features is performed to find the top k nearest neighbors of the input face image in the training database. The top k similar database images are used to decide the final expression category for the input image according to their semantic information. Note that each training face image belongs to a specific semantic cluster; hence, the weighting vector of the training image must be retrieved in advance in order to compute the weighted Euclidean distance between the training image and the input image.

According to the concept hierarchy shown in Fig. 3, a semantic vector includes seven sub-vectors, each of which consists of 27 dimensions. The probabilities of the facial expressions on the basis of a semantic vector s = (s_1, s_2, ..., s_189) can be obtained from


    p_neutral   = sum_{i=1}^{27} s_i,
    p_happiness = sum_{i=28}^{54} s_i,
    p_sadness   = sum_{i=55}^{81} s_i,
    p_anger     = sum_{i=82}^{108} s_i,
    p_fear      = sum_{i=109}^{135} s_i,
    p_surprise  = sum_{i=136}^{162} s_i,
    p_disgust   = sum_{i=163}^{189} s_i.    (13)

Given an input face image, we can classify the input image into the facial expression category with the largest probability value according to its semantic vector.
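The category probabilities of Eq. (13) amount to summing the seven consecutive 27-dimensional sub-vectors; a small sketch (function name mine):

```python
def expression_probabilities(s):
    """Eq. (13): the 189-dimensional semantic vector is split into
    seven consecutive 27-dimensional sub-vectors, one per expression
    category, and each category probability is its sub-vector's sum."""
    categories = ["neutral", "happiness", "sadness", "anger",
                  "fear", "surprise", "disgust"]
    return {cat: sum(s[27 * i:27 * (i + 1)])
            for i, cat in enumerate(categories)}

# some weight on one "neutral" and one "happiness" path code
s = [0.0] * 189
s[0], s[30] = 0.3, 0.7
probs = expression_probabilities(s)
```

The predicted category is then simply the key with the largest value.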

The proposed automatic facial expression recognition algorithm is briefly described as follows:

Algorithm. The proposed recognition strategy

Input: a training database TD and an input face image q.
Output: the expression category of the input image.
Method:

(1) Perform the feature extraction process to obtain the low-level visual features F_q for q.

(2) Perform a k-NN searching process using F_q to find the top k similar database images, which form a candidate set C.

(3) Compute the interpolated semantic vector s_q using C as follows:

    s_q = sum_{j=1}^{|C|} a_j s_j,    (14)

where s_j is the semantic vector of the jth image I_j in C and a_j is the weight of s_j. The value of a_j is obtained by

    a_j = (1 - D(q, I_j)/D_max) / sum_{i=1}^{|C|} (1 - D(q, I_i)/D_max),    (15)

where D_max = max[D(q, I_j), j = 1, ..., |C|].

(4) According to s_q, compute the probability values of all the expression categories using (13).

(5) Output the facial expression category for q as the expression category with the largest probability value.

Obviously, the recognition rate of the proposed method depends on the number of nearest neighbors used to interpolate the semantic vector of the input image q. The underlying idea of the approach is to use a voting scheme to reduce the effect of outliers caused by using low-level visual features when retrieving similar images from the database. In this system, five nearest neighbors are enough to promote the robustness of the proposed method according to our experimental results.

4. Experimental results
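Steps (3)–(5) of the recognition strategy can be sketched as below. The helper is hypothetical and also guards the degenerate case, not discussed in the paper, where all candidate distances are equal (Eq. (15) would otherwise divide by zero):

```python
def interpolate_semantic_vector(dists, sem_vectors):
    """Eqs. (14)-(15): weight each k-NN candidate's semantic vector by
    1 - D/Dmax, normalize the weights over the candidate set, and sum
    the weighted semantic vectors."""
    d_max = max(dists)
    raw = [1.0 - d / d_max for d in dists] if d_max > 0 else [1.0] * len(dists)
    total = sum(raw)
    if total == 0:  # every candidate sits exactly at distance Dmax
        raw = [1.0] * len(dists)
        total = float(len(dists))
    a = [r / total for r in raw]
    dim = len(sem_vectors[0])
    return [sum(a[j] * sem_vectors[j][i] for j in range(len(a)))
            for i in range(dim)]

# the nearer candidate (distance 0) dominates the interpolation
sq = interpolate_semantic_vector([0.0, 1.0], [[1.0, 0.0], [0.0, 1.0]])
```

Feeding the interpolated vector to the Eq. (13) category sums then yields the final expression label.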

In order to evaluate the proposed approach, a series of experiments was conducted on an Intel Pentium-IV 1.5 GHz PC with the JAFFE (Japanese Female Facial Expression) face database, which includes ten persons and five types of facial expression (Lyons et al., 1999). For each person there are on average 10 face images. The images in the database are divided into two parts: one is the training database and the other is the test database. Each image in the training database is first analyzed by the AHP for testing the semantic learning approach. Test images are randomly extracted from the test database. Furthermore, some face images are directly captured from a CCD camera as input images to test the proposed recognition system.

In general, given an input face image, the expression category that the input image belongs to might differ across different human assessments. This makes it difficult to build the "ground truth" for evaluating the recognition performance of a system. The JAFFE database provides semantic information, assessed by a group of experts, for each image in it. In order to test the effectiveness of analyzing facial expressions using AHP, for each image in the training database, Table 2 compares the semantic vectors with the annotations provided by the JAFFE database. The labeling results of the two methods are very similar to each other. Accordingly, the semantic knowledge of facial expression built by AHP is trustworthy. In addition, AHP provides a systematic way to generate the semantic information for a face image, rather than labeling the image by intuition, which is not a trivial job even for an expert.

The weighting adaptation approach plays an important role in improving the recognition performance of the proposed automatic facial expression recognition system. Tables 3 and 4 show the confusion matrices for the system without and with the proposed weighting adaptation algorithm, respectively. The recognition rate of the proposed method is improved from 67.6% to 85.2%. An interesting result can be seen from the experimental results: the induced semantic knowledge using AHP cannot improve

Table 2
Confusion matrix for labeling face images using AHP and in a direct assignment provided by the JAFFE database (Lyons et al., 1999)

           Neutral  Happiness  Anger  Sadness  Surprise
Neutral    29/30    0          0      1        0
Happiness  1        32/34      0      0        1
Anger      0        0          26/30  3        1
Sadness    2        2          1      27/32    0
Surprise   0        1          1      0        28/30


Table 3
Confusion matrix for the system without using the weighting adaptation algorithm

                      Neutral  Happiness  Anger  Sadness  Surprise  Average
Neutral               22/24    3          1      7        1
Happiness             0        11/17      0      0        2
Anger                 0        0          5/9    7        0
Sadness               2        0          3      10/25    0
Surprise              0        3          0      1        19/22
Recognition rate (%)  92       65         55     40       86        67.6

Table 4
Confusion matrix for the proposed system using the weighting adaptation algorithm

                      Neutral  Happiness  Anger  Sadness  Surprise  Average
Neutral               20/24    0          0      3        0
Happiness             1        16/17      0      0        1
Anger                 0        0          7/9    1        0
Sadness               3        0          2      19/25    0
Surprise              0        1          0      2        21/22
Recognition rate (%)  83       94         77     76       96        85.2
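For clarity, the per-class recognition rates in Tables 3 and 4 follow directly from the confusion matrices: each column corresponds to one true expression category, and the rate is the diagonal count divided by the column total. The sketch below uses the Table 4 counts; note that plain rounding gives 78 and 95 for the anger and surprise columns where the published table prints 77 and 96, a small rounding discrepancy in the source.

```python
# Derive per-class recognition rates from the Table 4 confusion matrix.
# Rows are the recognized category, columns are the true category
# (order: neutral, happiness, anger, sadness, surprise).
confusion = [
    [20, 0, 0, 3, 0],    # recognized as neutral
    [1, 16, 0, 0, 1],    # recognized as happiness
    [0, 0, 7, 1, 0],     # recognized as anger
    [3, 0, 2, 19, 0],    # recognized as sadness
    [0, 1, 0, 2, 21],    # recognized as surprise
]

def per_class_rates(m):
    """Rate for each true class = diagonal count / column total."""
    n = len(m)
    totals = [sum(m[i][j] for i in range(n)) for j in range(n)]
    return [m[j][j] / totals[j] for j in range(n)]

rates = per_class_rates(confusion)
macro_average = sum(rates) / len(rates)
```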

Table 5
Confusion matrix for the system using the multi-layer perceptron (Tian et al., 2001)

                      Neutral  Happiness  Anger  Sadness  Surprise  Average
Neutral               17/24    5          0      7        4
Happiness             2        11/17      0      1        1
Anger                 0        0          5/9    4        0
Sadness               5        0          4      13/25    2
Surprise              0        1          0      0        15/22
Recognition rate (%)  71       65         56     52       68        62.4

Fig. 7. The user interface of the proposed method.

the recognition performance for the input images that belong to the "neutral" expression category. Actually, many test face images contain multiple expressions, especially when the test image is labeled as the "neutral" expression. However, interpreting an input image in terms of multiple expressions is adopted in our approach.
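The semantic-vector interpolation that lets the system interpret an input image in terms of multiple expressions can be sketched as follows. This is a hedged outline, not the paper's exact implementation: it assumes Euclidean distance on the low-level feature vectors, and the feature and semantic vectors below are illustrative placeholders.

```python
# Sketch: interpolate a query face's semantic vector from its k nearest
# training images in low-level feature space, then output the category
# with the largest interpolated probability.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query, training, k=5):
    """training: list of (feature_vector, semantic_vector) pairs.
    Returns (best_category_index, interpolated_semantic_vector)."""
    nearest = sorted(training, key=lambda t: euclidean(query, t[0]))[:k]
    dim = len(nearest[0][1])
    # Interpolate by averaging the neighbours' semantic vectors.
    semantic = [sum(sem[i] for _, sem in nearest) / k for i in range(dim)]
    best = max(range(dim), key=lambda i: semantic[i])
    return best, semantic

# Toy data: 1-D features and two expression categories.
training = [([0.0], [0.9, 0.1]), ([0.1], [0.8, 0.2]), ([0.2], [0.7, 0.3]),
            ([1.0], [0.1, 0.9]), ([1.1], [0.2, 0.8])]
category, semantic = classify([0.05], training, k=3)
```

Because the output is a full semantic vector rather than a single label, an ambiguous ("neutral"-like) face naturally receives non-trivial probability in several categories.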


In order to further verify the effectiveness of the proposed method, an automatic facial expression recognition system using the neural network technique, i.e., a multi-layer perceptron, is also simulated for comparison (Tian et al., 2001). Table 5 shows the recognition rate of the neural network approach. Accordingly, the proposed method outperforms the neural network approach. Fig. 7 shows a recognition example using the user interface of the proposed system.

5. Conclusion

In this paper, we have presented an automatic facial expression recognition system that utilizes semantic knowledge derived using AHP. The introduction of semantic knowledge based on human assessment bridges the gap between the low-level visual features and the high-level semantic concept. In conclusion, the contributions of the proposed approach are as follows: (1) a framework to quantize the qualitative data of user perception using AHP is developed to semantically describe facial expressions; (2) semantic learning utilizing the proposed weighting adaptation algorithm is implemented; (3) a semantic-based automatic facial expression recognition system is developed. Experimental results show the effectiveness of bridging the low-level visual features and the high-level user perception by adaptively tuning the weights of low-level visual features. The deficiencies of the proposed approach include: (1) the size of the training database is not large; the small-sample-size problem in machine learning should be explored in detail in order to further improve the performance of the system; (2) it is expected that the use of additional visual information about facial expressions would further improve recognition performance. Furthermore, semantic-based and soft computing methods based on image-based facial features can be combined to construct a system with machine intelligence.

Acknowledgement

This work has been supported in part by the National Science Council, Taiwan, Grants NSC 93-2213-E-327-002 and NSC 94-2213-E-327-010.

References

Aleksic, P. S., & Katsaggelos, A. K. (2004). Automatic facial expression recognition using facial animation parameters and multi-stream HMMs. In Proceedings of the 8th IEEE international conference on automatic face and gesture recognition.

Bourel, F., Chibelushi, C. C., & Low, A. A. (2002). Robust facial expression recognition using a state-based model of spatially-localised facial dynamics. In Proceedings of the fifth IEEE international conference on automatic face and gesture recognition (pp. 106–111).

Calder, A. J., Burton, A. M., Miller, P., & Young, A. W. (2001). A principal component analysis of facial expressions. Vision Research, 41, 1179–1208.

Cheng, S.-C., Chou, T.-C., Yang, C.-L., & Chang, H.-Y. (2005). A semantic learning for content-based image retrieval using analytical hierarchy process. Expert Systems with Applications, 28, 495–505.

Donato, G., Hager, S., Bartlett, C., Ekman, P., & Sejnowski, J. (1999). Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10), 974–989.

Draper, B. A., Baek, K., Bartlett, M. S., & Beveridge, J. R. (2003). Recognizing faces with PCA and ICA. Computer Vision and Image Understanding, 91, 115–137.

Ekman, P., & Friesen, W. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press.

Essa, I., & Pentland, A. (1997). Coding, analysis, interpretation and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 757–763.

Fasel, B., & Luettin, J. (2003). Automatic facial expression analysis: A survey. Pattern Recognition, 36, 259–275.

Jeng, S. H., Yao, H. Y. M., Han, C. C., Chern, M. Y., & Liu, Y. T. (1993). Facial feature detection using geometrical face model: An efficient approach. Pattern Recognition, 31(3), 273–282.

Lai, V. S., Trueblood, R. P., & Wong, B. K. (1999). Software selection: A case study of the application of the analytical hierarchy process to the selection of multimedia authoring system. Information and Management, 36, 221–232.

Lyons, M. J., Budynek, J., & Akamatsu, S. (1999). Automatic classification of single facial images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(12), 1357–1362.

Martinez, A. M., & Kak, A. C. (2001). PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 228–233.

Mase, K., & Pentland, A. (1991). Recognition of facial expression from optical flow. Transactions of IEICE, E74(10), 3474–3483.

Min, H. (1994). Location analysis of international consolidation terminals using the analytical hierarchy process. Journal of Business Logistics, 15(2), 25–44.

Pantic, M., & Rothkrantz, L. J. M. (2000). Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424–1445.

Pardas, M., & Bonafonte, A. (2002). Facial animation parameters extraction and expression recognition using Hidden Markov Models. Signal Processing: Image Communication, 17, 675–688.

Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.

Saxena, A., Anand, A., & Mukerjee, A. (2004). Robust facial expression recognition using spatially localized geometric model. International Conference on Systematics, 12(15), 124–129.

Suwa, M., Sugie, N., & Fujimora, K. (1978). A preliminary note on pattern recognition of human emotional expression. In Proceedings of the 4th international joint conference on pattern recognition (pp. 408–410).

Text for ISO/IEC FDIS 14496-2 Visual (1998). ISO/IEC JTC1/SC29/WG11 N2502, November.

Tian, Y.-L., Kanade, T., & Cohn, J. F. (2001). Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 97–115.
