Electronics 2020, 9, 57; doi:10.3390/electronics9010057 www.mdpi.com/journal/electronics
Article
3-D Synapse Array Architecture Based on Charge-Trap Flash Memory for Neuromorphic Application
Hyun-Seok Choi 1, Yu Jeong Park 2, Jong-Ho Lee 3 and Yoon Kim 1,*
1 School of Electrical and Computer Engineering, University of Seoul, Seoul 02504, Korea; cawai7@naver.com
2 Applied Materials Korea, Ltd., Hwaseong-si, Gyeonggi-do 18364, Korea; pyoojeng@naver.com
3 Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea; jhl@snu.ac.kr
* Correspondence: yoonkim82@uos.ac.kr; Tel.: +82-02-6490-2352
Received: 29 November 2019; Accepted: 29 December 2019; Published: 30 December 2019
Abstract: In order to address a fundamental bottleneck of conventional digital computers, there has recently been a tremendous upsurge of research on hardware-based neuromorphic systems. To emulate the functionalities of artificial neural networks, various synaptic devices and their 2-D cross-point array structures have been proposed. In our previous work, we proposed a 3-D synapse array architecture based on a charge-trap flash (CTF) memory. It combines the high-density integration of 3-D stacking technology with the excellent reliability characteristics of mature CTF device technology. This paper examines several issues of the 3-D synapse array architecture. We also propose an improved structure and programming method compared to the previous work. The synaptic characteristics of the proposed method are closely examined and validated through a technology computer-aided design (TCAD) device simulation and a system-level simulation of a pattern recognition task. The proposed technology will be a promising solution for high-performance, high-reliability neuromorphic hardware systems.
Keywords: 3-D neuromorphic system; 3-D stacked synapse array; charge-trap flash synapse
1. Introduction
Neuromorphic systems have been attracting much attention as next-generation computing systems that overcome the limitations of the von Neumann architecture [1–5]. The term "neuromorphic" refers to an artificial neural system that mimics the neurons and synapses of the biological nervous system [3]. A neuron generates a spike when its membrane potential, which is the result of the spatial and temporal summation of the signals received from the pre-neurons, exceeds a threshold, and the generated spike is transmitted to the post-neuron. A synapse is the junction between neurons, and each synapse has its own synaptic weight, which is the connection strength between the neurons [6]. In a neuromorphic system, a synaptic weight can be represented by the conductance of a synapse device.

The requirements for a synapse device to implement a neuromorphic system are as follows: small cell size, low energy consumption, multi-level operation, symmetric and linear weight change, high endurance, and complementary metal-oxide-semiconductor (CMOS) compatibility [5]. Various memory devices, such as static random-access memories (SRAM) [7], resistive random-access memories (RRAM) [8], phase-change memories (PCM) [9], floating-gate memories (FG-memory) [10], and charge-trap flash memories [11], have been proposed to implement the synapse operation. Among them, charge-trap flash (CTF) devices have good CMOS compatibility and excellent reliability [12–15].
In our previous work, we proposed a 3-D stacked synapse array based on a charge-trap flash (CTF) device [11]. Three-dimensional stacking technology is currently used in commercial Not-AND (NAND) flash memory products for ultra-high density [14]. Similarly, a 3-D stacked synapse array has the advantage of chip-size reduction when implementing very-large-scale artificial neural networks. Consequently, it has the potential to be a promising technology for implementing neuromorphic hardware systems. For the design of the 3-D stacked synapse array architecture, there are several issues. At the full-array level, how to operate each layer selectively and how to efficiently form the metal interconnects with peripheral circuits are critical issues. At the device level, how to implement accurate synaptic weight levels with low energy consumption is an important issue. In particular, linear and symmetric synaptic weight (conductance) modulation is essential to improve the accuracy of neuromorphic hardware systems [1–4].
In this paper, we examine these issues and suggest two improvements in terms of architecture design and device operation method. The rest of the paper is structured as follows: Section 2 presents design methods from the viewpoint of a full-chip architecture. In this section, we review the 3-D stacked synapse array structure developed in the previous work [11] and propose an improved version of the 3-D stacked synapse array architecture that solves a problem of the previous version. In Section 3, we propose an improved programming method to obtain linear and symmetric conductance changes. Using a pattern recognition application with the Modified National Institute of Standards and Technology (MNIST) database, we demonstrate the improvement of the proposed method.
2. Design Methods of the 3-D Synapse Array Architecture
In general, a large artificial neural network that has a large number of synaptic weights and neuron layers is required to achieve high performance on artificial intelligence tasks. In the case of the ImageNet classification challenge, state-of-the-art deep neural network (DNN) architectures have 5~155 M synaptic weight parameters [16]. In order to efficiently implement a large artificial neural network on a limited-size hardware chip, we proposed the 3-D stacked synapse array structure (Figure 1) in the previous work [11].
Figure 1. 3-D synapse array structure [11]. (a) 3-D stacked synapse device; (b) Unit synapse cell structure.
The unit synapse cell is composed of two CTF devices having two drain nodes (D(+), D(−)) and a common source node (S). The D(+) part is connected to the output neuron circuit to increase the membrane potential, acting as an excitatory synapse. The D(−) part is connected to the output neuron circuit to decrease the membrane potential, acting as an inhibitory synapse. With this configuration, the cell can represent both negative and positive weights at the same time. As summarized in Table 1, the CTF device has several advantages over other non-volatile memory devices. First, it does not need an additional selector device because the three-terminal MOSFET-based unit cell has a built-in selection operation. Second, it has perfect CMOS compatibility. Third, linear and incremental modulation of the weight (conductance) can be achieved more easily because its conductance is determined by the number of trapped charges. Fourth, it has good retention reliability characteristics. On the other hand, the drawback of CTF is its large power consumption during the program operation. Therefore, CTF devices are best suited for off-chip learning-based neuromorphic systems where frequent weight updates do not occur.
Table 1. Comparison between non-volatile memory devices for neuromorphic hardware systems.

                        RRAM           PCM            STT-MRAM           CTF
Device Structure        2 terminals    2 terminals    2 terminals        3 terminals
Selector                needed         needed         needed             unneeded
Cell Size               4~12 F^2       4~12 F^2       6~20 F^2           4~8 F^2
CMOS Compatibility      good           good           moderate           very good
Multi-Level Operation   good           good           moderate           very good
Weight Change           abrupt SET     abrupt RESET   stochastic change  good, symmetric
Write Latency           20~100 ns      40~150 ns      2~20 ns            >1 μs
Write Energy            low            mid            low                mid~high
Retention               moderate       good           good               very good
The proposed 3-D stacked synapse array structure is based on the word-line stacking method, which is similar to commercial V-NAND flash memory. Therefore, it has the advantage of utilizing the existing stable process methods used in V-NAND flash memory.

A key issue in the design of a 3-D stacked synapse array architecture is the metal interconnection. For example, a 4-layer stacked synapse array has four times as many word lines as a 2-D synapse array. If the word-line (WL) decoder is connected by a conventional metal interconnection method, the vertical length of the WL decoder (H_WL_Decoder) increases as illustrated in Figure 2, resulting in an enormous loss of area efficiency at the full-chip level. To solve this issue, we proposed a smart layer select decoder design with 3-D metal line connection in the previous work [11]. As shown in Figure 3a, the area of the WL decoder is not increased, and a layer select decoder is added to selectively operate each stacked layer. The layer select decoder delivers the gate voltages generated by the WL decoder to the WLs of the selected layer. It is important to note that the vertical length of the layer select decoder is the same as that of the WL decoder, and its horizontal length is only 4F × N, where F is the minimum feature size and N is the number of stacked layers. The specific structure of the transistors and metal interconnects is depicted in our previous paper [11].
Figure 2. Metal interconnection scheme of the synapse array architecture. (a) 2-D neuromorphic system architecture; (b) 3-D neuromorphic system architecture (a bad design example).

Figure 3. Schematic of the proposed 3-D synapse array architecture. (a) Metal interconnection of the full-chip architecture; (b) Configuration of each synapse layer to implement an artificial neural network.
The top-view layout of the 3-D synapse array architecture is illustrated in Figure 4. The layer select decoder is composed of pass transistors. The pass transistors are arranged next to each word line and are connected one-to-one with each WL contact. The gate nodes of the pass transistors are vertically connected to form a layer select line (LSL) that is controlled by the LSL control circuit. Through this configuration, each stacked layer can be selectively operated while maintaining a compact full-chip configuration. For example, if the turn-on voltage is applied to L4 and the turn-off voltages are applied to L1~L3, the pass transistors corresponding to L = 4 are activated. Consequently, the WL voltages generated in the WL decoder are transferred to the fourth-layer WLs (L = 4).
In this paper, we propose an improved architecture design compared to the previous work, adding a ground select decoder as shown in Figure 4. If there is only a layer select decoder, the WLs of the unselected stacked layers are in a floating state because they are not connected to the WL decoder. In this case, the potential of the WLs of the unselected layers varies due to the capacitive coupling between the stacked WLs. In the worst case, the WLs of the unselected layers located above or below (L = n − 1 or L = n + 1) the selected layer (L = n) may be boosted together when a high voltage is applied to the selected WLs. To eliminate this inherent risk of the previous version of the architecture, a ground select decoder that applies a turn-off voltage (0 V) to the WLs of the unselected layers is added to the right side of the main 3-D stacked synapse array, as shown in Figure 4.
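The selective routing performed by the layer select and ground select decoders can be captured in a short behavioral sketch. This is an illustrative model only; the layer count, word-line count, and function names are assumptions, not values from the paper.

```python
N_LAYERS = 4          # number of stacked WL layers (illustrative)
WLS_PER_LAYER = 8     # word lines per layer (illustrative)

def route_wl_voltages(selected_layer, wl_voltages):
    """Return the voltage seen on every WL of every stacked layer.

    The layer select decoder passes the WL-decoder voltages only to the
    selected layer (its LSL turns the pass transistors on). The ground
    select decoder ties all unselected-layer WLs to 0 V instead of
    leaving them floating, avoiding the capacitive-coupling hazard of
    the previous architecture.
    """
    assert len(wl_voltages) == WLS_PER_LAYER
    grid = {}
    for layer in range(1, N_LAYERS + 1):
        if layer == selected_layer:
            grid[layer] = list(wl_voltages)       # pass transistors on
        else:
            grid[layer] = [0.0] * WLS_PER_LAYER   # grounded, not floating
    return grid

# Example: program voltage delivered to layer L = 4 only.
grid = route_wl_voltages(4, [6.0] * WLS_PER_LAYER)
```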
Figure 4. Top view of the revised synapse array architecture.
The detailed manufacturing process of the 3-D synapse array was described in our previous paper [11]. The revised synapse array architecture can be made with the same process method. Since the newly added ground select decoder has the same structure as the layer select decoder, it can be made by simply adding the same layout as the layer select decoder.

To validate the synaptic operation of the designed CTF-based synapse device, technology computer-aided design (TCAD) device simulation (Synopsys Sentaurus [17]) was used. The specific device parameters are summarized in Table 2. The electrical characteristics of the designed synapse device are discussed in the next section.
Table 2. Physical parameters of the device used for electrical simulation.

Parameter       Value
L_S = L_D       50 nm
L_CH            100 nm
T_CH            10 nm
T_O/N/O         3/6/6 nm
W_WL = W_S/D    100 nm
3. Results

3.1. Synapse Device Operation
In the proposed synapse array (Figure 3b), the synaptic weight (w_ij) of the artificial neural network is represented as follows:

w_ij = G+_ij − G−_ij.  (1)

As depicted in Figure 3b, G+_ij and G−_ij are the conductances of the D(+) CTF device and the D(−) CTF device, respectively. Each conductance is determined by the amount of trapped charge in each charge-trap layer (silicon nitride). For the conductance modulation, hot-electron injection (HEI) and hot-hole injection (HHI) can be used as the charge injection mechanisms. The potentiation process of increasing the synaptic weight can be performed by increasing G+_ij and decreasing G−_ij. On the other hand, the depression process of decreasing the synaptic weight can be carried out by decreasing G+_ij and increasing G−_ij. Using a technology computer-aided design (TCAD) device simulation (Synopsys Sentaurus), we verify two pulse schemes for the modulation of the synaptic weight. A successive-pulse programming scheme and the incremental-step-pulse programming (ISPP) scheme are illustrated in Figure 5a,b, respectively.
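As a minimal sketch of Equation (1) and the potentiation/depression scheme above, the differential synapse pair can be modeled as follows. The conductance range and step size here are illustrative assumptions; a real device follows the TCAD-simulated characteristics.

```python
class CTFSynapse:
    """Differential synapse: weight = G+ - G-, per Equation (1)."""

    def __init__(self, g_plus=0.5, g_minus=0.5,
                 g_min=0.0, g_max=1.0, step=0.05):
        self.g_plus, self.g_minus = g_plus, g_minus
        self.g_min, self.g_max, self.step = g_min, g_max, step

    @property
    def weight(self):
        return self.g_plus - self.g_minus   # w_ij = G+_ij - G-_ij

    def potentiate(self):
        # HHI on the D(+) device raises G+, HEI on the D(-) device lowers G-.
        self.g_plus = min(self.g_plus + self.step, self.g_max)
        self.g_minus = max(self.g_minus - self.step, self.g_min)

    def depress(self):
        # HEI on the D(+) device lowers G+, HHI on the D(-) device raises G-.
        self.g_plus = max(self.g_plus - self.step, self.g_min)
        self.g_minus = min(self.g_minus + self.step, self.g_max)

s = CTFSynapse()
s.potentiate()   # weight moves up by roughly 2 * step
```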
Figure 5. Programming schemes for the synaptic weight (conductance). (a) Successive-pulse programming scheme; (b) Incremental-step-pulse programming scheme.
Successive-pulse programming is a method of continuously applying drain pulses with the same voltage, as shown in Figure 5a. In this programming scheme, the amount of conductance change is controlled by the number of applied drain pulses. When a drain pulse is applied, the sign of the gate voltage determines whether HEI or HHI occurs. If the drain pulse is applied when the gate bias is positive (6 V), HEI occurs. In this case, the threshold voltage increases because of the trapped electrons, and the conductance decreases. On the other hand, if the drain pulse is applied when the gate bias is negative (−7 V), HHI occurs. In this case, the threshold voltage decreases because of the trapped holes, and the conductance increases. The proposed unit synapse cell is composed of two CTF devices. Consequently, the potentiation operation is conducted simultaneously by HHI in the D(+) CTF device and HEI in the D(−) CTF device. The depression operation is conducted by HEI in the D(+) device and HHI in the D(−) device.
The ISPP is used as the program scheme of NAND flash memory [18]. The program pulse is increased by a constant value V_step after each program step, as shown in Figure 5b. In our previous paper, only successive-pulse programming was used. In this work, we applied the ISPP method to the conductance modulation of our designed synapse device. Using a TCAD device simulation, we compared the conductance modulation characteristics of successive-pulse programming and the ISPP. As shown in Figure 6, the ISPP scheme shows better synaptic behavior than the successive-pulse scheme: the conductance changes linearly with the number of applied pulses. Also, the range of available synaptic weights (memory window) can be further increased. Consequently, the ISPP scheme can adjust the synaptic weight more accurately than the successive-pulse programming scheme during the learning process. However, the ISPP scheme also has a drawback. In order to determine the start pulse voltage, a verify operation is required prior to programming to check the current synaptic weight value. Therefore, the ISPP scheme can increase the accuracy of the learning process, but it also increases time and energy consumption.
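The two drain-pulse schedules of Figure 5 can be sketched as simple amplitude sequences. The voltages below are placeholders, not the values used in the simulation.

```python
def successive_pulses(v_drain, n):
    """Successive-pulse scheme: n identical drain pulses (Figure 5a)."""
    return [v_drain] * n

def ispp_pulses(v_start, v_step, n):
    """ISPP: amplitude grows by a constant V_step each step (Figure 5b)."""
    return [v_start + k * v_step for k in range(n)]

flat = successive_pulses(4.0, 5)      # [4.0, 4.0, 4.0, 4.0, 4.0]
ramp = ispp_pulses(3.0, 0.2, 5)       # 3.0 V start, 0.2 V step, 5 pulses
```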
Figure 6. Gradual changes of the synaptic weights by successive-pulse programming and incremental-step-pulse programming (ISPP).
3.2. System-Level Simulation for Pattern Recognition
To validate the functionality of the proposed programming schemes, a single-layer artificial neural network for Modified National Institute of Standards and Technology (MNIST) pattern recognition was simulated. The MNIST database is a large database of handwritten digits, which contains about 60,000 training images and 10,000 test images [19]. A total of 784 input neurons represent the 28 × 28 pixels of each image, and 10 output neurons represent the 10 digits (0~9). We used a rectified linear unit (ReLU), one of the popular activation functions, as the activation function of the neurons [20]. For the learning process, a supervised learning method was used. First, the error was calculated at the output neurons. Next, the target change in synaptic weight (the number of programming pulses) was determined by the gradient descent method. After that, the synaptic weight values were updated based on fitted equations for the conductance modulation characteristics of the successive-pulse programming scheme and the ISPP scheme.
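The training loop described above (output-neuron error, gradient descent, quantization to an integer number of programming pulses) can be sketched as follows. Random data stands in for MNIST, and the per-pulse weight change (DELTA_G) is an assumed constant, a simplification of the fitted conductance-modulation equations used in the actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA_G = 0.01                          # assumed weight change per pulse
W = rng.normal(0.0, 0.1, (784, 10))     # 784 inputs x 10 output neurons

def relu(x):
    return np.maximum(x, 0.0)

def train_step(x, target, lr=0.1):
    """One supervised update: output error -> gradient -> integer pulse
    counts -> weight (conductance) update."""
    global W
    y = relu(x @ W)
    err = target - y                        # error at the output neurons
    grad = np.outer(x, err * (y > 0))       # gradient through the ReLU
    pulses = np.round(lr * grad / DELTA_G)  # quantize to programming pulses
    W += pulses * DELTA_G                   # apply pulse-quantized update

x = rng.random(784)                     # stand-in for one 28x28 image
t = np.zeros(10); t[3] = 1.0            # one-hot label for digit "3"
w_before = W.copy()
train_step(x, t)
```

Because the update is quantized, every weight change is an exact integer multiple of DELTA_G, mirroring the pulse-count-based programming of the hardware.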
Figure 7a shows the system-level simulation result for the pattern recognition accuracy with the 10,000 test image samples. Compared to our previous work [11], the ISPP scheme increases the recognition accuracy by about 6% (successive-pulse programming scheme in our previous work: 79.83% [11]; ISPP scheme in this work: 85.9%). This result is in good agreement with other reports showing that a linear conductance modulation characteristic is essential for better performance of neuromorphic systems [5,21]. The synaptic weight maps after training on 10,000 samples with the ISPP scheme are illustrated in Figure 7b.
In addition, we examined the synaptic weight modulation characteristics for various values of V_step in the ISPP scheme. As illustrated in Figure 8a, a smaller V_step allows for finer conductance modulation, which means that the number of synaptic weight levels can be increased. As a result, the fine conductance modulation enabled by a smaller V_step achieves a more accurate pattern recognition rate, as shown in Figure 8b. It should be noted, however, that the retention characteristics (the ability to distinguish each level over a long time) can deteriorate when the interval between the synaptic weight levels becomes narrow. Therefore, the magnitude of V_step should be determined considering the trade-off between the retention characteristics and the accuracy.
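Under the simplifying assumption that one ISPP step yields one distinguishable conductance level, this trade-off can be quantified by counting how many steps of size V_step fit in a fixed program-voltage window. The window bounds below are illustrative.

```python
def weight_levels(v_start, v_end, v_step):
    """Number of ISPP steps (weight levels) that fit in the program-voltage
    window [v_start, v_end], assuming one level per step."""
    return int(round((v_end - v_start) / v_step)) + 1

# Same 3 V window: a smaller V_step gives more levels, but each level is
# spaced more narrowly, which degrades the retention margin.
coarse = weight_levels(3.0, 6.0, 0.3)   # fewer, well-separated levels
fine = weight_levels(3.0, 6.0, 0.1)     # more, narrowly spaced levels
```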
Figure 7. Modified National Institute of Standards and Technology (MNIST) pattern recognition results. (a) Recognition accuracy comparison between successive-pulse programming and the ISPP; (b) Synaptic weight map after training on 10,000 samples with the ISPP scheme.
Figure 8. MNIST pattern recognition results using the ISPP scheme. (a) Gradual conductance change for various V_step values; (b) Recognition accuracy as a function of the number of training samples for various V_step values. The numbers indicate the weight levels (maximum pulse number).
4. Discussion
Currently, numerous studies based on different types of non-volatile memory devices are being conducted to implement neuromorphic hardware systems. Table 3 summarizes some of the research results.
Table 3. Comparison between several research results for neuromorphic applications.

                    This Work     Previous Work [11]  [22]          [23]          [24]
Synapse Device      CTF           CTF                 CTF           RRAM          PRAM
Array Architecture  3-D array     3-D array           2-D array     2-D array     2-D array
Neuron Layer        single-layer  single-layer        single-layer  single-layer  multi-layer
Learning Type       supervised    supervised          supervised    supervised    unsupervised
Recognition Rate    85.9%         79.8%               84%           87.9%         95.5%
Result Type         simulation    simulation          measurement   measurement   simulation
Almost all previous studies are based on the 2-D synapse array structure; we proposed the 3-D stacked synapse array structure for the first time. This paper has addressed several issues associated with the design of the 3-D synapse array architecture at the full-chip level. It will be an important guideline for designing 3-D stacked synapse arrays. The approach of stacking CTF devices is a mature technology that is already used in commercial 3-D NAND flash memories. Consequently, the proposed 3-D synapse architecture is expected to have a high potential for actual mass production. Also, it can achieve excellent reliability by utilizing the various technologies used in NAND flash memory. For example, we have demonstrated that the ISPP method can improve the pattern recognition accuracy of a neuromorphic system.

For future work, we will demonstrate the superiority of the proposed 3-D synapse architecture based on an actual fabricated array. In addition, applying the architecture to various artificial neural networks, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), will be a crucial topic.
5. Conclusions

We proposed a 3-D synapse array architecture based on a CTF memory device. To resolve a drawback of the previous version of the architecture, a ground select decoder was newly added. Also, we introduced the ISPP scheme to improve the linearity of the conductance modulation. The characteristics of the synaptic weight modulation were characterized using a TCAD device simulation. In addition, we demonstrated the feasibility of the proposed architecture for neuromorphic system applications through a MATLAB simulation of MNIST pattern recognition. The proposed 3-D synapse array architecture, which exhibits a compact chip configuration and high integration ability, will be a promising technology for realizing hardware-based neuromorphic systems.
Author Contributions: H.-S.C. and Y.K. designed the architecture and wrote the manuscript. Y.J.P. performed the device simulations. Y.K. confirmed the validity of the designed architecture and the simulated synaptic operation. J.-H.L. conceived and developed the various types of 3-D synapse structures and initiated the overall research project. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the 2019 Research Fund of the University of Seoul for Yoon Kim. Also, this work was supported by the MOTIE (Ministry of Trade, Industry & Energy) (10080583) and the KSRC (Korea Semiconductor Research Consortium) support program for the development of future semiconductor devices for Jong-Ho Lee.

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Yu, S.; Gao, B.; Fang, Z.; Yu, H.; Kang, J.; Wong, H.S. A low energy oxide-based electronic synaptic device for neuromorphic visual systems with tolerance to device variation. Adv. Mater. 2013, 25, 1774–1779.
2. Liu, X.; Mao, M.; Liu, B.; Li, H.; Chen, Y.; Li, B.; Wang, Y.; Jiang, H.; Barnell, M.; Wu, Q.; et al. RENO: A high-efficient reconfigurable neuromorphic computing accelerator design. In Proceedings of the 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 8–12 June 2015; pp. 1–6.
3. Mead, C. Neuromorphic electronic systems. Proc. IEEE 1990, 78, 1629–1636.
4. Burr, G.W.; Shelby, R.M.; Sebastian, A.; Kim, S.; Kim, S.; Sidler, S.; Virwani, K.; Ishii, M.; Narayanan, P.; Fumarola, A.; et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X 2016, 2, 89–124.
5. Choi, H.S.; Wee, D.H.; Kim, H.; Kim, S.; Ryoo, K.C.; Park, B.G.; Kim, Y. 3-D Floating-Gate Synapse Array with Spike-Time-Dependent Plasticity. IEEE Trans. Electron Devices 2018, 65, 101–107.
6. Roberts, P.D.; Bell, C.C. Spike timing dependent synaptic plasticity in biological systems. Biol. Cybern. 2002, 87, 392–403.
7. Akopyan, F.; Sawada, J.; Cassidy, A.; Alvarez-Icaza, R.; Arthur, J.; Merolla, P.; Imam, N.; Nakamura, Y.; Datta, P.; Nam, G.J.; et al. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2015, 34, 1537–1557.
8. Yu, S.; Wu, Y.; Jeyasingh, R.; Kuzum, D.; Wong, H.S. An Electronic Synapse Device Based on Metal Oxide Resistive Switching Memory for Neuromorphic Computation. IEEE Trans. Electron Devices 2011, 58, 2729–2737.
9. Panwar, N.; Kumar, D.; Upadhyay, N.K.; Arya, P.; Ganguly, U.; Rajendran, B. Memristive synaptic plasticity in Pr0.7Ca0.3MnO3 RRAM by bio-mimetic programming. In Proceedings of the 72nd Device Research Conference, Santa Barbara, CA, USA, 22–25 June 2014; pp. 135–136.
10. Diorio, C.; Hasler, P.; Minch, B.A.; Mead, C.A. A floating-gate MOS learning array with locally computed weight updates. IEEE Trans. Electron Devices 1997, 44, 2281–2289.
11. Park, Y.J.; Kwon, H.T.; Kim, B.; Lee, W.J.; Wee, D.H.; Choi, H.S.; Park, B.G.; Lee, J.H.; Kim, Y. 3-D Stacked Synapse Array Based on Charge-Trap Flash Memory for Implementation of Deep Neural Networks. IEEE Trans. Electron Devices 2019, 66, 420–427.
12. Lee, J.; Park, B.G.; Kim, Y. Implementation of Boolean Logic Functions in Charge Trap Flash for In-Memory Computing. IEEE Electron Device Lett. 2019, 40, 1358–1361.
13. Kim, Y.; Kang, M. Down-coupling phenomenon of floating channel in 3D NAND flash memory. IEEE Electron Device Lett. 2016, 37, 1566–1569.
14. Jeong, W.; Im, J.W.; Kim, D.H.; Nam, S.W.; Shim, D.K.; Choi, M.H.; Yoon, H.J.; Kim, D.H.; Kim, Y.S.; Park, H.W.; et al. A 128 Gb 3b/cell V-NAND flash memory with 1 Gb/s I/O rate. IEEE J. Solid-State Circuits 2016, 51, 204–212.
15. Kang, M.; Kim, Y. Natural Local Self-Boosting Effect in 3D NAND Flash Memory. IEEE Electron Device Lett. 2017, 38, 1236–1239.
16. Canziani, A.; Paszke, A.; Culurciello, E. An Analysis of Deep Neural Network Models for Practical Applications. arXiv 2017, arXiv:1605.07678.
17. Sentaurus Device User Guide; Ver. I-2013.12; Synopsys: Mountain View, CA, USA, 2012.
18. Kim, Y.; Seo, J.Y.; Lee, S.H.; Park, B.G. A New Programming Method to Alleviate the Program Speed Variation in Three-Dimensional Stacked Array NAND Flash Memory. JSTS 2014, 5, 566–571.
19. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
20. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
21. Burr, G.W.; Shelby, R.M.; Sidler, S.; Di Nolfo, C.; Jang, J.; Boybat, I.; Shenoy, R.S.; Narayanan, P.; Virwani, K.; Giacometti, E.U.; et al. Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165,000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element. IEEE Trans. Electron Devices 2015, 62, 3498–3507.
22. Kim, H.; Hwang, S.; Park, J.; Park, B.G. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system. Nanotechnology 2017, 28, 405202.
23. Kim, S.; Kim, H.; Hwang, S.; Kim, M.H.; Chang, Y.F.; Park, B.G. Analog Synaptic Behavior of a Silicon Nitride Memristor. ACS Appl. Mater. Interfaces 2017, 9, 40420–40427.
24. Ambrogio, S.; Ciocchini, N.; Laudato, M.; Milo, V.; Pirovano, A.; Fantini, P.; Ielmini, D. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses. Front. Neurosci. 2016, 10, 56.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).