Wednesday, 13 December 2017

Jk paper moving average


Outsourcing gives companies the freedom to hand off non-core, yet essential, parts of their administration to firms that specialize in that area. Here's why you should consider outsourcing:

1. Outsourcing frees up time and resources, letting you focus on your core business.

2. Outsourcing saves money on payroll costs. The expenses for an employed bookkeeper include wages, paid time off, payroll levies, unemployment taxes, workers' compensation premiums and benefits. On top of that, you must provide work space, office furniture, office supplies, software and computers. The average business owner spends five or more hours per week managing bookkeeping staff. By outsourcing your bookkeeping functions you get the services of a professional at a fraction of the cost.

3. Outsourcing your bookkeeping is a much more effective way to organize your finances for the tax man. The Canada Revenue Agency is far more likely to accept the opinion of a reputable bookkeeping service than an in-house assessment. A professional bookkeeping firm can organize records in a way the CRA will understand. In short, bookkeeping professionals speak the CRA's language. Most business owners don't, and they don't have the time to learn. Saving money on taxes and saving time on potential audits is one of the biggest ways to save money as a small business owner. A large majority of small businesses that fail do so under the weight of a tax burden combined with other costs. Outsourced bookkeeping is a true cost reduction for small business.

GHVA presents: Moving Your Business in the Right Direction with Virtual Assistance. You are invited to an informative networking session on how virtual assistants can help take some of the stress and workload you face every day off your hands, cost-effectively. Janet Barclay, Organized Assistant; Laurie Meyer, Successful Office Solutions; Salma Burney, Virtual Girl Friday; Jacquie Manore, Workload Solutions Inc. Key speaker: Mr. Dave Howlett, founder and managing director of RealHumanBeing.org, presenting part of his presentation How to Connect (like a real human being). Mr. Howlett's seminars have left thousands of people inspired and determined to do the right thing for themselves, their companies and their children. He will give a 15-minute portion of his well-known How To Connect presentation. Cost: 20.00 at the door, or pre-register and save 5.00. The price includes parking and catering by Pepperwood.

Good bookkeeping records mean you have a good filing system. Without one, you don't have the other. Keep your bookkeeping up to date. On the sales side, if you don't issue an invoice or a receipt, you don't get paid. Purchases should be entered monthly or quarterly to match your GST reporting period. Don't leave it to once a year just because that happens to be your GST reporting period. There are excellent reasons for keeping your bookkeeping up to date in my article Bookkeeping...Why Bother. When you pay an invoice, record the date and the payment method. Note whether it was paid by cheque or which credit card it was paid on. If it's a partial payment, record the amount and date of each payment. Now the information is right there to enter into your books. It's a simple thing, but that information can come in handy 6 or 12 months down the road. Always get a receipt - cash purchases are hard to claim otherwise, and yes, Tim Hortons will give you a receipt if you ask.
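To make the record-keeping advice above concrete, here is a minimal, hypothetical sketch of the kind of detail worth capturing when you pay an invoice. The field names and figures are invented for illustration only; they are not taken from any particular bookkeeping software.

```python
# Hypothetical payment log entry - every value here is made up for illustration.
payment_record = {
    "invoice_number": "2008-0142",      # assumed numbering scheme
    "date_paid": "2008-06-15",
    "method": "credit card (Visa)",     # or the cheque number if paid by cheque
    "amount_paid": 250.00,
    "partial_payment": False,           # if True, also log each partial amount and date
}
print(payment_record)
```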
If receipts are so faded or crumpled that they are illegible - guess what - they don't go into the books. Credit card statements are not always sufficient proof. An item bought at Wal-Mart could be anything, and the fact that you paid for it with your business card does not show that it is a business deduction. Make detailed deposit slips and keep a copy. Last I checked, banks still hand out free deposit books. Or buy a simple notebook. Keeping detailed records of every deposit helps match the customer payment to the deposit on the bank statement. Use a calendar to remind yourself of due dates if you're tracking any of the following taxes - PST, GST, payroll, WCB, quarterly income tax. Making payments on time will keep you out of tax debt with the Canada Revenue Agency. Read more about this in my article How Did I Get So Deep into Tax Debt. Smart business people know that time is money, and they plan ahead. Organized records make life much easier for your bookkeeper, whether that person is you or someone you pay. If you ever have dealings with Revenue Canada, the business with organized records will have a much easier time than the one without. Under section 230 of the Income Tax Act, every person carrying on business in Canada, and everyone who is required to pay or collect taxes, must keep books and records at their place of business or residence, in Canada, in a format that enables the assessment and payment of taxes. Most people in business are aware that there is a proper way to keep books. For those who are not, it is important to realize that Revenue Canada has the authority to require you to keep proper books. Good bookkeeping records mean you have a good filing system. Without one, you don't have the other. Set up a filing system you can follow, and use it. This is probably the first and most important step in keeping good records. Simple filing systems are easy to set up and maintain.

GST quarterly filers: your GST return for April/May/June 2008 is due July 31, 2008. How do I know if I'm a quarterly filer? Pull out your GST form, titled "Goods and Services Tax/Harmonized Sales Tax (GST/HST) Return for Registrants". The key pieces of information you need are the three boxes at the top right of page 1. The first box shows the due date of your payment, the second box shows your account number, and the third box shows the reporting period. Or you may be an annual filer. The reporting period box gives you the date range of your filing period. How much do I need to pay? Organize your sales receipts to calculate the GST collected on sales. As of January 1, 2008, the GST rate is 5%. Collect and organize your business expenses to calculate the GST paid on purchases. Subtract the GST on purchases from the GST on sales and remit the difference to the Receiver General. (I'm assuming sales were greater than purchases.) If your GST on purchases is greater than your GST on sales, you may get a refund, but it all depends. There are always exceptions to the rule. These days there are many ways to make your payment. You can: - mail a cheque - visit your local bank - use online banking - GST Netfile cra-arc.gc.ca/menu-e.html - GST Telefile cra-arc.gc.ca/menu-e.html. Send your payment in on time.
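As a rough illustration of the netting just described, here is a minimal sketch with made-up figures; the sales and purchase amounts are hypothetical, and the 5% rate is the one quoted above.

```python
# Hypothetical quarterly GST calculation - all figures are invented for illustration.
GST_RATE = 0.05          # 5% as of January 1, 2008

sales = 20_000.00        # taxable sales for the quarter
purchases = 8_000.00     # taxable business purchases for the quarter

gst_collected = sales * GST_RATE     # GST charged to customers
gst_paid = purchases * GST_RATE      # GST paid on purchases

amount_owing = gst_collected - gst_paid
print(f"Remit to the Receiver General: {amount_owing:.2f}")   # 600.00 in this example
```

A negative result would suggest a refund position, though as noted above, whether you actually receive one depends on your situation.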
The Receiver General is very unforgiving of late payments and will apply penalties and interest charges that compound daily. Click this link to the Canada Revenue Agency website for everything you ever wanted to know about GST: cra-arc.gc.ca/tax/business/topics/gst/menu-e.html

Raise your hand if you've started on your 2008 bookkeeping. Excellent! And the rest of you? What are you waiting for? Why wait until April 30 to see the results of a year's work? By starting now you can produce a profit and loss statement that shows whether you've made or lost money and how you spent it. That report is a great piece of information that can help you more now than later. Do you use a bookkeeper's services, or do you do it yourself? We wear many hats when trying to run our businesses, and maybe we have too many. If you are struggling with the bookkeeping, and I know it is not a pleasant task, perhaps you should consider getting some help. Most professional bookkeepers will offer to take on the work, train you in the use of the software, or help figure out which expense category to use.

Excerpt from a home-based business article: Don't overlook management/bookkeeping. Lack of management skill is one of the single biggest causes of business failure. Take courses, seek expert advice or hire help, but learn the basics before you start. canadabusiness.caservletContentServerpagenameCBSCFEdisplayampcGuideFactSheetampcid1081945277281en

Naturally, you need some kind of system for recording everything. This can be a bookkeeping program, a spreadsheet, or paper-based. In the comments, let me know what kind of system you use for your bookkeeping. I'd really like to know. In a future article I'll post my results along with information about the various systems.

The Canadian Bookkeepers Association (CBA) is a national, non-profit organization dedicated to the development of professional bookkeepers. Membership in the CBA gives bookkeepers the resources to succeed in an ever-changing environment. Our association builds excellence through knowledge and is growing quickly, representing a comprehensive financial management approach for businesses of all sizes. Our membership grows every day and represents bookkeepers in the majority of Canada's provinces and territories. Our MISSION includes: To promote, support, uplift and encourage Canadian bookkeepers. To promote and raise awareness of bookkeeping in Canada as a professional discipline. To support national, regional and local networking among Canadian bookkeepers. To provide information on advanced practices, training and technology that improve the industry, as well as the Canadian bookkeeping profession. To support and encourage responsible and accurate bookkeeping across Canada. We are committed to growth that benefits our members and bookkeeping in Canada as a professional discipline. Our goals include advances in distance education, certification of bookkeepers, and regional chapters. We appreciate suggestions that improve the website and the association. We listen to and value your input. We are working toward a designation for bookkeepers in Canada. The designation will be "Certified Professional Bookkeeper". The Canadian Bookkeepers Association was formerly known as the Canadian Bookkeepers Alliance. The CBA began accepting members in early 2003. On February 9, 2004, the Canadian Bookkeepers Association was incorporated as a non-profit association.
Membership growth has exceeded what was originally expected. We are proud of the association's growth. We have grown with every milestone into the national non-profit organization we are today, with members in nearly every province and territory.

In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation. The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks. This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding.
At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail. With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning.

Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context.

Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k^{\rm th}$ neuron in the $(l-1)^{\rm th}$ layer to the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. So, for example, the diagram below shows the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network. This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural. One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below. We use a similar notation for the network's biases and activations. Explicitly, we use $b^l_j$ for the bias of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. And we use $a^l_j$ for the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. The following diagram shows examples of these notations in use. With these notations, the activation $a^l_j$ of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer is related to the activations in the $(l-1)^{\rm th}$ layer by the equation (compare Equation (4) and the surrounding discussion in the last chapter)

$$a^l_j = \sigma\left( \sum_k w^l_{jk} a^{l-1}_k + b^l_j \right), \tag{23}$$

where the sum is over all neurons $k$ in the $(l-1)^{\rm th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{\rm th}$ layer of neurons, that is, the entry in the $j^{\rm th}$ row and $k^{\rm th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{\rm th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$.
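As an aside, here is a minimal sketch of how the weight matrices $w^l$ and bias vectors $b^l$ might be laid out as NumPy arrays; the layer sizes are hypothetical and chosen only to illustrate the indexing convention, not to reproduce the book's own code.

```python
import numpy as np

# Hypothetical network with layer sizes 4, 3, 2 (input, hidden, output).
sizes = [4, 3, 2]

# weights[i] has shape (size of the next layer, size of the current layer),
# so weights[i][j, k] plays the role of w^l_{jk} in the text: the weight from
# neuron k in one layer to neuron j in the next layer.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]

print([w.shape for w in weights])   # [(3, 4), (2, 3)]
print([b.shape for b in biases])    # [(3, 1), (2, 1)]
```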
The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function. That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$. As an example, if we have the function $f(x) = x^2$, then the vectorized form of $f$ has the effect

$$f\left( \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] \right) = \left[ \begin{array}{c} f(2) \\ f(3) \end{array} \right] = \left[ \begin{array}{c} 4 \\ 9 \end{array} \right], \tag{24}$$

that is, the vectorized $f$ just squares every element of the vector. With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form

$$a^l = \sigma(w^l a^{l-1} + b^l). \tag{25}$$

This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. By the way, it's this expression that motivates the quirk in the notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". The global view is often easier and more succinct (and involves fewer indices) than the neuron-by-neuron view we've taken up to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network.

When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$. We'll make considerable use of the weighted input $z^l$ later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$. It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$.

The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (cf. Equation (6)).
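As a quick aside, here is a minimal NumPy sketch of the feedforward rule (25), reusing the hypothetical `weights` and `biases` lists from the earlier sketch; it is an illustration of the equation, not the book's own implementation.

```python
import numpy as np

def sigmoid(z):
    # Elementwise (vectorized) sigmoid, as in the text's sigma(v).
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a, weights, biases):
    # Repeatedly apply a^l = sigmoid(w^l a^{l-1} + b^l), layer by layer.
    for w, b in zip(weights, biases):
        z = w @ a + b        # the weighted input z^l
        a = sigmoid(z)       # the activation a^l
    return a

# Example with the hypothetical 4-3-2 network from the previous sketch:
# x = np.ones((4, 1)); print(feedforward(x, weights, biases))
```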
In the notation of the last section, the quadratic cost has the form

$$C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2, \tag{26}$$

where: $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input.

Okay, so what assumptions do we need to make about our cost function, $C$, in order for backpropagation to be applied? The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \| y - a^L \|^2$. This assumption will also hold true for all the other cost functions we'll meet in this book. The reason we need this assumption is because what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit.

The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as

$$C = \frac{1}{2} \| y - a^L \|^2 = \frac{1}{2} \sum_j (y_j - a^L_j)^2, \tag{27}$$

and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e. it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function.

The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension. Then we use $s \odot t$ to denote the elementwise product of the two vectors. Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,

$$\left[ \begin{array}{c} 1 \\ 2 \end{array} \right] \odot \left[ \begin{array}{c} 3 \\ 4 \end{array} \right] = \left[ \begin{array}{c} 1 \cdot 3 \\ 2 \cdot 4 \end{array} \right] = \left[ \begin{array}{c} 3 \\ 8 \end{array} \right]. \tag{28}$$

This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.

Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.
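As a small aside, the Hadamard product introduced above is just elementwise multiplication in NumPy; the following one-off sketch uses the numbers from Equation (28) purely as an illustration.

```python
import numpy as np

s = np.array([1, 2])
t = np.array([3, 4])
print(s * t)   # elementwise (Hadamard) product: [3 8]
```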
But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and we will then relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

To understand how the error is defined, imagine there is a demon in our neural network. The demon sits at the $j^{\rm th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron outputs $\sigma(z^l_j + \Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$. Now, this demon is a good demon, and is trying to help you improve the cost, i.e. it's trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. This is only the case for small changes $\Delta z^l_j$, of course. We'll assume that the demon is constrained to make such small changes. And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron.

Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by

$$\delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29}$$

As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. You might wonder why the demon is changing the weighted input $z^l_j$. Surely it would be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below, but it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. E.g. if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.

Plan of attack: Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations.
Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding, those equations will come to seem comfortable and, perhaps, even beautiful and natural.

An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by

$$\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1}$$

This is a very natural expression. The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$. Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function, then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable.

Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as

$$\delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a}$$

Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes

$$\delta^L = (a^L - y) \odot \sigma'(z^L). \tag{30}$$

As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.

An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular,

$$\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2}$$

where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot\, \sigma'(z^l)$.
This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$. By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.

An equation for the rate of change of the cost with respect to any bias in the network: In particular:

$$\frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3}$$

That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as

$$\frac{\partial C}{\partial b} = \delta, \tag{31}$$

where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular:

$$\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4}$$

This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as

$$\frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}, \tag{32}$$

where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$, and the two neurons connected by that weight, we can depict this as:

A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly.

There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately 0 or 1. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons. We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation.
And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.) Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e. is either high- or low-activation.

None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.

Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as

$$\delta^L = \Sigma'(z^L) \nabla_a C, \tag{33}$$

where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as

$$\delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}. \tag{34}$$

(3) By combining observations (1) and (2) show that

$$\delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^L)^T \Sigma'(z^L) \nabla_a C. \tag{35}$$

For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I've focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically.

We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on. Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition

$$\delta^L_j = \frac{\partial C}{\partial z^L_j}. \tag{36}$$
Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,

$$\delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}, \tag{37}$$

where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k^{\rm th}$ neuron depends only on the weighted input $z^L_j$ for the $j^{\rm th}$ neuron when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to

$$\delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}. \tag{38}$$

Recalling that $a^L_j = \sigma(z^L_j)$, the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes

$$\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j), \tag{39}$$

which is just (BP1), in component form.

Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,

$$\delta^l_j = \frac{\partial C}{\partial z^l_j} \tag{40}$$
$$= \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} \tag{41}$$
$$= \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k, \tag{42}$$

where in the last line we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term on the last line, note that

$$z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k. \tag{43}$$

Differentiating, we obtain

$$\frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44}$$

Substituting back into (42) we obtain

$$\delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j). \tag{45}$$

This is just (BP2) written in component form.

The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise. That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus. That's all there really is to backpropagation - the rest is details.

The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm:

1. Input $x$: Set the corresponding activation $a^1$ for the input layer.
2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$.
3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$.
5. Output: The gradient of the cost function is given by $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$ and $\frac{\partial C}{\partial b^l_j} = \delta^l_j$.

Examining the algorithm you can see why it's called backpropagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions.
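The following is a minimal NumPy sketch of the algorithm just listed, for a single training example and the quadratic cost. It mirrors the steps above but is an illustrative reimplementation, not the book's network.py code; `weights` and `biases` are assumed to be lists of per-layer arrays, as in the earlier sketches.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

def backprop(x, y, weights, biases):
    """Return (nabla_b, nabla_w): gradients of the quadratic cost for one example (x, y)."""
    # Feedforward, storing all activations a^l and weighted inputs z^l.
    activation = x
    activations = [x]
    zs = []
    for w, b in zip(weights, biases):
        z = w @ activation + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # Output error (BP1), using nabla_a C = (a^L - y) for the quadratic cost.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    nabla_b[-1] = delta                              # (BP3)
    nabla_w[-1] = delta @ activations[-2].T          # (BP4)
    # Backpropagate the error (BP2), reading off the gradients layer by layer.
    for l in range(2, len(weights) + 1):
        z = zs[-l]
        delta = (weights[-l + 1].T @ delta) * sigmoid_prime(z)
        nabla_b[-l] = delta
        nabla_w[-l] = delta @ activations[-l - 1].T
    return nabla_b, nabla_w
```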
Backpropagation with a single modified neuron: Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case?

Backpropagation with linear neurons: Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case.

As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$. In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch:

1. Input a set of training examples.
2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps:
   - Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$.
   - Output error $\delta^{x,L}$: Compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$.
   - Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$.
3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$.

Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity.

Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely. There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e.g. l[-3] is the third-last entry in a list l. The code for backprop is below, together with a few helper functions, which are used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function. With these inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code.

Fully matrix-based approach to backpropagation over a mini-batch: Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously.
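Before turning to that matrix-based variant, here is a minimal sketch of the looping mini-batch update described above, in the style of the per-example `backprop` sketch given earlier. It illustrates the update rules for $w^l$ and $b^l$ and is not the book's Network.update_mini_batch method itself; `mini_batch`, `weights`, `biases`, and `eta` are assumed inputs.

```python
import numpy as np

def update_mini_batch(mini_batch, weights, biases, eta):
    """Apply one gradient descent step using backprop on each example.
    mini_batch is a list of (x, y) pairs; eta is the learning rate."""
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = backprop(x, y, weights, biases)
        nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    m = len(mini_batch)
    # w^l -> w^l - (eta/m) * sum_x delta^{x,l} (a^{x,l-1})^T, and similarly for b^l.
    weights = [w - (eta / m) * nw for w, nw in zip(weights, nabla_w)]
    biases = [b - (eta / m) * nb for b, nb in zip(biases, nabla_b)]
    return weights, biases
```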
The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1 \, x_2 \, \ldots \, x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant.

In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach. You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation

$$\frac{\partial C}{\partial w_j} \approx \frac{C(w + \epsilon e_j) - C(w)}{\epsilon}, \tag{46}$$

where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j^{\rm th}$ direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases.

This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient! Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w + \epsilon e_j)$ in order to compute $\partial C / \partial w_j$. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network.

What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. This should be plausible, but it requires some analysis to make a careful statement.
It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost. And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster.

This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks. Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e. networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks.

As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries.

To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation

$$\Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{47}$$

This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$. If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$. Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. This change is given by

$$\Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{48}$$

The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e. the $(l+1)^{\rm th}$ layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:

$$\Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j. \tag{49}$$
Substituting in the expression from Equation (48), we get:

$$\Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{50}$$

Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is

$$\Delta C \approx \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{51}$$

that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C / \partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network. Of course, there are many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.

$$\Delta C \approx \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{52}$$

where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that

$$\frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp\ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}}. \tag{53}$$

Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor, which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$. The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure is illustrated here, for a single path:

What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factor for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost. Now, I'm not going to work through all this here.
It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing.

What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. So you repeat again. The result after a few iterations is the proof we saw earlier - short, but somewhat obscure, because all the signposts to its construction have been removed. There is one clever step required. In Equation (53) the intermediate variables are activations like $a^{l+1}_q$. The clever idea is to switch to using weighted inputs, like $z^{l+1}_q$, as the intermediate variables. If you don't have this idea, and instead continue using the activations $a^{l+1}_q$, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter. I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. It's just a lot of hard work simplifying the proof I've sketched in this section.

In academic work, please cite this book as: Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means you're free to copy, share, and build on this book, but not to sell it. If you're interested in commercial use, please contact me.
In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation.

The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks.

This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation.
And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail. With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning.

Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context.

Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k$-th neuron in the $(l-1)$-th layer to the $j$-th neuron in the $l$-th layer. So, for example, $w^3_{24}$ denotes the weight on the connection from the fourth neuron in the second layer to the second neuron in the third layer of a network. This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural. One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below.

We use a similar notation for the network's biases and activations. Explicitly, we use $b^l_j$ for the bias of the $j$-th neuron in the $l$-th layer. And we use $a^l_j$ for the activation of the $j$-th neuron in the $l$-th layer. With these notations, the activation $a^l_j$ of the $j$-th neuron in the $l$-th layer is related to the activations in the $(l-1)$-th layer by the equation (compare Equation (4) and the surrounding discussion in the last chapter)
\begin{eqnarray}
  a^{l}_j = \sigma\left( \sum_k w^{l}_{jk} a^{l-1}_k + b^l_j \right), \tag{23}
\end{eqnarray}
where the sum is over all neurons $k$ in the $(l-1)$-th layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l$-th layer of neurons, that is, the entry in the $j$-th row and $k$-th column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l$-th layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$.

The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function. That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$.
As an example, if we have the function $f(x) = x^2$ then the vectorized form of $f$ has the effect
\begin{eqnarray}
  f\left( \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] \right)
  = \left[ \begin{array}{c} f(2) \\ f(3) \end{array} \right]
  = \left[ \begin{array}{c} 4 \\ 9 \end{array} \right], \tag{24}
\end{eqnarray}
that is, the vectorized $f$ just squares every element of the vector.

With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form
\begin{eqnarray}
  a^{l} = \sigma(w^l a^{l-1} + b^l). \tag{25}
\end{eqnarray}
This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. By the way, it's this expression that motivates the quirk in the $w^l_{jk}$ notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". That global view is often easier and more succinct (and involves fewer indices) than the neuron-by-neuron view we've taken up to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network.

When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$. We'll make considerable use of the weighted input $z^l$ later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$. It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$.
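To make this concrete in code, here is a minimal NumPy sketch of the matrix-based forward pass, in Python since that is the language of the book's network.py. The array names (sizes, weights, biases, and so on) and the tiny 3-4-2 network are illustrative choices for this sketch, not code from the book; weights[i] is laid out so that its (j, k) entry plays the role of $w^l_{jk}$, and the function records the weighted inputs $z^l$ since they reappear throughout the chapter.

```python
import numpy as np

def sigmoid(z):
    """The sigmoid function, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

# A toy network with layer sizes 3 -> 4 -> 2.  weights[i] has shape
# (neurons in layer i+1, neurons in layer i), so weights[i][j, k] is
# w^l_{jk}: from neuron k in layer l-1 to neuron j in layer l.
rng = np.random.default_rng(0)
sizes = [3, 4, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def feedforward(a, weights, biases):
    """Apply a^l = sigma(w^l a^{l-1} + b^l) layer by layer, returning the
    output plus the weighted inputs z^l and activations a^l for every layer."""
    zs, activations = [], [a]
    for w, b in zip(weights, biases):
        z = w @ a + b              # the weighted input z^l
        a = sigmoid(z)             # vectorized sigma
        zs.append(z)
        activations.append(a)
    return a, zs, activations

a1 = np.array([0.5, 0.1, -0.3])    # activations of the input layer
output, zs, activations = feedforward(a1, weights, biases)
print(output)                      # a vector of 2 output activations
```

Note how the (j, k) ordering of the indices is exactly what lets `w @ a` read as "apply the weight matrix to the activations", with no transpose needed.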
The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (c.f. Equation (6)). In the notation of the last section, the quadratic cost has the form
\begin{eqnarray}
  C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2, \tag{26}
\end{eqnarray}
where $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input.

Okay, so what assumptions do we need to make about our cost function, $C$, in order that backpropagation can be applied? The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \|y - a^L\|^2$. This assumption will also hold true for all the other cost functions we'll meet in this book.

The reason we need this assumption is because what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit.

The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as
\begin{eqnarray}
  C = \frac{1}{2} \|y - a^L\|^2 = \frac{1}{2} \sum_j (y_j - a^L_j)^2, \tag{27}
\end{eqnarray}
and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e. it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function.

The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension. Then we use $s \odot t$ to denote the elementwise product of the two vectors. Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,
\begin{eqnarray}
  \left[ \begin{array}{c} 1 \\ 2 \end{array} \right]
  \odot
  \left[ \begin{array}{c} 3 \\ 4 \end{array} \right]
  = \left[ \begin{array}{c} 1 \cdot 3 \\ 2 \cdot 4 \end{array} \right]
  = \left[ \begin{array}{c} 3 \\ 8 \end{array} \right]. \tag{28}
\end{eqnarray}
This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.
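As a quick aside, in NumPy (continuing the toy sketch above) the Hadamard product is just elementwise `*`, and the per-example quadratic cost is one line; the target vector `y` below is an arbitrary illustrative choice, not anything from the book.

```python
s = np.array([1.0, 2.0])
t = np.array([3.0, 4.0])
print(s * t)                       # Hadamard product: [3. 8.], as in Equation (28)

def quadratic_cost(a_L, y):
    """Per-example quadratic cost C_x = 0.5 * ||y - a^L||^2."""
    return 0.5 * np.linalg.norm(y - a_L) ** 2

y = np.array([0.0, 1.0])           # a made-up desired output for the toy network
print(quadratic_cost(output, y))   # cost of the toy network's output from above
```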
Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j$-th neuron in the $l$-th layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and then will relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

To understand how the error is defined, imagine there is a demon in our neural network. The demon sits at the $j$-th neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron instead outputs $\sigma(z^l_j + \Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$.

Now, this demon is a good demon, and is trying to help you improve the cost, i.e. they're trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. This is only the case for small changes $\Delta z^l_j$, of course; we'll assume that the demon is constrained to make such small changes. And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron.

Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by
\begin{eqnarray}
  \delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29}
\end{eqnarray}
As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

You might wonder why the demon is changing the weighted input $z^l_j$. Surely it'd be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below. But it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. (In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. For example, if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.)

Plan of attack: Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations.

Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding those equations will come to seem comfortable and, perhaps, even beautiful and natural.

An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by
\begin{eqnarray}
  \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1}
\end{eqnarray}
This is a very natural expression.
The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j$-th output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$.

Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable.

Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as
\begin{eqnarray}
  \delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a}
\end{eqnarray}
Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes
\begin{eqnarray}
  \delta^L = (a^L - y) \odot \sigma'(z^L). \tag{30}
\end{eqnarray}
As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.

An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular
\begin{eqnarray}
  \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2}
\end{eqnarray}
where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)$-th layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)$-th layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l$-th layer. We then take the Hadamard product $\odot \, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$.

By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.
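Continuing the NumPy sketch from earlier, and assuming the quadratic cost so that $\nabla_a C = a^L - y$, (BP1a) and (BP2) are each a single line; `sigmoid_prime` is a helper defined here for the sketch, not taken verbatim from the book's code.

```python
def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z) * (1.0 - sigmoid(z))

# (BP1a): error in the output layer, delta^L = (a^L - y) (Hadamard) sigma'(z^L)
delta_L = (activations[-1] - y) * sigmoid_prime(zs[-1])

# (BP2): push the error back one layer,
# delta^l = ((w^{l+1})^T delta^{l+1}) (Hadamard) sigma'(z^l)
delta_hidden = (weights[-1].T @ delta_L) * sigmoid_prime(zs[-2])

print(delta_L.shape, delta_hidden.shape)   # (2,) and (4,) for the toy 3-4-2 network
```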
An equation for the rate of change of the cost with respect to any bias in the network: In particular:
\begin{eqnarray}
  \frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3}
\end{eqnarray}
That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as
\begin{eqnarray}
  \frac{\partial C}{\partial b} = \delta, \tag{31}
\end{eqnarray}
where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular:
\begin{eqnarray}
  \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4}
\end{eqnarray}
This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as
\begin{eqnarray}
  \frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}, \tag{32}
\end{eqnarray}
where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$ and the two neurons connected by that weight, we can picture the input activation $a_{\rm in}$ feeding through $w$ into the neuron with error $\delta_{\rm out}$.

A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly.
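In the running NumPy sketch, (BP3) and (BP4) turn the two error vectors computed above into gradients: the bias gradients are the errors themselves, and each weight gradient is an outer product of a layer's error with the previous layer's activations (the variable names here are, again, just for this sketch).

```python
# (BP3): dC/db^l_j = delta^l_j
nabla_b_out = delta_L                 # bias gradient for the output layer
nabla_b_hid = delta_hidden            # bias gradient for the hidden layer

# (BP4): dC/dw^l_{jk} = a^{l-1}_k * delta^l_j, i.e. an outer product
nabla_w_out = np.outer(delta_L, activations[-2])       # shape (2, 4), like weights[1]
nabla_w_hid = np.outer(delta_hidden, activations[-3])  # shape (4, 3), like weights[0]
```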
There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately $0$ or $1$. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons.

We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.)

Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e. is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns.

Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.

Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as
\begin{eqnarray}
  \delta^L = \Sigma'(z^L) \nabla_a C, \tag{33}
\end{eqnarray}
where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as
\begin{eqnarray}
  \delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}. \tag{34}
\end{eqnarray}
(3) By combining observations (1) and (2) show that
\begin{eqnarray}
  \delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^L)^T \Sigma'(z^L) \nabla_a C. \tag{35}
\end{eqnarray}
For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I've focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically.
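If you want to check the alternate form numerically, here is a tiny continuation of the running sketch: build the diagonal matrix $\Sigma'(z)$ with np.diag and confirm that (33) and (34) agree with the Hadamard-product versions computed earlier (the helper name Sigma_prime is invented for this sketch).

```python
def Sigma_prime(z):
    """Square matrix with sigma'(z_j) on the diagonal, zeros elsewhere."""
    return np.diag(sigmoid_prime(z))

nabla_a_C = activations[-1] - y                       # grad of the quadratic cost w.r.t. a^L
delta_L_alt = Sigma_prime(zs[-1]) @ nabla_a_C                          # Equation (33)
delta_hidden_alt = Sigma_prime(zs[-2]) @ weights[-1].T @ delta_L_alt   # Equation (34)

print(np.allclose(delta_L_alt, delta_L))              # True
print(np.allclose(delta_hidden_alt, delta_hidden))    # True
```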
We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on.

Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition
\begin{eqnarray}
  \delta^L_j = \frac{\partial C}{\partial z^L_j}. \tag{36}
\end{eqnarray}
Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,
\begin{eqnarray}
  \delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}, \tag{37}
\end{eqnarray}
where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k$-th neuron depends on the weighted input $z^L_j$ for the $j$-th neuron only when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to
\begin{eqnarray}
  \delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}. \tag{38}
\end{eqnarray}
Recalling that $a^L_j = \sigma(z^L_j)$, the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes
\begin{eqnarray}
  \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j), \tag{39}
\end{eqnarray}
which is just (BP1), in component form.

Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,
\begin{eqnarray}
  \delta^l_j & = & \frac{\partial C}{\partial z^l_j} \tag{40} \\
  & = & \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} \tag{41} \\
  & = & \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k, \tag{42}
\end{eqnarray}
where in the last line we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term on the last line, note that
\begin{eqnarray}
  z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k. \tag{43}
\end{eqnarray}
Differentiating, we obtain
\begin{eqnarray}
  \frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44}
\end{eqnarray}
Substituting back into (42) we obtain
\begin{eqnarray}
  \delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j). \tag{45}
\end{eqnarray}
This is just (BP2) written in component form.

The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise.

That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus. That's all there really is to backpropagation - the rest is details.

The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm:

1. Input $x$: Set the corresponding activation $a^1$ for the input layer.
2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$.
3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$.
5. Output: The gradient of the cost function is given by $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$ and $\frac{\partial C}{\partial b^l_j} = \delta^l_j$.

Examining the algorithm you can see why it's called backpropagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions. A minimal sketch of this algorithm in code follows.
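Here is that sketch, continuing the running NumPy example. It reuses sigmoid, sigmoid_prime, weights, biases, a1, and y from the earlier snippets; the function name backprop_single and the quadratic-cost assumption are choices made for this sketch, not the book's own Network.backprop method.

```python
def backprop_single(x, y, weights, biases):
    """Return (nabla_b, nabla_w), the gradients of C_x for one training
    example, following the five steps listed above."""
    # 1. Input: set the activation for the input layer.
    activation = x
    activations = [x]          # list of a^l, layer by layer
    zs = []                    # list of weighted inputs z^l
    # 2. Feedforward.
    for w, b in zip(weights, biases):
        z = w @ activation + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # 3. Output error (BP1a), for the quadratic cost: nabla_a C = a^L - y.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    nabla_b[-1] = delta                              # (BP3)
    nabla_w[-1] = np.outer(delta, activations[-2])   # (BP4)
    # 4./5. Backpropagate the error (BP2), reading off the gradients as we go.
    # Negative list indices count back from the end, as described in the text.
    for l in range(2, len(weights) + 1):
        delta = (weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
        nabla_b[-l] = delta
        nabla_w[-l] = np.outer(delta, activations[-l - 1])
    return nabla_b, nabla_w

nb, nw = backprop_single(a1, y, weights, biases)
print(np.allclose(nw[-1], nabla_w_out), np.allclose(nb[-2], nabla_b_hid))  # True True
```

On the toy network this reproduces exactly the gradients assembled by hand in the earlier snippets.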
Exercise: Backpropagation with a single modified neuron. Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case?

Exercise: Backpropagation with linear neurons. Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case.

As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$. In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch:

1. Input a set of training examples.
2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps:
   - Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$.
   - Output error $\delta^{x,L}$: Compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$.
   - Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$.
3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$.

Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity.

Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely. There is one small change - it uses a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so that, e.g., l[-3] is the third-last entry in a list l. The backprop method relies on a few helper functions, used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function; with those inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code.
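Since the full listing isn't reproduced here, the following standalone sketch shows the same update rule using the backprop_single function defined earlier in place of the book's Network.backprop method; eta is the learning rate, and the two-example mini_batch is made up purely for illustration.

```python
def update_mini_batch(mini_batch, weights, biases, eta):
    """One gradient-descent step: average the backpropagated gradients
    over a mini-batch of (x, y) pairs, then step the parameters."""
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    for x, y_x in mini_batch:
        delta_nabla_b, delta_nabla_w = backprop_single(x, y_x, weights, biases)
        nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw_ + dnw for nw_, dnw in zip(nabla_w, delta_nabla_w)]
    m = len(mini_batch)
    new_weights = [w - (eta / m) * nw_ for w, nw_ in zip(weights, nabla_w)]
    new_biases = [b - (eta / m) * nb for b, nb in zip(biases, nabla_b)]
    return new_weights, new_biases

mini_batch = [(a1, y), (np.array([0.2, -0.1, 0.4]), np.array([1.0, 0.0]))]
new_weights, new_biases = update_mini_batch(mini_batch, weights, biases, eta=3.0)
```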
Problem: Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1 \, x_2 \, \ldots \, x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant.

In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach.

You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation
\begin{eqnarray}
  \frac{\partial C}{\partial w_j} \approx \frac{C(w + \epsilon e_j) - C(w)}{\epsilon}, \tag{46}
\end{eqnarray}
where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j$-th direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases.

This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient!

Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w + \epsilon e_j)$ in order to compute $\partial C / \partial w_j$. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network.

What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. (This should be plausible, but it requires some analysis to make a careful statement. It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost.) And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster.

This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks.
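As an aside, the slow approach of Equation (46) is still handy as a correctness check on a backpropagation implementation. Here is a sketch, continuing the running example, that estimates every weight gradient numerically (counting the forward passes as it goes) and compares the result with the gradients backprop_single produced earlier; the helper names are again specific to this sketch.

```python
def total_cost(weights, biases, x, y):
    """Quadratic cost of one example, via a plain forward pass."""
    a = x
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return 0.5 * np.linalg.norm(y - a) ** 2

def numerical_gradient(weights, biases, x, y, eps=1e-6):
    """Estimate dC/dw for every weight using Equation (46):
    one forward pass per weight, plus one for the base cost C(w)."""
    base = total_cost(weights, biases, x, y)
    grads, passes = [], 1
    for w in weights:
        g = np.zeros_like(w)
        for idx in np.ndindex(*w.shape):
            w[idx] += eps                  # nudge one weight ...
            g[idx] = (total_cost(weights, biases, x, y) - base) / eps
            w[idx] -= eps                  # ... and put it back
            passes += 1
        grads.append(g)
    return grads, passes

num_grads, passes = numerical_gradient(weights, biases, a1, y)
print(passes)                              # 21 passes for the toy net's 20 weights
print(all(np.allclose(g, h, atol=1e-4) for g, h in zip(num_grads, nw)))  # True
```

For the toy network this is only 21 forward passes, but the same bookkeeping applied to a million-weight network is exactly the "million and one passes" described above.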
Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e. networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks.

As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries.

To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation
\begin{eqnarray}
  \Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{47}
\end{eqnarray}
This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$. If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$.

Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j$-th neuron in the $l$-th layer. This change is given by
\begin{eqnarray}
  \Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{48}
\end{eqnarray}
The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e. the $(l+1)$-th layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:
\begin{eqnarray}
  \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j. \tag{49}
\end{eqnarray}
Substituting in the expression from Equation (48), we get:
\begin{eqnarray}
  \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{50}
\end{eqnarray}
Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is
\begin{eqnarray}
  \Delta C \approx \frac{\partial C}{\partial a^L_m}
  \frac{\partial a^L_m}{\partial a^{L-1}_n}
  \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots
  \frac{\partial a^{l+1}_q}{\partial a^l_j}
  \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{51}
\end{eqnarray}
that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C / \partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network.
Of course, there are many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.
\begin{eqnarray}
  \Delta C \approx \sum_{mnp\ldots q}
  \frac{\partial C}{\partial a^L_m}
  \frac{\partial a^L_m}{\partial a^{L-1}_n}
  \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots
  \frac{\partial a^{l+1}_q}{\partial a^l_j}
  \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{52}
\end{eqnarray}
where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that
\begin{eqnarray}
  \frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp\ldots q}
  \frac{\partial C}{\partial a^L_m}
  \frac{\partial a^L_m}{\partial a^{L-1}_n}
  \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots
  \frac{\partial a^{l+1}_q}{\partial a^l_j}
  \frac{\partial a^l_j}{\partial w^l_{jk}}. \tag{53}
\end{eqnarray}
Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$. The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure can be traced explicitly for a single path through the network.
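In fact, Equation (53) is small enough to check directly on a tiny network. The sketch below (reusing sigmoid and sigmoid_prime from earlier; everything else, including the made-up 2-3-3-2 network, is specific to this sketch) enumerates every path from one chosen weight to the quadratic cost, multiplies the rate factors along each path, sums the results, and compares the total with a finite-difference estimate in the style of Equation (46).

```python
# A fresh toy network with two hidden layers: sizes 2 -> 3 -> 3 -> 2.
rng2 = np.random.default_rng(1)
sizes2 = [2, 3, 3, 2]
W = [rng2.standard_normal((m, n)) for n, m in zip(sizes2[:-1], sizes2[1:])]
B = [rng2.standard_normal(m) for m in sizes2[1:]]
x2 = np.array([0.3, -0.2])
y2 = np.array([1.0, 0.0])

def forward_all(W, B, x):
    """Forward pass that keeps every z^l and a^l."""
    zs, a_list, a = [], [x], x
    for w, b in zip(W, B):
        z = w @ a + b
        a = sigmoid(z)
        zs.append(z)
        a_list.append(a)
    return zs, a_list

zs2, a_list = forward_all(W, B, x2)
j, k = 1, 0                                   # study the weight w^2_{jk} = W[0][j, k]

# Rate factor from the weight to its own neuron: da^2_j/dw^2_{jk} = a^1_k sigma'(z^2_j).
start = a_list[0][k] * sigmoid_prime(zs2[0])[j]

# Sum over every path (q in the second hidden layer, m in the output layer)
# of the product of rate factors along the path - Equation (53).
path_sum = 0.0
for q in range(sizes2[2]):
    for m in range(sizes2[3]):
        da3q_da2j = W[1][q, j] * sigmoid_prime(zs2[1])[q]   # edge a^2_j -> a^3_q
        da4m_da3q = W[2][m, q] * sigmoid_prime(zs2[2])[m]   # edge a^3_q -> a^4_m
        dC_da4m = a_list[3][m] - y2[m]                      # quadratic cost at the output
        path_sum += dC_da4m * da4m_da3q * da3q_da2j * start

# Compare with a finite-difference estimate of dC/dw^2_{jk}, as in Equation (46).
def cost_of(W, B):
    _, a_all = forward_all(W, B, x2)
    return 0.5 * np.linalg.norm(y2 - a_all[-1]) ** 2

eps = 1e-6
W_plus = [w.copy() for w in W]
W_plus[0][j, k] += eps
print(path_sum, (cost_of(W_plus, B) - cost_of(W, B)) / eps)   # approximately equal
```

Backpropagation computes exactly this sum over paths, but shares the partial products between paths instead of recomputing them from scratch.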
What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factors for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost.

Now, I'm not going to work through all this here. It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing.

What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. So you repeat again. The result after a few iterations is the proof we saw earlier - short, but somewhat obscure, because all the signposts to its construction have been removed. (There is one clever step required. In Equation (53) the intermediate variables are activations like $a^{l+1}_q$. The clever idea is to switch to using weighted inputs, like $z^{l+1}_q$, as the intermediate variables. If you don't have this idea, and instead continue using the activations $a^{l+1}_q$, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter.) I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. It's just a lot of hard work simplifying the proof I've sketched in this section.

In academic work, please cite this book as: Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means you're free to copy, share, and build on this book, but not to sell it. If you're interested in commercial use, please contact me.

Last update: Thu Jan 19 06:09:48 2017
