(c) Giovanni De Micheli 38: Parallel Boolean optimization (compatible don't-care sets)
- Determine a subset of the don't-care sets that is safe to use in parallel minimization
  - Remove the degrees of freedom that can lead to transformations incompatible with others effected in parallel
- With compatible don't-care sets, only upper bounds on the perturbations need to be satisfied
- A faster, more efficient method
[Source: /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf, "DT12 (ml bool)", starting p. 37]
(c) Giovanni De Micheli 39: Example
- Parallel optimization at two vertices
- First vertex x
  - CODC equal to the ODC set: CODCx = ODCx
- Second vertex y
  - CODC is smaller than its ODC, so that it remains safe under the transformations permitted by the first ODC
  - CODCy = Cx(ODCy) + ODCy · ODC'x
- Order dependence
(c) Giovanni De Micheli 40: Example
- CODCy = ODCy = x' = b' + a'
- ODCx = y' = b' + c'
- CODCx = Cy(ODCx) + ODCx · (ODCy)' = Cy(y') + y'·x = y'·x = (b' + c')·ab = abc'
(Circuit figure with signals a, b, c, x, y, z omitted.)
(c) Giovanni De Micheli 41: Example (2)
- Allowed perturbation:
  - fy = bc → gy = c
  - δy = bc ⊕ c = b'c ⊆ CODCy = b' + a'
- Disallowed perturbation:
  - fx = ab → gx = a
  - δx = ab ⊕ a = ab' ⊄ CODCx = abc'
(Circuit figure with signals a, b, c, x, y, z omitted.)
(c) Giovanni De Micheli 42: Boolean methods, summary
- Boolean methods are powerful means to restructure networks
  - Computationally intensive
- Boolean methods rely heavily on don't-care computation
  - Efficient methods exist
  - Possibility to subset the don't-care sets
- Boolean methods often change the network substantially, and Boolean transformations are hard to undo
(c) Giovanni De Micheli 43: Module 2
- Objectives
  - Testability
  - Relations between testability and Boolean methods
(c) Giovanni De Micheli 44: Testability
- Generic term meaning the easing of circuit testing
- Testability in the logic-synthesis context:
  - Assume a combinational circuit
  - Assume single/multiple stuck-at faults
- Testability refers to the possibility of generating test sets for all faults
  - A property of the circuit
  - Related to fault coverage
(c) Giovanni De Micheli 45: Tests for stuck-at faults
- Net y stuck-at 0
  - Apply an input pattern that sets y to TRUE
  - Observe the output
  - The output of the faulty circuit differs from that of the correct circuit
- Net y stuck-at 1
  - Apply an input pattern that sets y to FALSE
  - Observe the output
  - The output of the faulty circuit differs from that of the correct circuit
- Testing is based on controllability and observability
(c) Giovanni De Micheli 46: Test sets, a don't-care interpretation
- Stuck-at 0 on net y
  - { input vectors t such that y(t) · ODC'y(t) = 1 }
- Stuck-at 1 on net y
  - { input vectors t such that y'(t) · ODC'y(t) = 1 }
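The two set definitions above can be evaluated directly by enumeration. Below is a minimal sketch on a hypothetical toy circuit (not from the slides): net y = a·b feeds z = y + c, so y is unobservable exactly when c = 1, i.e. ODCy = c.

```python
from itertools import product

# Hypothetical toy example: y = a AND b drives z = y OR c, so ODCy = c.
def y(a, b, c):
    return a and b

def odc_y(a, b, c):
    return c

# Test set for y stuck-at 0: vectors with y(t) = 1 while y is observable.
sa0_tests = [t for t in product([0, 1], repeat=3)
             if y(*t) and not odc_y(*t)]

# Test set for y stuck-at 1: vectors with y(t) = 0 while y is observable.
sa1_tests = [t for t in product([0, 1], repeat=3)
             if not y(*t) and not odc_y(*t)]
```

For this circuit the stuck-at-0 test set is the single vector (a, b, c) = (1, 1, 0), while three vectors test the stuck-at-1 fault.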
(c) Giovanni De Micheli 47: Using testing methods for synthesis
- Redundancy removal
  - Use ATPG to search for untestable faults
- If stuck-at 0 on net y is untestable:
  - Set y = 0
  - Propagate the constant
- If stuck-at 1 on net y is untestable:
  - Set y = 1
  - Propagate the constant
- Iterate for each untestable fault
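The redundancy-removal loop can be sketched in a few lines. The toy network below is an assumption for illustration (not the slides' example): z = ab + a'c + bc, where the consensus term bc is redundant, so a stuck-at-0 fault on the wire y = bc is untestable and y can be replaced by the constant 0.

```python
from itertools import product

# Toy network (assumed): z = ab + a'c + bc, with y the output of the bc gate.
def z_good(a, b, c):
    y = b & c
    return (a & b) | ((1 - a) & c) | y

def z_faulty(a, b, c):
    y = 0                       # y stuck-at 0
    return (a & b) | ((1 - a) & c) | y

# The s-a-0 fault on y is untestable iff no input vector distinguishes
# the faulty circuit from the good one.
untestable = all(z_good(*t) == z_faulty(*t)
                 for t in product([0, 1], repeat=3))

if untestable:
    # Redundancy removal: set y = 0 and propagate the constant,
    # which drops the bc term entirely.
    simplified = "z = ab + a'c"
```

In a real flow this check is done by ATPG rather than exhaustive simulation, and the procedure iterates because removing one redundancy can expose or create others.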
(c) Giovanni De Micheli 48: Example
(Circuit figure omitted.)
(c) Giovanni De Micheli 49: Redundancy removal and perturbation analysis
- Stuck-at 0 on y
  - y set to 0, namely gx = fx|y=0
  - Perturbation: δ = fx ⊕ fx|y=0 = y · ∂fx/∂y
- The perturbation is feasible ⇔ the fault is untestable
  - No input vector t can make y(t) · ODC'y(t) true
  - No input vector t can make y(t) · ODC'x(t) · ∂fx/∂y true
    - Because ODCy = ODCx + (∂fx/∂y)'
(Circuit figure with signals x, y, z omitted.)
(c) Giovanni De Micheli 50: Redundancy removal and perturbation analysis
- Assume an untestable stuck-at 0 fault
- y · ODC'x · ∂fx/∂y ⊆ SDC
- Local don't-care set:
  - DCx ⊇ ODCx + y · ODC'x · ∂fx/∂y
  - DCx ⊇ ODCx + y · ∂fx/∂y
- Perturbation δ = y · ∂fx/∂y
  - Included in the local don't-care set
(c) Giovanni De Micheli 51: Rewiring
- Extension of redundancy removal
  - Add a connection to the circuit
  - This creates other redundant connections
  - Remove the redundant connections
- Iterate the procedure to reduce the network
  - A connection corresponds to a wire
  - Rewiring modifies gates and the wiring structure
  - Wires may have specific costs due to distance
(c) Giovanni De Micheli 52: Example
(Circuit figure with gates f, g, h and signals a, b, c, d, m, x, y, z omitted.)
(c) Giovanni De Micheli 53: Synthesis for testability
- Synthesize fully testable circuits
  - For single or multiple stuck-at faults
- Realizations
  - Two-level forms
  - Multi-level networks
- Since synthesis can modify the network's properties, testability can be addressed during synthesis
(c) Giovanni De Micheli 54: Two-level forms
- Full testability for single stuck-at faults:
  - Prime and irredundant covers
- Full testability for multiple stuck-at faults:
  - Prime and irredundant cover when:
    - Single-output function
    - No product-term sharing
    - Each component is prime and irredundant
(c) Giovanni De Micheli 55: Example
f = a'b' + b'c + ac + ab
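As a quick sanity check on the example cover (a small brute-force sketch; the slide itself only states the cover), one can test which product terms are individually redundant, i.e. which terms can be dropped without changing the function:

```python
from itertools import product

# The example cover f = a'b' + b'c + ac + ab, each term as a map
# variable -> required value (1 = positive literal, 0 = complemented).
cover = [
    {"a": 0, "b": 0},   # a'b'
    {"b": 0, "c": 1},   # b'c
    {"a": 1, "c": 1},   # ac
    {"a": 1, "b": 1},   # ab
]

def term_value(term, point):
    return all(point[v] == pol for v, pol in term.items())

def cover_value(terms, point):
    return any(term_value(t, point) for t in terms)

points = [dict(zip("abc", bits)) for bits in product([0, 1], repeat=3)]

# A term is individually redundant if dropping it leaves the function unchanged.
redundant = [i for i in range(len(cover))
             if all(cover_value(cover, p) ==
                    cover_value(cover[:i] + cover[i + 1:], p)
                    for p in points)]
```

For this cover the check reports that b'c and ac are each individually redundant (though not simultaneously droppable), which is exactly the kind of cover where irredundancy and stuck-at testability become interesting.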
(c) Giovanni De Micheli 56: Multiple-level networks
- Consider logic networks with local functions in sum-of-products form
- Prime and irredundant network
  - No literal and no implicant of any local function can be dropped
  - The AND-OR implementation is fully testable for single stuck-at faults
- Simultaneously prime and irredundant network
  - No subset of literals and no subset of implicants can be dropped
  - The AND-OR implementation is fully testable for multiple stuck-at faults
(c) Giovanni De Micheli 57: Synthesis for testability
- Heuristic logic minimization (e.g., Espresso) is sufficient to ensure testability of two-level forms
- To achieve fully testable networks, simplification has to be applied to all logic blocks with full don't-care sets
- In practice, don't-care sets change as neighboring blocks are optimized
- Redundancy removal is a practical way of achieving testability properties
(c) Giovanni De Micheli 58: Summary, synthesis for testability
- There is synergy between synthesis and testing
  - Don't-care conditions play a major role in both fields
- Testable networks correlate with small-area implementations
- Testable networks do not require slowing down the circuit
- Algebraic transformations preserve multi-fault testability, and are preferable in this respect
Advanced Probability and Applications, EPFL, Spring Semester 2022-2023
Solutions to Homework 2

Exercise 1. a) 1. true, 2. false, 3. false, 4. true. b) 5. false, 6. true, 7. false, 8. true.

Exercise 2*. a) We have
P({Yn ≤ t}) = 1 − P({Yn > t}) = 1 − P({min{X1, ..., Xn} > t}) = 1 − P(∩_{j=1}^n {Xj > t})
            = 1 − ∏_{j=1}^n P({Xj > t}) = 1 − P({X1 > t})^n,
where the last two equalities follow from the assumption that the X's are i.i.d. Therefore,
P({Yn ≤ t}) = 1 − (exp(−t))^n = 1 − exp(−nt).

b) Under the assumptions made, n is large and t is such that nt ≪ 1, so using the Taylor expansion exp(−x) ≃ 1 − x, we obtain
P({Yn ≤ t}) ≃ 1 − (1 − nt) = nt, while P({X1 ≤ t}) = 1 − exp(−t) ≃ t,
and therefore P({Yn ≤ t}) ≃ n · P({X1 ≤ t}).

c) We have similarly
P({Zn ≥ t}) = 1 − P({Zn < t}) = 1 − P({max{X1, ..., Xn} < t}) = 1 − P(∩_{j=1}^n {Xj < t})
            = 1 − ∏_{j=1}^n P({Xj < t}) = 1 − P({X1 < t})^n = 1 − (1 − exp(−t))^n.

d) Under the assumptions made, n is large and t is such that n·exp(−t) ≪ 1, so using again the same Taylor expansion as above, we obtain
P({Zn ≥ t}) ≃ 1 − (1 − n·exp(−t)) = n·exp(−t), while P({X1 ≥ t}) = exp(−t),
and therefore P({Zn ≥ t}) ≃ n · P({X1 ≥ t}).
[Source: /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/sol2_1.pdf, "sol2_1"]
Exercise 3. a) Here are 3 possible subsets A1, A2, A3 of Ω = {1, 2, 3, 4}: A1 = {1, 2}, A2 = {1, 3} and A3 = {1, 4}. We check that
P(Aj) = 1/2 for all j and P(Aj ∩ Ak) = 1/4 = P(Aj) · P(Ak) for all j ≠ k,
but P(A1 ∩ A2 ∩ A3) = 1/4 ≠ 1/8 = P(A1) · P(A2) · P(A3).

b) Here are 3 possible subsets A1, A2, A3 of Ω = {1, 2, 3, 4, 5, 6}: A1 = {1, 2, 3}, A2 = {3, 4, 5} and A3 = {1, 3, 4, 6}. We check that
P(A1 ∩ A2 ∩ A3) = 1/6 = (1/2) · (1/2) · (2/3) = P(A1) · P(A2) · P(A3),
but P(A1 ∩ A2) = 1/6 ≠ 1/4 = P(A1) · P(A2).

c) Using the assumptions made, we check successively (the roles of A1, A2, A3 being permutable):
P(A1 ∩ A2 ∩ A3^c) = P(A1 ∩ A2) − P(A1 ∩ A2 ∩ A3)
                  = P(A1) · P(A2) − P(A1) · P(A2) · P(A3)
                  = P(A1) · P(A2) · (1 − P(A3)) = P(A1) · P(A2) · P(A3^c),
P(A1 ∩ A2^c ∩ A3^c) = P(A1 ∩ A3^c) − P(A1 ∩ A2 ∩ A3^c)
                    = P(A1) · P(A3^c) − P(A1) · P(A2) · P(A3^c)
                    = P(A1) · (1 − P(A2)) · P(A3^c) = P(A1) · P(A2^c) · P(A3^c),
P(A1^c ∩ A2^c ∩ A3^c) = P(A2^c ∩ A3^c) − P(A1 ∩ A2^c ∩ A3^c)
                      = P(A2^c) · P(A3^c) − P(A1) · P(A2^c) · P(A3^c)
                      = (1 − P(A1)) · P(A2^c) · P(A3^c) = P(A1^c) · P(A2^c) · P(A3^c).

Exercise 4. a) No. Even though it is easily shown that Y and Z are uncorrelated random variables (i.e., that their covariance is zero), they are not independent. Here is a counter-example: P({Y = +2}) = P({Z = +2}) = 1/4, but P({Y = +2, Z = +2}) = 0. So we have found two Borel sets B1 = {+2} and B2 = {+2} such that
P({Y ∈ B1, Z ∈ B2}) ≠ P({Y ∈ B1}) · P({Z ∈ B2}).

b) Yes. In this case again, one checks easily that Y and Z are uncorrelated.
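The counter-example in Exercise 3 a) can be verified mechanically with exact arithmetic; a short sketch:

```python
from fractions import Fraction
from itertools import combinations

# Exercise 3 a): uniform probability on Omega = {1, 2, 3, 4}.
omega = {1, 2, 3, 4}
P = lambda A: Fraction(len(A), len(omega))

A1, A2, A3 = {1, 2}, {1, 3}, {1, 4}
events = [A1, A2, A3]

# The events are pairwise independent...
pairwise = all(P(X & Y) == P(X) * P(Y) for X, Y in combinations(events, 2))
# ...but not mutually independent: P(A1 n A2 n A3) = 1/4, not 1/8.
mutual = P(A1 & A2 & A3) == P(A1) * P(A2) * P(A3)
```

Using `Fraction` keeps the check exact, so the 1/4 vs. 1/8 discrepancy is detected without floating-point concerns.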
Let us now compute their joint pdf: the joint pdf of X1 and X2 is given by
p_{X1,X2}(x1, x2) = (1/2π) · exp(−(x1² + x2²)/2).
Making now the change of variables y = x1 + x2, z = x1 − x2, or equivalently x1 = (y + z)/2, x2 = (y − z)/2, we obtain
x1² + x2² = ((y + z)/2)² + ((y − z)/2)² = (y² + z²)/2,
and the Jacobian of this linear transformation is given by
J(y, z) = det( [∂x1/∂y, ∂x2/∂y; ∂x1/∂z, ∂x2/∂z] ) = det( [1/2, 1/2; 1/2, −1/2] ) = −1/2,
so that
p_{Y,Z}(y, z) = p_{X1,X2}(x1(y, z), x2(y, z)) · |J(y, z)| = (1/4π) · exp(−(y² + z²)/4),
from which we deduce that Y and Z are independent N(0, 2) random variables.
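The closed form from Exercise 2 a) is also easy to check by simulation; here is a small Monte Carlo sketch (sample size and parameters are arbitrary choices):

```python
import math
import random

# Check Exercise 2 a): if X1,...,Xn are i.i.d. Exp(1), then
# Yn = min(X1,...,Xn) satisfies P(Yn <= t) = 1 - exp(-n*t).
random.seed(0)
n, t, trials = 5, 0.2, 200_000

hits = sum(min(random.expovariate(1.0) for _ in range(n)) <= t
           for _ in range(trials))
empirical = hits / trials
analytic = 1 - math.exp(-n * t)
# The two agree up to Monte Carlo error, on the order of 1/sqrt(trials).
```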
CS 471 – Fall 2021 Lec. 14 - Slide 1: Specialization
Prof. Babak Falsafi, Fall 2021 — https://parsa.epfl.ch/course-info/cs471/
Adapted from slides originally developed by Profs. Hill, Hoe, Falsafi and Wenisch of CMU, EPFL, Michigan, Wisconsin
[Source: /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf, "14_Specialization"]
CS 471 – Fall 2021 Lec. 14 - Slide 2: Where are we?
- Specialized HW
  - Inefficiencies in GPPs
  - Meet the Walkers
- Thursday: GPUs
- Reminder: the exam review session is today
(Course-calendar figure omitted.)
CS 471 – Fall 2021 Lec. 14 - Slide 3: Reader
- Govindaraju, et al., "Dynamically Specialized Datapaths for Energy Efficient Computing"
CS 471 – Fall 2021 Lec. 14 - Slide 4: Understanding Sources of Inefficiency in General-Purpose Chips & Conservation Cores
The following slides are from Hameed et al. and Venkatesh et al.
CS 471 – Fall 2021 Lec. 14 - Slide 5: The Utilization Wall (aka the Rise of Dark Silicon)
- Scaling theory
  - Transistor and power budgets are no longer balanced
  - An exponentially worsening problem!
- Experimental results
  - Replicated small datapath
  - More "dark silicon" than active
- Observations in the wild
  - Flat frequency curve
  - "Turbo Mode"
  - Increasing cache/processor ratio

Per-generation device scaling (scale factor S):
| Quantity           | Classical scaling | Leakage-limited scaling |
|--------------------|-------------------|-------------------------|
| Device count       | S²                | S²                      |
| Device frequency   | S                 | S                       |
| Device power (cap) | 1/S               | 1/S                     |
| Device power (Vdd) | 1/S²              | ~1                      |
| Utilization        | 1                 | 1/S²                    |
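The utilization rows of the table follow from multiplying the other rows together; a back-of-the-envelope sketch (my interpretation of the table, not code from the slides):

```python
# Utilization-wall arithmetic: with scale factor S per generation, device
# count grows as S^2 and frequency as S.  Classically, Vdd scaling cut
# switching energy by an extra 1/S^2, keeping total chip power flat; once
# Vdd stops scaling ("leakage-limited"), only the 1/S capacitance term remains.
def utilization(S, vdd_scales):
    devices = S ** 2                       # more transistors per die
    frequency = S                          # each switches faster
    energy = (1 / S) * ((1 / S ** 2) if vdd_scales else 1.0)
    power = devices * frequency * energy   # relative to the old budget
    return 1.0 / power                     # fraction usable at fixed power

classical = utilization(2, vdd_scales=True)
leakage_limited = utilization(2, vdd_scales=False)
```

For S = 2 this gives full utilization classically, but only 1/S² = 25% once Vdd scaling stops, which is the "dark silicon" fraction the slide refers to.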
(Slides 6–9 repeat the Slide 5 build with figure annotations; the final build adds: "We're already here.")
CS 471 – Fall 2021 Lec. 14 - Slide 10: Utilization Wall, Dark Implications for Multicore
Spectrum of tradeoffs between number of cores and frequency, e.g. going 65 nm → 32 nm (S = 2):
- 4 cores @ 3 GHz (65 nm baseline)
- 4 cores @ 2×3 GHz (12 cores dark)
- 2×4 cores @ 3 GHz (8 cores dark) — industry's choice
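The slide's 65 nm → 32 nm numbers can be reproduced with a short calculation (my interpretation of the slide's arithmetic, under leakage-limited scaling):

```python
# 65 nm -> 32 nm example (S = 2): the die now fits S^2 = 4x the cores, but
# the fixed power budget only buys S x the old core-GHz product (energy/op
# fell by the remaining 1/S capacitance factor).
S = 2
base_cores, base_ghz = 4, 3.0

die_capacity = base_cores * S ** 2              # 16 cores fit on the die
budget_core_ghz = base_cores * base_ghz * S     # 24 "core-GHz" affordable

# Two points on the tradeoff curve, matching the slide:
fast = (4, budget_core_ghz / 4)                 # few fast cores
wide = (8, budget_core_ghz / 8)                 # more, slower cores
dark_fast = die_capacity - fast[0]              # cores left dark
dark_wide = die_capacity - wide[0]
```

This recovers the slide's two options: 4 cores at 6 GHz with 12 cores dark, or 8 cores at 3 GHz with 8 cores dark.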
CS 471 – Fall 2021 Lec. 14 - Slide 11: The Four Horsemen of Dark Silicon [Taylor]
- The Shrinking Horseman
- The Dim Horseman
- The Bespoke Horseman
- The Deus Ex Machina Horseman
CS 471 – Fall 2021 Lec. 14 - Slide 12: GP Processors Are Inefficient
- Processors work well for a broad range of applications
  - Well-amortized NRE
  - For a specific performance target, energy and area efficiency is low
- Processors are power-limited
  - Hard to meet the performance and energy of emerging applications
    - Enhancement of low-quality video, analysis and capture of motion in 3D, etc.
  - At fixed power, more ops/sec requires lower energy/op
(Figure: emerging applications vs. Haswell.)
CS 471 – Fall 2021 Lec. 14 - Slide 13: More Efficient Computing Is Possible
- Embedded media devices perform GOP/s
  - Cell phones, video cameras, etc.
- The efficiency of processors is inadequate for these devices
  - ASICs are needed to meet stringent efficiency requirements
- ASICs are difficult to design and inflexible
CS 471 – Fall 2021 Lec. 14 - Slide 14: GP Processor vs. ASIC
- ASICs are typically much more efficient than processors
  - Orders-of-magnitude gap in performance and energy
- If processors are energy-limited
  - They will need to use ASIC "tricks"
  - We need to figure out the sources of processor inefficiency
CS 471 – Fall 2021 Lec. 14 - Slide 15: An Example
- High-definition video encoding is ubiquitous
  - Cell phones, camcorders, point-and-shoot cameras, etc.
- A small ASIC does it
  - Can easily satisfy performance and efficiency requirements
  - Google built this for their datacenters [Ranganathan, ASPLOS'21]
- Very challenging for processors
  - What makes processors inefficient compared to ASICs?
  - What does it take to make a processor as efficient as an ASIC?
  - How much programmability do you lose?
CS 471 – Fall 2021 Lec. 14 - Slide 16: CMP Energy Breakdown
For an HD H.264 encoder:
- A 2.8 GHz Pentium 4 is 500x worse in energy*
- A four-core Tensilica-based CMP is also 500x worse in energy*
- Assume everything but the functional unit is overhead
  - Still 25x worse (improves by 20x)
* Chen, T.-C., et al., "Analysis and architecture design of an HDTV720p 30 frames/s H.264/AVC encoder," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 6, pp. 673–688, June 2006.
CS 471 – Fall 2021 Lec. 14 - Slide 17: Achieving ASIC Efficiencies, Getting to 500x
- Need basic ops that are extremely low-energy
  - Functional units carry overheads over raw operations
  - 8–16-bit operations have sub-pJ energy
    - Functional-unit energy for RISC was around 5 pJ
- And then don't mess it up
  - "No" communication energy per op
    - This includes register and memory fetch
  - Merge many simple operations into mega-ops
    - Eliminate the need to store / communicate intermediate results
CS 471 – Fall 2021 Lec. 14 - Slide 18: How Much Specialization Is Needed?
- How far will general-purpose optimizations go?
  - Can we stay clear of application-specific optimizations?
  - How close to ASIC efficiencies will this achieve?
- Better understand the nature of the various overheads
  - What are the "long poles" that need to be removed?
- Is there an incremental path from GP to ASIC?
  - Is it possible to create an intermediate solution?
CS 471 – Fall 2021 Lec. 14 - Slide 19: Case Study
- Use Tensilica to create optimized processors
- Transform a CMP into an efficient HD H.264 encoder
  - To better understand the sources of overhead in processors
- Why an H.264 encoder?
  - It's everywhere
  - Variety of computation motifs, from data-parallel to control-intensive
  - Good software and hardware implementations exist
    - ASIC H.264 solutions demonstrate a large energy advantage
CS 471 – Fall 2021 Lec. 14 - Slide 20: Optimization Strategy for the Case Study
- Two optimization stages
- General-purpose, data-parallel optimizations
  - SIMD, VLIW, reduced register and datapath widths
  - Operation fusion, limited to two inputs and one output
    - Similar to Intel's SSE instructions
- Application-specific optimizations
  - Arbitrary new compute operations
  - Closely coupled data-storage and datapath structures
CS 471 – Fall 2021 Lec. 14 - Slide 21: What Is H.264?
- Industry standard for video compression
  - Digital television, DVD-video, mobile TV, internet video, etc.
- Encoder pipeline: Prediction → Transform/Quantize → Entropy Encode
  - Inter prediction: integer and fractional motion estimation (IME, FME)
  - Intra prediction (IP)
  - Entropy encoding: CABAC
CS 471 – Fall 2021 Lec. 14 - Slide 22: Computational Motifs Mapping
- Data-parallel: prediction (inter and intra) and transform/quantize
- Sequential: entropy encode
CS 471 – Fall 2021 Lec. 14 - Slide 23: H.264 Encoder, Uniprocessor Performance
- IME and FME dominate total execution time
- CABAC is small but dictates the final gain
CS 471 – Fall 2021 Lec. 14 - Slide 24: H.264, Macroblock Pipeline
(Figure omitted.)
CS 471 – Fall 2021 Lec. 14 - Slide 25: Base CMP vs. ASIC
- Huge efficiency gap
  - The 4-processor CMP is 250x slower
  - 500x extra energy
- Manycore doesn't help
  - Energy/frame remains the same
  - Performance improves
(Figure: energy gap and core overhead, normalized to the ASIC.)
CS 471 – Fall 2021 Lec. 14 - Slide 26: General-Purpose Extensions, SIMD & ILP
- SIMD
  - Up to 18-way SIMD at reduced precision (e.g., 16×8-bit operands into a 16×12-bit accumulator)
- VLIW
  - Up to 3-slot VLIW (e.g., Load/Add pairs issued together)
CS 471 – Fall 2021 Lec. 14 - Slide 27: SIMD and ILP, Results
- Order-of-magnitude improvement in performance and energy
  - For data-parallel algorithms
  - But the ASIC is still better by roughly two orders of magnitude
CS 471 – Fall 2021 Lec. 14 - Slide 28: SIMD and ILP, Results
- Good news: we made the FU more efficient
  - Reduced the power of the op by 4x via bit-width optimization
- Bad news: overhead decreased by only 2x
  - Most pipeline stages still dissipate 10x the energy
CS 471 – Fall 2021 Lec. 14 - Slide 29: Operation Fusion
- The compiler can find interesting instructions to merge
  - Tensilica's Xpres
- We did this manually
  - Tried to create instructions that might be possible
- Might be free in future machines
  - Common instructions might be present in GP processors
CS 471 – Fall 2021 Lec. 14 - Slide 30: Operation Fusion, Not a Big Gain
- Helps a little, so it is good if free
- ...but still 50x less energy-efficient and 25x slower than the ASIC
CS 471 – Fall 2021 Lec. 14 - Slide 31: Data-Parallel Optimization Summary
- Great for data-parallel applications
  - Improves energy efficiency by 10x over the CPU
  - But CABAC remains largely unaffected
- Synthetic overheads from the GP architecture still dominate
  - Basic operations are very low-energy
  - Even with 15–20 operations per instruction, 90% is overhead
  - Data movement dominates computation
- To get ASIC efficiency, need more compute per unit of overhead
  - Find functions with large compute / low communication
  - Aggregate work in large chunks to create highly optimized FUs
  - Merge data-storage and datapath structures
CS 471 – Fall 2021 Lec. 14 - Slide 32 (figure only.)
CS 471 – Fall 2021 Lec. 14 - Slide 33: "Magic" Instructions
- Fuse the computational unit to storage
- Create specialized data-storage structures
  - Require modest memory bandwidth to keep full
  - Internal data motion is hard-wired
  - Use all the local data for computation
- Arbitrary new low-power compute operations
- Large effect on energy efficiency and performance
(Figure: merged register / hardware block.)
CS 471 – Fall 2021 Lec. 14 - Slide 34: Magic Instructions, SAD
- sum = sum + abs(xref − xcur)
- Looking for the difference between two images
  - Hundreds of SAD calculations to get one image difference
    - Need to test many different positions to find the best
  - The data for each calculation is nearly the same
(Figure: search center, candidate block, candidate motion vector.)
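The SAD kernel and its surrounding search are simple enough to sketch in software (toy block size and pixel values are made up here; the hardware point of the following slides is how to avoid exactly this per-element data movement):

```python
# SAD (sum of absolute differences) kernel: score one candidate block
# position against the current block.
def sad(current, reference, dx, dy):
    """sum += abs(xref - xcur) over a block offset by (dx, dy)."""
    total = 0
    for y, row in enumerate(current):
        for x, cur in enumerate(row):
            total += abs(reference[y + dy][x + dx] - cur)
    return total

def best_motion_vector(current, reference, search):
    """Exhaustively test candidate positions, as in integer motion estimation."""
    candidates = [(dx, dy) for dy in range(search) for dx in range(search)]
    return min(candidates, key=lambda v: sad(current, reference, *v))

cur = [[10, 10], [10, 10]]
ref = [[0, 0, 0], [0, 10, 10], [0, 10, 10]]
# The current 2x2 block matches the reference at offset (1, 1).
mv = best_motion_vector(cur, ref, search=2)
```

Note how adjacent candidates reuse almost all of the same reference pixels; that overlap is what the custom shift-register hardware on the next slides exploits.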
CS 471 – Fall 2021 Lec. 14 - Slide 35: Magic Instructions, SAD
- SIMD implementation
  - Limited to 16 operations per cycle
  - Horizontal data reuse requires many shift operations
  - No vertical data reuse means wasted cache energy
  - Significant register-file access energy
- Magic: a serial-in, parallel-out structure
  - Enables 256 SADs/cycle, which reduces fetch energy
  - Vertical data reuse reduces D-cache energy
  - Dedicated paths to the compute units reduce register-access energy
CS 471 – Fall 2021 Lec. 14 - Slide 36: Custom SAD Instruction Hardware
- Reference pixel registers:
  - Horizontal and vertical shift, with parallel access to all rows
(Figure: rows of 16 pixels fed by 128-bit loads into current-pixel registers and 256 4×1 SAD units with a 128-bit write port.)
CS 471 – Fall 2021 Lec. 14 - Slide 37: Fractional Motion Estimation
- Take the output from the integer motion estimation
  - Run it again against the base image shifted by 1/4 of a pixel
  - Need to do this in X and Y
(Figure: search center, candidate block, candidate motion vector.)
CS 471 – Fall 2021 Lec. 14 - Slide 38: Generating the Shifted Images, Pixel Upsampling
- xn = x−2 − 5·x−1 + 20·x0 + 20·x1 − 5·x2 + x3
- An FIR filter requiring one new pixel per computation
  - Regular register files require 5 transfers per op
  - Wasted energy in instruction fetch and register file
- Augment the register file with a custom shift register
  - Parallel access to the entries creates a custom FIR arithmetic unit
  - The result dissipates 1/30th of the energy of the traditional approach
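A software model of that shift-register FIR makes the data-movement argument concrete: each step shifts in one new pixel and all six taps are read in parallel. (Normalization by the tap sum, 32, is my assumption; the slide only gives the raw taps.)

```python
# Six-tap upsampling FIR from the slide:
#   x_n = x[-2] - 5*x[-1] + 20*x[0] + 20*x[1] - 5*x[2] + x[3]
TAPS = (1, -5, 20, 20, -5, 1)

def upsample(pixels):
    window = [0] * 6                      # software model of the shift register
    out = []
    for p in pixels:
        window = window[1:] + [p]         # shift in one new pixel per step
        out.append(sum(t * x for t, x in zip(TAPS, window)) // 32)
    return out

# On a constant signal, once the register is full the interpolated value
# equals the input, since sum(TAPS) = 32.
result = upsample([7] * 8)
```

In a regular register file each output would cost five operand transfers plus instruction fetches; with the parallel-access shift register only the one new pixel moves, which is where the claimed 30x energy reduction comes from.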
CS 471 – Fall 2021 Lec. 14 - Slide 39: Custom FME
- Custom upsampling datapath
(Figure omitted; slides 40–41 repeat this build.)
CS 471 – Fall 2021 Lec. 14 - Slide 42: List of Other Magic Instructions
- Hadamard/DCT
  - Matrix transpose unit
  - Operation fusion with no limitation on the number of operands
- Intra prediction
  - Customized interconnections for the different prediction modes
- CABAC
  - FIFO structures in the binarization module
  - Fundamentally different computation, fused with no restrictions
- Not many are needed; similar constraints
CS 471 – Fall 2021 Lec. 14 - Slide 43: Magic Instructions, Energy
- Orders of magnitude better than GP, within 3x of the ASIC
CS 471 – Fall 2021 Lec. 14 - Slide 44: Magic Instructions, Results
- Over 35% of the energy is now in the ALU
  - Overheads are well-amortized: up to 256 ops per instruction
  - More data reuse within the datapath
- Most of the code involves magic instructions
CS 471 – Fall 2021 Lec. 14 - Slide 45: Magic Instructions Summary
- The optimization strategy is similar across all algorithms
  - Closely couple data-storage and datapath structures
  - Maximize data reuse inside the datapath
- Commonly used hardware structures and techniques
  - Shift registers with parallel access to internal values
  - Direct computation of the desired output
    - Eliminate the generation (and storage) of intermediate results
- Hundreds of extremely low-power ops per instruction
- Works well for both data-parallel and sequential algorithms
CS 471 – Fall 2021 Lec. 14 - Slide 46: Conclusion
- Many operations are very simple and low-energy
  - They SIMD/vector-parallelize well, but overheads still dominate
  - To get ASIC efficiencies, need hundreds of ops per instruction
    - Specialized hardware/memory
- Building ASIC hardware into a processor worked well
  - Easier than building an ASIC, since it was incremental
  - Start with a strong software development environment
    - Add and debug only the hardware you need
- Efficient hardware requires customization
  - We should make chip customization feasible
  - And that means we should design chip generators, not chips
CS 471 – Fall 2021 Lec. 14 - Slide 47: Meet the Walkers, Accelerating Index Traversals for In-Memory Databases
Onur Kocberber, Boris Grot, Javier Picorel, Babak Falsafi, Kevin Lim, Parthasarathy Ranganathan
CS 471 – Fall 2021 Lec. 14 - Slide 48: Our World Is Data-Driven!
- Data resides in huge databases
  - Most frequent task: find data
- Indexes are used for fast data lookup
  - They rely on pointer-intensive data structures
- Index-lookup efficiency is critical
  - Many requests, abundant parallelism
  - Power-limited hardware
- Need high-throughput and energy-efficient index lookups
CS 471 – Fall 2021 Lec. 14 - Slide 49: Index Lookups on General-Purpose Cores
- Index lookups
  - Data is in memory
  - Inherent parallelism
- OoO cores
  - Pointer chasing → low MLP
  - Limited OoO instruction window: one lookup at a time
- OoO cores are ill-matched to indexing
CS 471 – Fall 2021 Lec. 14 - Slide 50 Widx: an Index Lookup Widget uSpecialized: Custom HW for index lookups uSwitch fewer transistors per lookup uParallel: Multiple lookups at once uExtract parallelism beyond the OoO exec. window uProgrammable: Simple RISC cores uTarget a wide range of DBMSs 3x higher throughput, 5.5x energy reduction vs. OoO
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 49
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 51 Outline uIntroduction uIndexing in database systems uIndexing inefficiencies in modern processors uWidx uEvaluation highlights uSummary
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 50
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 52 Modern Databases & Index Lookups uIndexes are essential for all database operators § Data structures for fast data lookup uHash index: fundamental index structure uDominant operation: join via hash index [Figure: a key K is hashed to one of buckets 0–2 of the hash index, then the bucket chain is walked]
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 51
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 53 Join Join: find the matching values in A and B Join via Hash Index: build a hash index on B, then do a lookup on the index (hash, then walk) for every entry in A
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 52
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 54 How Much Time is Spent in Lookups? [Chart: % of execution time split into Index Scan, Sort & Join, and Other across TPC-H and TPC-DS queries] Indexing is the biggest contributor to execution time Measurement on Xeon 5670 CPU with 100GB Dataset
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 53
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 55 Dissecting Index Lookups uHash: Avg. 30% time of each lookup § Computationally intensive, high cache locality uWalk: Avg. 70% time of each lookup § Trivial computation, low cache locality uNext lookup: Inherently parallel § Beyond the inst. window capacity
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 54
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 56 Roadmap for Efficient and High-Throughput Index Lookups uSpecialize § Customize hardware for hashing and walking uParallelize § Perform multiple index lookups at a time uGeneralize § Use a programmable building block
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 55
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 57 Step 1: Specialize uDesign a dedicated unit for hash and walk § Hash: compute hash values from a key list § Walk: access the hash index and follow pointers [Figure: the general-purpose OoO core replaced by specialized Hash and Walk hardware]
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 56
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 58 Step 2: Parallelize [Figure: timeline comparing serial, decoupled, and decoupled & parallel schedules of the hash (H) and walk (W) stages]
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 57
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 59 Step 3: Generalize uWidx unit: common building block for hash and walk § Two-stage RISC core § Custom ISA uWidx units are programmable § Execute functions written in the Widx ISA § Support an arbitrary number of data structure layouts [Figure: a Widx unit running either hash() (H) or walk() (W)]
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 58
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 60 Putting it all together: Widx uWhen Widx runs, the core goes idle [Figure: Widx sits beside the OoO core, MMU, and L1; one hash unit (H) feeds four walkers (W) and a result producer (P)] Simple, parallel hardware
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 59
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 61 Programming Model Development: write code for each unit (H: Hash, W: Walk, P: Produce) and compile for the Widx ISA — hash (arg1, arg2, ...) {......} walk (arg1, arg2, ...) {......} emit (arg1, arg2, ...) {......} Execution: load the code onto the units and communicate query-specific inputs
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 60
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 62 Methodology uFlexus simulation infrastructure [Wenisch '06] Benchmarks – TPC-H on MonetDB – TPC-DS on MonetDB – Dataset: 100GB uArch Parameters – Core Types • OoO: 4-wide, 128-entry ROB • In-order: 2-wide – Frequency: 2GHz – L1 (I & D): 32KB – LLC: 4MB Area and Power – Synopsys Design Compiler – Technology node: TSMC 40 nm, std. cell – Frequency: 2GHz – Widx Area: 0.24mm2 – Widx Power: 0.3W
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 61
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 63 Widx Performance [Chart: indexing speedup of Widx over OoO for TPC-H queries 2, 11, 17, 19, 20, 22 and TPC-DS queries 5, 37, 40, 52, 64, 82] 3x higher index lookup throughput
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 62
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 64 Widx Efficiency [Charts: normalized Runtime, Energy, and Energy-Delay for OoO, Widx (w/ OoO), and In-order] 5.5x reduction in index lookup energy vs. OoO
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 63
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 65 Walkers in Software [VLDB’16] uUse insights to help Xeon § Decouple hash & walk in software § Create & manage queues in wraparound code u2.3x speedup on Xeon § Unclogs dependences in microarchitecture § Maximizes memory level parallelism § To be integrated in SAP HANA [VLDB’18]
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 64
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 66 Conclusions uIndex lookups are essential in modern DBMSs uModern CPUs spend significant time in index lookups § Not efficient & fall short of extracting parallelism uWidx: Specialized widget for index lookups § Efficient, parallel & programmable 3x higher throughput, 5.5x energy reduction vs. OoO
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 65
|
14_Specialization
| 0
|
CS 471 – Fall 2021 Lec. 14 - Slide 67 Summary uScaling trends push designers away from general-purpose cores uCharacterization of a ground-up accelerator for H.264 § Show that data movement and pipeline logic remain bottlenecks § Even with wide vectorized and fused operations § ”Magic instruction” approach provides orders of magnitude more arithmetic per control instruction uWidx – a hash join accelerator § Remove inefficient hashing § Overlap expensive walking in parallel
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/14_Specialization.pdf
| 66
|
14_Specialization
| 0
|
Prob series 9 Lucie Perrotta May 1, 2018 1 For Bernoulli(λ/n) the characteristic function is Φ_X(t) = 1 − λ/n + (λ/n)e^{it} = 1 + (λ/n)(e^{it} − 1), and the CF of a sum of iid RVs is the product of their CFs, so Φ_{S_n}(t) = ∏_{i=1}^{n} Φ_X(t) = Φ_X(t)^n = (1 + (λ/n)(e^{it} − 1))^n. In parallel we compute the Poisson CF, namely Φ_Z(t) = e^{λ(e^{it} − 1)}. Now taking the limit of Φ_{S_n}(t) we find lim_{n→∞} (1 + (λ/n)(e^{it} − 1))^n = e^{λ(e^{it} − 1)}, using the exponential limit formula lim (1 + x/n)^n = e^x with x = λ(e^{it} − 1). 2 a) φ_{X_1}(t) = e^{−λ|t|}, t ∈ ℝ. With inversion formula 2 we compute (1/2π) ∫_{−∞}^{∞} e^{−itx} φ(t) dt = (1/2π) ∫_{−∞}^{0} e^{−itx} e^{λt} dt + (1/2π) ∫_{0}^{∞} e^{−itx} e^{−λt} dt = (1/2π) (1/(λ − ix) + 1/(λ + ix)) = 2λ / (2π(λ² + x²)). b) P(|X_1| ≤ λ) = P(−λ ≤ X_1 ≤ λ) = (1/2π) ∫_{−λ}^{λ} 2λ/(λ² + x²) dx = (1/2π) ∫_{−λ}^{λ} dx/(λ − ix) + (1/2π) ∫_{−λ}^{λ} dx/(λ + ix) = (i/2π) [log((λ − ix)/(λ + ix))]_{x=−λ}^{x=λ} = (i/2π) log(−1) = (1/2π) π = 1/2. c) φ_{S_n/n}(t) = E[e^{it S_n/n}] = φ_{S_n}(t/n), so we compute instead φ_{S_n}(t) = ∏_{k=1}^{n} E[e^{itX_k}] = φ_{X_1}(t)^n = e^{−nλ|t|}
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Serie 9.pdf
| 0
|
Serie 9
| 0
|
S_n/n] = φ_{S_n}(t/n), so we compute instead φ_{S_n}(t) = ∏_{k=1}^{n} E[e^{itX_k}] = φ_{X_1}(t)^n = e^{−nλ|t|}, and substituting back, φ_{S_n/n}(t) = e^{−λ|t|} since n is positive.
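The limit computed above — the CF of a sum of n Bernoulli(λ/n) variables tending to the Poisson(λ) CF — can be sanity-checked numerically. A minimal sketch:

```python
import cmath

def cf_binomial_sum(t, lam, n):
    # CF of a sum of n iid Bernoulli(lam/n) variables.
    return (1 + (lam / n) * (cmath.exp(1j * t) - 1)) ** n

def cf_poisson(t, lam):
    # CF of a Poisson(lam) variable.
    return cmath.exp(lam * (cmath.exp(1j * t) - 1))

t, lam = 0.7, 2.0
for n in (10, 100, 10000):
    print(n, abs(cf_binomial_sum(t, lam, n) - cf_poisson(t, lam)))
# the gap shrinks toward 0 as n grows
```

The gap decays like O(1/n), consistent with the exponential-limit formula used in the proof.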
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Serie 9.pdf
| 0
|
Serie 9
| 0
|
d) It tends to X_1. e) Following Olivier's hint, a counterexample is: S_{2n}/(2n) − S_n/n ↛ 0 as n → ∞, whereas in the limit these two sequences should be equal for every n. 3 a) Take U = 1 and Z = E(X|G). Then E(E(X|G)) = E(ZU) = E(XU) = E(X), using Proposition 1.1. b) Using independence we obtain E(XU) = E(X)E(U) = E(E(X)U) = E(ZU), and since Z = E(X) is trivially G-measurable, E(X) = Z = E(X|G). c) If X is itself G-measurable, then X = Z satisfies the criteria of Proposition 1.1, so X is its own conditional expectation, i.e. X = E(X|G). d) With Z_X = E(X|G), Z_{XY} = E(XY|G) and X′ = E(X|G)Y: E(Z_{XY}U) = E(XYU) = E(XU′) with U′ = UY, = E(Z_X U′) = E(E(X|G)U′) = E(E(X|G)YU) = E(X′U). (Perhaps clearer when read right to left.) e) We use c) to show that H ⊆ G ⇒ E(X|H) is G-measurable, hence E(E(X|H)|G) = E(X|H). Then E(E(X|H)U) = E(XU) with U H-measurable, and in parallel E(E(X|G)U) = E(XU) with U G-measurable. This implies that U is also H-measurable. Hence E(E(X|H)U) = E(E(X|G)U), and recursively we obtain the desired equality.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Serie 9.pdf
| 1
|
Serie 9
| 0
|
measurable. This implies that U is also H-measurable. Hence E(E(X|H)U) = E(E(X|G)U), and recursively we obtain the desired equality. 4 a) E(ψ(Y)g(Y)) = Σ_y ψ(y)g(y)P(Y = y) = Σ_y Σ_x x P(X = x|Y = y) g(y) P(Y = y) = Σ_y Σ_x x P(X = x, Y = y) g(y) = E(Xg(Y)), and with Definition 1.6 we can conclude the required proof.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Serie 9.pdf
| 1
|
Serie 9
| 0
|
b) Rename X = max(Y, Z). E(X|Z) = Σ_{x∈C} x P(X = x|Z = z) = Σ_{x∈C} x P(X = x, Z = z)/P(Z = z) = 4 Σ_{x∈C} x P(X = x, Z = z) = ((1 + 2 + 3 + 4)·1/4) for z = 1; (2·1/2 + (3 + 4)·1/4) for z = 2; (3·3/4 + 4·1/4) for z = 3; 4 for z = 4 — i.e. (1/4)[10·1_{z=1} + 11·1_{z=2} + 13·1_{z=3} + 16·1_{z=4}] = 2.5 if z = 1; 2.75 if z = 2; 3.25 if z = 3; 4 if z = 4. 5 a) We plot the empirical MSE for the 4 estimators with 10000 samples, with a ranging from 0 to 3. Note that the y-axis scale is logarithmic. [Figure 1: on the graph the x-axis is slightly shifted and multiplied by 1000; imagine it runs from 0 to 3.] We observe that the 4th estimator (tanh) is always the best and minimizes the MSE. b) It is tanh, because it is a continuous, smooth function mapping into [−1, 1], so values close to 0 are mapped to something close to 0 as well, unlike sgn, which pushes them out to −1 or 1, increasing the error under noise. The first two estimators are not bounded between −1 and 1. c) First rewrite the expression as E(X̂²) = E(XX̂). (i) E(X̂²) = (1/a²)[a²E(X²) + E(Z²) + 2aE(XZ)] = E(X²) + 1/a² and E(XX̂) = (1/a)E(aX² + ZX) = E(X²) are not equal.
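The MSE experiment described in 5a) can be reproduced with a short Monte Carlo sketch. The distributions here are assumptions, not stated in the text: X uniform on {−1, +1} and Z standard normal, which is consistent with E(X²) = E(Z²) = 1 as used in part c).

```python
import random, math

random.seed(0)

def mse(est, a, n=10000):
    # Empirical mean squared error of estimator `est` over n samples
    # of the model Y = aX + Z (assumed: X = +/-1, Z ~ N(0,1)).
    err = 0.0
    for _ in range(n):
        x = random.choice((-1.0, 1.0))
        y = a * x + random.gauss(0.0, 1.0)
        err += (est(y, a) - x) ** 2
    return err / n

estimators = {
    "Y/a":        lambda y, a: y / a,
    "aY/(a^2+1)": lambda y, a: a * y / (a * a + 1),
    "sgn(aY)":    lambda y, a: math.copysign(1.0, a * y),
    "tanh(aY)":   lambda y, a: math.tanh(a * y),
}
for name, est in estimators.items():
    print(name, round(mse(est, a=1.0), 3))
# tanh(aY) should come out with the smallest MSE, as observed in 5a)
```

Under these assumptions tanh(aY) is in fact the conditional mean E(X|Y), which is why it minimizes the MSE.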
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Serie 9.pdf
| 2
|
Serie 9
| 0
|
(ii) E(X̂_2²) = E[a²Y²/(a² + 1)²] = a²E[(aX + Z)²]/(a² + 1)² = a²(a²E(X²) + E(Z²))/(a² + 1)² = a²(a² + 1)/(a² + 1)² = a²/(a² + 1), and E(X̂_2X) = E[aYX/(a² + 1)] = aE[(aX + Z)X]/(a² + 1) = a²/(a² + 1), so they are equal! (iii) E(X̂_3²) = E(sgn²(aY)) = 1 and E(X̂_3X) = E(sgn(aY)X) < 1 are not equal, since X can be positive while aY is negative because of a large enough noise term. In the end it is X̂_2 that works.
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Serie 9.pdf
| 3
|
Serie 9
| 0
|
Distributed Algorithms Fall 2020 Consensus - solutions 5th exercise session, 19/10/2020 Matteo Monti <[email protected]> Jovan Komatovic <[email protected]> 1
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/4. consensus_sol.pdf
| 0
|
4. consensus_sol
| 0
|
Exercise 1 (Consensus & Perfect failure detector) Consider our fail-stop consensus algorithms (Consensus Algorithm I and Consensus Algorithm II). Explain why none of those algorithms would be correct if the failure detector turned out not to be perfect. 2
|
/home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/4. consensus_sol.pdf
| 1
|
4. consensus_sol
| 0
|