Python Algorithms

Mastering Basic Algorithms in the Python Language

Second Edition

Magnus Lie Hetland
Python Algorithms: Mastering Basic Algorithms in the Python Language

Copyright © 2014 by Magnus Lie Hetland

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

ISBN-13 (pbk): 978-1-4842-0056-8
ISBN-13 (electronic): 978-1-4842-0055-1

Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image, we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Publisher: Heinz Weinheimer
Lead Editor: Steve Anglin
Technical Reviewer: Stefan Turalski
Editorial Board: Steve Anglin, Mark Beckner, Ewan Buckingham, Gary Cornell, Louise Corrigan, James T. DeWolf, Jonathan Gennick, Robert Hutchinson, Michelle Lowman, James Markham, Matthew Moodie, Jeff Olson, Jeffrey Pepper, Douglas Pundick, Ben Renow-Clarke, Dominic Shakeshaft, Gwenan Spearing, Matt Wade, Steve Weiss
Development Editor: Kenyon Brown
Coordinating Editor: Anamika Panchoo
Copy Editor: Kim Wimpsett
Compositor: SPi Global
Indexer: SPi Global
Artist: SPi Global
Cover Designer: Anna Ishchenko
Photo Credit: Kai T. Dragland

Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.

For information on translations, please e-mail rights@apress.com, or visit www.apress.com.

Apress and friends of ED books may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Special Bulk Sales–eBook Licensing web page at www.apress.com/bulk-sales.

Any source code or other supplementary material referenced by the author in this text is available to readers at www.apress.com. For detailed information about how to locate your book’s source code, go to www.apress.com/source-code/.
For JV, the silliest girl I know.
Contents at a Glance
About the Author
About the Technical Reviewer
Acknowledgments
Preface
Chapter 1: Introduction
Chapter 2: The Basics
Chapter 3: Counting 101
Chapter 4: Induction and Recursion … and Reduction
Chapter 5: Traversal: The Skeleton Key of Algorithmics
Chapter 6: Divide, Combine, and Conquer
Chapter 7: Greed Is Good? Prove It!
Chapter 8: Tangled Dependencies and Memoization
Chapter 9: From A to B with Edsger and Friends
Chapter 10: Matchings, Cuts, and Flows
Chapter 11: Hard Problems and (Limited) Sloppiness
Appendix A: Pedal to the Metal: Accelerating Python
Appendix B: List of Problems and Algorithms
Appendix C: Graph Terminology
Appendix D: Hints for Exercises
Index
Contents
About the Author
About the Technical Reviewer
Acknowledgments
Preface

Chapter 1: Introduction
    What’s All This, Then?
        What the book is about:
        What the book covers only briefly or partially:
        What the book isn’t about:
    Why Are You Here?
    Some Prerequisites
    What’s in This Book
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 2: The Basics
    Some Core Ideas in Computing
    Asymptotic Notation
        It’s Greek to Me!
        Rules of the Road
        Taking the Asymptotics for a Spin
        Three Important Cases
        Empirical Evaluation of Algorithms
    Implementing Graphs and Trees
        Adjacency Lists and the Like
        Adjacency Matrices
        Implementing Trees
        A Multitude of Representations
    Beware of Black Boxes
        Hidden Squares
        The Trouble with Floats
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 3: Counting 101
    The Skinny on Sums
        More Greek
        Working with Sums
    A Tale of Two Tournaments
        Shaking Hands
        The Hare and the Tortoise
    Subsets, Permutations, and Combinations
    Recursion and Recurrences
        Doing It by Hand
        A Few Important Examples
        Guessing and Checking
        The Master Theorem: A Cookie-Cutter Solution
    So What Was All That About?
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 4: Induction and Recursion … and Reduction
    Oh, That’s Easy!
    One, Two, Many
    Mirror, Mirror
    Designing with Induction (and Recursion)
        Finding a Maximum Permutation
        The Celebrity Problem
        Topological Sorting
    Stronger Assumptions
    Invariants and Correctness
    Relaxation and Gradual Improvement
    Reduction + Contraposition = Hardness Proof
    Problem Solving Advice
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 5: Traversal: The Skeleton Key of Algorithmics
    A Walk in the Park
        No Cycles Allowed
        How to Stop Walking in Circles
    Go Deep!
        Depth-First Timestamps and Topological Sorting (Again)
    Infinite Mazes and Shortest (Unweighted) Paths
    Strongly Connected Components
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 6: Divide, Combine, and Conquer
    Tree-Shaped Problems: All About the Balance
    The Canonical D&C Algorithm
    Searching by Halves
        Traversing Search Trees … with Pruning
        Selection
    Sorting by Halves
        How Fast Can We Sort?
    Three More Examples
        Closest Pair
        Convex Hull
        Greatest Slice
    Tree Balance … and Balancing*
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 7: Greed Is Good? Prove It!
    Staying Safe, Step by Step
    The Knapsack Problem
        Fractional Knapsack
        Integer Knapsack
    Huffman’s Algorithm
        The Algorithm
        The First Greedy Choice
        Going the Rest of the Way
        Optimal Merging
    Minimum Spanning Trees
        The Shortest Edge
        What About the Rest?
        Kruskal’s Algorithm
        Prim’s Algorithm
    Greed Works. But When?
        Keeping Up with the Best
        No Worse Than Perfect
        Staying Safe
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 8: Tangled Dependencies and Memoization
    Don’t Repeat Yourself
    Shortest Paths in Directed Acyclic Graphs
    Longest Increasing Subsequence
    Sequence Comparison
    The Knapsack Strikes Back
    Binary Sequence Partitioning
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 9: From A to B with Edsger and Friends
    Propagating Knowledge
    Relaxing like Crazy
    Finding the Hidden DAG
    All Against All
    Far-Fetched Subproblems
    Meeting in the Middle
    Knowing Where You’re Going
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 10: Matchings, Cuts, and Flows
    Bipartite Matching
    Disjoint Paths
    Maximum Flow
    Minimum Cut
    Cheapest Flow and the Assignment Problem*
    Some Applications
    Summary
    If You’re Curious …
    Exercises
    References

Chapter 11: Hard Problems and (Limited) Sloppiness
    Reduction Redux
    Not in Kansas Anymore?
    Meanwhile, Back in Kansas …
    But Where Do You Start? And Where Do You Go from There?
    A Ménagerie of Monsters
        Return of the Knapsack
        Cliques and Colorings
        Paths and Circuits
    When the Going Gets Tough, the Smart Get Sloppy
    Desperately Seeking Solutions
    And the Moral of the Story Is …
    Summary
    If You’re Curious …
    Exercises
    References

Appendix A: Pedal to the Metal: Accelerating Python

Appendix B: List of Problems and Algorithms
    Problems
    Algorithms and Data Structures

Appendix C: Graph Terminology

Appendix D: Hints for Exercises
    Chapter 1
    Chapter 2
    Chapter 3
    Chapter 4
    Chapter 5
    Chapter 6
    Chapter 7
    Chapter 8
    Chapter 9
    Chapter 10
    Chapter 11

Index
About the Author

Magnus Lie Hetland is an experienced Python programmer, having used the language since the late 1990s. He is also an associate professor of algorithms at the Norwegian University of Science and Technology and has taught algorithms for more than a decade. Hetland is the author of Beginning Python.
About the Technical Reviewer

Stefan Turalski is just another coder who is perfectly happy delivering pragmatic, not necessarily software, solutions and climbing the impassable learning curve.

He has more than a decade of experience building solutions in such diverse domains as knowledge management, embedded networking, healthcare, power and gas trading, and, in the last few years, finance.

Focusing on code optimization and systems integration, he has dabbled (or almost drowned) in quite a few programming languages and has abused a number of open source and commercial software frameworks, libraries, servers, and so on.

Stefan is currently working on a highly scalable, low-latency, intraday risk valuation system at a financial institution in London. His latest interests revolve around functional and reactive programming, F#, Clojure, Python, OpenCL, and WebGL.

He still cannot believe that he was trusted enough to help on the second edition of Magnus Lie Hetland’s superb book. Stefan hopes that his (and your) brain cells injured while studying the algorithmic problems covered by the author will recover stronger and wiser!
Acknowledgments

Thanks to everyone who contributed to this book, either directly or indirectly. This certainly includes my algorithm mentors, Arne Halaas and Bjørn Olstad, as well as the entire crew at Apress and my brilliant tech editors, Alex Martelli (for the first edition) and Stefan Turalski. Thanks to all the readers who pointed out errors in the first edition; I hope I have corrected most of them. I’d especially like to thank Gerald Senarclens de Grancy, who supplied an extensive, well-annotated list of errata covering the entire book. Thanks to Nils Grimsmo, Jon Marius Venstad, Ole Edsberg, Rolv Seehuus, and Jorg Rødsjø for useful input; to my girlfriend, Janne Varvára Seem, my parents, Kjersti Lie and Tor M. Hetland, and my sister, Anne Lie-Hetland, for their interest and support; and to my uncle Axel, for checking my French. Finally, a big thank-you to the Python Software Foundation for their permission to reproduce parts of the Python standard library and to Randall Munroe for letting me include some of his wonderful XKCD comics.
Preface

This book is a marriage of three of my passions: algorithms, Python programming, and explaining things. To me, all three of these are about aesthetics—finding just the right way of doing something, looking until you uncover a hint of elegance, and then polishing that until it shines (or at least until it is a bit shinier). Of course, when there’s a lot of material to cover, you may not get to polish things quite as much as you want. Luckily, though, most of the content in this book is prepolished because I’m writing about really beautiful algorithms and proofs, as well as one of the cutest programming languages out there. As for the third part, I’ve tried hard to find explanations that will make things seem as obvious as possible. Even so, I’m sure I have failed in many ways, and if you have suggestions for improving the book, I’d be happy to hear from you. Who knows, maybe some of your ideas could make it into a future edition. For now, though, I hope you have fun with what’s here and that you take any newfound insight and run with it. If you can, use it to make the world a more awesome place, in whatever way seems right.
<p class="ChapterNumber"><a id="b9781484200568_1"></a>CHAPTER 1</p>
682<p class="Chapimage"><img src="/library/view/python-algorithms-mastering/9781484200551/images/frontdot.jpg" alt="image" width="82" height="25" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/frontdot.jpg"></p>
683<p class="ChapterTitle">Introduction</p>
684<div>
685<ol class="OrderedList">
686<li>Write down the problem.</li>
687<li>Think real hard.</li>
688<li>Write down the solution.</li>
689</ol>
690<p class="noindent5s">— “The Feynman Algorithm†<br>as described by Murray Gell-Mann</p>
691<p class="noindent">Consider the following problem: You are to visit all the cities, towns, and villages of, say, Sweden and then return to your starting point. This might take a while (there are 24,978 locations to visit, after all), so you want to minimize your route. You plan on visiting each location exactly once, following the shortest route possible. As a programmer, you certainly don’t want to plot the route by hand. Rather, you try to write some code that will plan your trip for you. For some reason, however, you can’t seem to get it right. A straightforward program works well for a smaller number of towns and cities but seems to run forever on the actual problem, and improving the program turns out to be surprisingly hard. How come?</p>
692<p class="indent">Actually, in 2004, a team of five researchers<a id="_Fn1" href="#Fn1" class="totri-footnote"><sup>1</sup></a> found such a tour of Sweden, after a number of other research teams had tried and failed. The five-man team used cutting-edge software with lots of clever optimizations and tricks of the trade, running on a cluster of 96 Xeon 2.6GHz workstations. Their software ran from March 2003 until May 2004, before it finally printed out the optimal solution. Taking various interruptions into account, the team estimated that the total CPU time spent was about <i>85 years</i>!</p>
693<p class="indent">Consider a similar problem: You want to get from Kashgar, in the westernmost region of China, to Ningbo, on the east coast, following the shortest route possible.<a id="_Fn2" href="#Fn2" class="totri-footnote"><sup>2</sup></a> Now, China has 3,583,715 km of roadways and 77,834 km of railways, with millions of intersections to consider and a virtually unfathomable number of possible routes to follow. It might seem that this problem is related to the previous one, yet this <i>shortest path</i> problem is one solved routinely, with no appreciable delay, by GPS software and online map services. If you give those two cities to your favorite map service, you should get the shortest route in mere moments. What’s going on here?</p>
694<p class="indent">You will learn more about both of these problems later in the book; the first one is called the <a id="cXXX.170"></a><i>traveling salesman</i> (or <i>salesrep</i>) <i>problem</i> and is covered in <a href="9781484200568_Ch11.xhtml">Chapter 11</a>, while so-called shortest path problems are primarily dealt with in <a href="9781484200568_Ch09.xhtml">Chapter 9</a>. I also hope you will gain a rather deep insight into why one problem seems like such a hard nut to crack while the other admits several well-known, efficient solutions. More importantly, you will learn something about how to deal with algorithmic and computational problems in general, either solving them efficiently, using one of the several techniques and algorithms you encounter in this book, or showing that they are too hard and that approximate solutions may be all you can hope for. This chapter briefly describes what the book is about—what you can expect and what is expected of you. It also outlines the specific contents of the various chapters to come in case you want to skip around.</p>
695<p id="Sec1" class="Heading1">What’s All This, Then?</p>
696<p class="noindent">This is a book about algorithmic problem solving for Python programmers. Just like books on, say, object-oriented patterns, the problems it deals with are of a general nature—as are the solutions. For an algorist, there is more to the job than simply implementing or executing an existing algorithm, however. You are expected to come up with <i>new</i> algorithms—new <i>general solutions</i> to hitherto unseen, general problems. In this book, you are going to learn principles for constructing such solutions.</p>
697<p class="indent">This is not your typical algorithm book<a id="cXXX.01a"></a><a id="cXXX.171"></a>, though. Most of the authoritative books on the subject (such as Knuth’s classics or the industry-standard textbook by Cormen et al.) have a heavy formal and theoretical slant, even though some of them (such as the one by Kleinberg and Tardos) lean more in the direction of readability. Instead of trying to replace any of these excellent books, I’d like to <i>supplement</i> them. Building on my experience from teaching algorithms, I try to explain as clearly as possible how the algorithms work and what common principles underlie many of them. For a programmer, these explanations are probably enough. Chances are you’ll be able to understand why the algorithms are correct and how to adapt them to new problems you may come to face. If, however, you need the full depth of the more formalistic and encyclopedic textbooks, I hope the foundation you get in this book will help you understand the theorems and proofs you encounter there.</p>
698<div class="notepara">
699<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> One difference between this book and other textbooks on algorithms is that I adopt a rather conversational tone. While I hope this appeals to at least some of my readers, it may not be your cup of tea. Sorry about that—but now you have, at least, been warned.</p></div>
700<p class="indent">There is another genre of algorithm books as well: the “(Data Structures and) Algorithms in <i>blank</i>†kind, where the <i>blank</i> is the author’s favorite programming language. There are quite a few of these (especially for <i>blank</i> = Java, it seems), but many of them focus on relatively basic data structures, to the detriment of the meatier stuff. This is understandable if the book is designed to be used in a basic course on data structures, for example, but for a Python programmer, learning about singly and doubly linked lists may not be all that exciting (although you <i>will</i> hear a bit about those in the next chapter). And even though techniques such as hashing are highly important, you get hash tables for free in the form of Python dictionaries; there’s no need to implement them from scratch. Instead, I focus on more high-level algorithms. Many important concepts that are available as <a id="cXXX.172"></a>black-box implementations either in the Python language itself or in the standard library (such as sorting, searching, and hashing) are explained more briefly, in special “Black Box†sidebars throughout the text.</p>
701<p class="indent">There is, of course, another factor that separates this book from those in the “Algorithms in Java/C/C++/C#†genre, namely, that the <i>blank</i> is Python. This places the book one step closer to the language-independent books (such as those by Knuth,<a id="_Fn3" href="#Fn3" class="totri-footnote"><sup>3</sup></a> Cormen et al., and Kleinberg and Tardos, for example), which often use <a id="cXXX.173"></a><i>pseudocode</i>, the kind of fake programming language that is designed to be readable rather than executable. One of Python’s distinguishing features is its readability; it is, more or less, executable pseudocode. Even if you’ve never programmed in Python, you could probably decipher the meaning of most basic Python programs. The code in this book is designed to be readable exactly in this fashion—you need not be a Python expert to understand the examples (although you might need to look up some built-in functions and the like). And if you want to pretend the examples are actually pseudocode, feel free to do so. To sum up ...</p>
702<p id="Sec2" class="Heading2">What the book is about:</p>
703<ul class="bulleted">
704<li>Algorithm analysis, with a focus on asymptotic running time</li>
705<li>Basic principles of algorithm design</li>
706<li>How to represent commonly used data structures in Python</li>
707<li>How to implement well-known algorithms in Python</li>
708</ul>
709<p id="Sec3" class="Heading2">What the book covers only briefly or partially:</p>
710<ul class="bulleted">
711<li>Algorithms that are directly available in Python, either as part of the language or via the standard library</li>
712<li>Thorough and deep formalism (although the book has its share of proofs and proof-like explanations)</li>
713</ul>
714<p id="Sec4" class="Heading2">What the book <i>isn’t</i> about:</p>
715<ul class="bulleted">
716<li>Numerical or number-theoretical algorithms (except for some floating-point hints in <a href="9781484200568_Ch02.xhtml">Chapter 2</a>)</li>
717</ul>
718<ul class="bulleted">
719<li>Parallel algorithms and multicore programming</li>
720</ul>
721<p class="indent">As you can see, “implementing things in Python†is just part of the picture. The design principles and theoretical foundations are included in the hope that they’ll help you design your <i>own</i> algorithms and data structures.</p>
722<p id="Sec5" class="Heading1">Why Are You Here?<a id="cXXX.174"></a></p>
723<p class="noindent">When working with algorithms, you’re trying to solve problems <a id="cXXX.175"></a><i>efficiently</i>. Your programs should be fast; the wait for a solution should be short. But what, exactly, do I mean by efficient, <a id="cXXX.176"></a>fast, and short? And why would you care about these things in a language such as Python, which isn’t exactly lightning-fast to begin with? Why not rather switch to, say, C or Java?</p>
724<p class="indent">First, Python is a lovely language, and you may not <i>want</i> to switch. Or maybe you have no choice in the matter. But second, and perhaps most importantly, algorists don’t primarily worry about <i>constant</i> differences in performance.<a id="_Fn4" href="#Fn4" class="totri-footnote"><sup>4</sup></a> If one program takes twice, or even ten times, as long as another to finish, it may still be <i>fast enough</i>, and the slower program (or language) may have other desirable properties, such as being more readable. Tweaking and optimizing can be costly in many ways and is not a task to be taken on lightly. What <i>does</i> matter, though, no matter the language, is how your program <a id="cXXX.177"></a><i>scales</i>. If you double the size of your input, what happens? Will your program run for twice as long? Four times? More? Will the running time double even if you add just <i>one measly bit</i> to the input? These are the kind of differences that will easily trump language or hardware choice, if your problems get big enough. And in some cases “big enough†needn’t be all that big. Your main weapon in whittling down the growth of your running time is—you guessed it—a solid understanding of algorithm design.<a id="cXXX.178"></a></p>
725<p class="indent">Let’s try a little experiment. Fire up an interactive Python interpreter, and enter the following:</p>
726<pre><span class="FontName1">>>> count = 10**5</span><br><span class="FontName1">>>> nums = []</span><br><span class="FontName1">>>> for i in range(count):</span><br><span class="FontName1">... nums.append(i)</span><br><span class="FontName1">...</span><br><span class="FontName1">>>> nums.reverse()</span></pre>
727<p class="indent">Not the most useful piece of code, perhaps. It simply appends a bunch of numbers to an (initially) empty list and then reverses that list. In a more realistic situation, the numbers might come from some outside source (they could be incoming connections to a server, for example), and you want to add them to your list in reverse order, perhaps to prioritize the most recent ones. Now you get an idea: instead of reversing the list at the end, couldn’t you just insert the numbers at the beginning, as they appear? Here’s an attempt to streamline the code (continuing in the same interpreter window):</p>
728<pre><span class="FontName1">>>> nums = []</span><br><span class="FontName1">>>> for i in range(count):</span><br><span class="FontName1">... nums.insert(0, i)</span></pre>
729<p class="indent">Unless you’ve encountered this situation before, the new code might look promising, but try to run it. Chances are you’ll notice a distinct slowdown. On my computer, the second piece of code takes around 200 times as long as the first to finish.<a id="_Fn5" href="#Fn5" class="totri-footnote"><sup>5</sup></a> Not only is it slower, but it also <i>scales</i> worse with the problem size. Try, for example, to increase <span class="FontName1">count</span> from <span class="FontName1">10**5</span> to <span class="FontName1">10**6</span>. As expected, this increases the running time for the first piece of code by a factor of about ten … but the second version is slowed by roughly <i>two</i> orders of magnitude, making it more than <i>two thousand times slower</i> than the first! As you can probably guess, the discrepancy between the two versions only increases as the problem gets bigger, making the choice between them ever more crucial.</p>
730<div class="notepara">
731<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> This is an example of linear vs. quadratic growth, a topic dealt with in detail in <a href="9781484200568_Ch03.xhtml">Chapter 3</a>. The specific issue underlying the quadratic growth is explained in the discussion of vectors (or dynamic arrays) in the “Black Box†sidebar on <span class="FontName1">list</span> in <a href="9781484200568_Ch02.xhtml">Chapter 2</a>.</p></div>
732<p id="Sec6" class="Heading1">Some Prerequisites<a id="cXXX.179"></a></p>
733<p class="noindent">This book is intended for two groups of people: Python programmers, who want to beef up their algorithmics, and students taking algorithm courses, who want a supplement to their plain-vanilla algorithms textbook. Even if you belong to the latter group, I’m assuming you have a familiarity with programming in general and with Python in particular. If you don’t, perhaps my book <i>Beginning Python</i> can help? The Python web site also has a lot of useful material, and Python is a really easy language to learn. There is some math in the pages ahead, but you don’t have to be a math prodigy to follow the text. You’ll be dealing with some simple sums and nifty concepts such as polynomials, exponentials, and logarithms, but I’ll explain it all as we go along.</p>
734<p class="indent">Before heading off into the mysterious and wondrous lands of computer science, you should have your equipment ready. As a Python programmer, I assume you have your own favorite text/code editor or integrated development environment—I’m not going to interfere with that. When it comes to Python versions, the book is written to be reasonably version-independent, meaning that most of the code should work with both the Python 2 and 3 series. Where backward-incompatible Python 3 features are used, there will be explanations on how to implement the algorithm in Python 2 as well. (And if, for some reason, you’re still stuck with, say, the Python 1.5 series, most of the code should still work, with a tweak here and there.)</p>
735<div class="Singlethin">
736<p class="Heading4a"><b>GETTING WHAT YOU NEED</b></p>
737<p class="box-left">In some operating systems, such as Mac OS X and several flavors of Linux, Python should already be installed. If it is not, most Linux distributions will let you install the software you need through some form of package manager. If you want or need to install Python manually, you can find all you need on the Python web site, <span class="FontName1"><a href="http://python.org">http://python.org</a></span>.</p></div>
738<p id="Sec7" class="Heading1">What’s in This Book</p>
739<p class="noindent">The book is structured as follows:</p>
740<ul class="bulleted">
741<li><b><a href="9781484200568_Ch01.xhtml">Chapter 1</a>: Introduction.</b> You’ve already gotten through most of this. It gives an overview of the book.</li>
742<li><b><a href="9781484200568_Ch02.xhtml">Chapter 2</a>: The Basics.</b> This covers the basic concepts and terminology, as well as some fundamental math. Among other things, you learn how to be sloppier with your formulas than ever before, and still get the right results, using asymptotic notation.</li>
743<li><b><a href="9781484200568_Ch03.xhtml">Chapter 3</a>: Counting 101.</b> More math—but it’s really fun math, I promise! There’s some basic combinatorics for analyzing the running time of algorithms, as well as a gentle introduction to recursion and recurrence relations.</li>
744<li><b><a href="9781484200568_Ch04.xhtml">Chapter 4</a>: Induction and Recursion … and Reduction.</b> The three terms in the title are crucial, and they are closely related. Here we work with induction and recursion, which are virtually mirror images of each other, both for designing new algorithms and for proving correctness. We’ll also take a somewhat briefer look at the idea of reduction, which runs as a common thread through almost all algorithmic work.</li>
745<li><b><a href="9781484200568_Ch05.xhtml">Chapter 5</a>: Traversal: A Skeleton Key to Algorithmics.</b> Traversal can be understood using the ideas of induction and recursion, but it is in many ways a more concrete and specific technique. Several of the algorithms in this book are simply augmented traversals, so mastering this idea will give you a real jump start.</li>
746<li><b><a href="9781484200568_Ch06.xhtml">Chapter 6</a>: Divide, Combine, and Conquer.</b> When problems can be decomposed into independent subproblems, you can recursively solve these subproblems and usually get efficient, correct algorithms as a result. This principle has several applications, not all of which are entirely obvious, and it is a mental tool well worth acquiring.</li>
747<li><b><a href="9781484200568_Ch07.xhtml">Chapter 7</a>: Greed is Good? Prove It!</b> Greedy algorithms are usually easy to construct. It is even possible to formulate a general scheme that most, if not all, greedy algorithms follow, yielding a plug-and-play solution. Not only are they easy to construct, but they are usually very efficient. The problem is, it can be hard to show that they are correct (and often they aren’t). This chapter deals with some well-known examples and some more general methods for constructing correctness proofs.</li>
748<li><b><a href="9781484200568_Ch08.xhtml">Chapter 8</a>: Tangled Dependencies and Memoization.</b> This chapter is about the design method (or, historically, the problem) called, somewhat confusingly, dynamic programming. It is an advanced technique that can be hard to master but that also yields some of the most enduring insights and elegant solutions in the field.</li>
749<li><b><a href="9781484200568_Ch09.xhtml">Chapter 9</a>: From A to B with Edsger and Friends.</b> Rather than the design methods of the previous three chapters, the focus is now on a specific problem, with a host of applications: finding shortest paths in networks, or graphs. There are many variations of the problem, with corresponding (beautiful) algorithms.</li>
750<li><b><a href="9781484200568_Ch10.xhtml">Chapter 10</a>: Matchings, Cuts, and Flows.</b> How do you match, say, students with colleges so you maximize total satisfaction? In an online community, how do you know whom to trust? And how do you find the total capacity of a road network? These, and several other problems, can be solved with a small class of closely related algorithms and are all variations of the maximum flow problem, which is covered in this chapter.</li>
751<li><b><a href="9781484200568_Ch11.xhtml">Chapter 11</a>: Hard Problems and (Limited) Sloppiness.</b> As alluded to in the beginning of the introduction, there are problems we don’t know how to solve efficiently and that we have reasons to think won’t be solved for a long time—maybe never. In this chapter, you learn how to apply the trusty tool of reduction in a new way: not to <i>solve</i> problems but to show that they are <i>hard</i>. Also, we take a look at how a bit of (strictly limited) sloppiness in the optimality criteria can make problems a lot easier to solve.</li>
752<li><b><a href="9781484200568_AppA.xhtml">Appendix A</a>: Pedal to the Metal: Accelerating Python.</b> The main focus of this book is asymptotic efficiency—making your programs scale well with problem size. However, in some cases, that may not be enough. This appendix gives you some pointers to tools that can make your Python programs go faster. Sometimes a <i>lot</i> (as in hundreds of times) faster.</li>
753<li><b><a href="9781484200568_AppB.xhtml">Appendix B</a>: List of Problems and Algorithms.</b> This appendix gives you an overview of the algorithmic problems and algorithms discussed in the book, with some extra information to help you select the right algorithm for the problem at hand.</li>
754<li><b><a href="9781484200568_AppC.xhtml">Appendix C</a>: Graph Terminology and Notation.</b> Graphs are a really useful structure, both in describing real-world systems and in demonstrating how various algorithms work. This chapter gives you a tour of the basic concepts and lingo, in case you haven’t dealt with graphs before.</li>
755<li><b><a href="9781484200568_AppD.xhtml">Appendix D</a>: Hints for Exercises.</b> Just what the title says.</li>
756</ul>
757<p id="Sec8" class="Heading1">Summary</p>
758<p class="noindent">Programming isn’t just about software architecture and object-oriented design; it’s also about solving algorithmic problems, some of which are really hard. For the more run-of-the-mill problems (such as finding the shortest path from A to B), the algorithm you use or design can have a huge impact on the time your code takes to finish, and for the hard problems (such as finding the shortest route through A–Z), there may not even <i>be</i> an efficient algorithm, meaning that you need to accept approximate solutions.</p>
759<p class="indent">This book will teach you several well-known algorithms, along with general principles that will help you create your own. Ideally, this will let you solve some of the more challenging problems out there, as well as create programs that scale gracefully with problem size. In the next chapter, we get started with the basic concepts of algorithmics, dealing with terms that will be used throughout the entire book.</p>
760<p id="Sec9" class="Heading1">If You’re Curious …</p>
761<p class="noindent">This is a section you’ll see in all the chapters to come. It’s intended to give you some hints about details, wrinkles, or advanced topics that have been omitted or glossed over in the main text and to point you in the direction of further information. For now, I’ll just refer you to the “References†section, later in this chapter, which gives you details about the algorithm books mentioned in the main text.</p>
762<p id="Sec10" class="Heading1">Exercises</p>
763<p class="noindent">As with the previous section, this is one you’ll encounter again and again. Hints for solving the exercises can be found in <a href="9781484200568_AppD.xhtml">Appendix D</a>. The exercises often tie in with the main text, covering points that aren’t explicitly discussed there but that may be of interest or that deserve some contemplation. If you want to really sharpen your algorithm design skills, you might also want to check out some of the myriad of sources of programming puzzles out there. There are, for example, lots of programming contests (a web search should turn up plenty), many of which post problems that you can play with. Many big software companies also have qualification tests based on problems such as these and publish some of them online.</p>
764<p class="indent">Because the introduction doesn’t cover that much ground, I’ll just give you a couple of exercises here—a taste of what’s to come:</p>
765<ul class="bulleted1">
<li>1-1. Consider the following statement: “As machines get faster and memory cheaper, algorithms become less important.” What do you think; is this true or false? Why?</li>
767<li>1-2. Find a way of checking whether two strings are anagrams of each other (such as <span class="FontName1">"debit card"</span> and <span class="FontName1">"bad credit"</span>). How well do you think your solution scales? Can you think of a naïve solution that will scale poorly?</li>
768</ul>
769<p id="Sec11" class="Heading1">References</p>
770<p class="ref">Applegate, D., Bixby, R., Chvátal, V., Cook, W., and Helsgaun, K. Optimal tour of Sweden. <span class="FontName1"><a href="http://www.math.uwaterloo.ca/tsp/sweden/">www.math.uwaterloo.ca/tsp/sweden/</a></span>. Accessed April 6, 2014.</p>
771<p class="ref">Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2009). <i>Introduction to Algorithms</i>, second edition. MIT Press.</p>
772<p class="ref">Dasgupta, S., Papadimitriou, C., and Vazirani, U. (2006). <i>Algorithms</i>. McGraw-Hill.</p>
773<p class="ref">Goodrich, M. T. and Tamassia, R. (2001). <i>Algorithm Design: Foundations, Analysis, and Internet Examples</i>. John Wiley & Sons, Ltd.</p>
774<p class="ref">Hetland, M. L. (2008). <i>Beginning Python: From Novice to Professional</i>, second edition. Apress.</p>
775<p class="ref">Kleinberg, J. and Tardos, E. (2005). <i>Algorithm Design</i>. Addison-Wesley Longman Publishing Co., Inc.</p>
776<p class="ref">Knuth, D. E. (1968). Fundamental Algorithms, volume 1 of <i>The Art of Computer Programming</i>. Addison-Wesley.</p>
777<p class="ref">———. (1969). Seminumerical Algorithms, volume 2 of <i>The Art of Computer Programming</i>. Addison-Wesley.</p>
778<p class="ref">———. (1973). <i>Sorting and Searching</i>, volume 3 of <i>The Art of Computer Programming</i>. Addison-Wesley.</p>
779<p class="ref">———. (2011). <i>Combinatorial Algorithms, Part 1</i>, volume 4A of <i>The Art of Computer Programming</i>. Addison-Wesley.</p>
780<p class="ref">Miller, B. N. and Ranum, D. L. (2005). <i>Problem Solving with Algorithms and Data Structures Using Python</i>. Franklin Beedle & Associates.</p>
781<p class="FootLine">__________________</p>
782<p class="FootNote"><a id="Fn1" href="#_Fn1" class="totri-footnote"><sup>1</sup></a>David Applegate, Robert Bixby, Vašek Chvátal, William Cook, and Keld Helsgaun</p>
783<p class="FootNote"><a id="Fn2" href="#_Fn2" class="totri-footnote"><sup>2</sup></a>Let’s assume that flying isn’t an option.</p>
784<p class="FootNote"><a id="Fn3" href="#_Fn3" class="totri-footnote"><sup>3</sup></a>Knuth is also well-known for using assembly code for an abstract computer of his own design.</p>
785<p class="FootNote"><a id="Fn4" href="#_Fn4" class="totri-footnote"><sup>4</sup></a>I’m talking about constant multiplicative factors here, such as doubling or halving the execution time.</p>
786<p class="FootNote"><a id="Fn5" href="#_Fn5" class="totri-footnote"><sup>5</sup></a>See <a href="9781484200568_Ch02.xhtml">Chapter 2</a> for more on benchmarking and empirical evaluation of algorithms.</p></div><div class="annotator-outer annotator-viewer viewer annotator-hide">
<p class="ChapterNumber"><a id="b9781484200568_2"></a>CHAPTER 2</p>
830<p class="Chapimage"><img src="/library/view/python-algorithms-mastering/9781484200551/images/frontdot.jpg" alt="image" width="82" height="25" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/frontdot.jpg"></p>
831<p class="ChapterTitle">The Basics</p>
832<div>
833<div class="blockquote">
834<p class="block-noindent"><i>Tracey: I didn’t know you were out there.</i></p>
835<p class="noindent"><i>Zoe: Sort of the point. Stealth—you may have heard of it.</i></p>
836<p class="noindent"><i>Tracey: I don’t think they covered that in basic.</i></p>
837</div>
838<p class="noindent5s">— From “The Message,†episode 14 of <i>Firefly</i></p>
839<p class="noindent">Before moving on to the mathematical techniques, algorithmic design principles, and classical algorithms that make up the bulk of this book, we need to go through some basic principles and techniques. When you start reading the following chapters, you should be clear on the meaning of phrases such as “directed, weighted graph without negative cycles†and “a running time of Θ(<i>n</i> lg <i>n</i>).†You should also have an idea of how to implement some fundamental structures in Python.</p>
840<p class="indent">Luckily, these basic ideas aren’t at all hard to grasp. The main two topics of the chapter are asymptotic notation, which lets you focus on the essence of running times, and ways of representing trees and graphs in Python. There is also practical advice on timing your programs and avoiding some basic traps. First, though, let’s take a look at the abstract machines we algorists tend to use when describing the behavior of our algorithms.</p>
841<p id="Sec1" class="Heading1">Some Core Ideas in Computing</p>
842<p class="noindent">In the mid-1930s the English mathematician Alan Turing published a paper called “On computable numbers, with an application to the Entscheidungsproblemâ€<a id="_Fn1" href="#Fn1" class="totri-footnote"><sup>1</sup></a> and, in many ways, laid the groundwork for modern computer science. His abstract <i>Turing machine</i> has become a central concept in the theory of computation, in great part because it is intuitively easy to grasp. A Turing machine is a simple abstract device that can read from, write to, and move along an infinitely long strip of paper. The actual behavior of the machines varies. Each is a so-called finite state machine: It has a finite set of states (some of which indicate that it has finished), and every symbol it reads potentially triggers reading and/or writing and switching to a different state. You can think of this machinery as a set of <i>rules</i>. (“If I am in state 4 and see an <i>X</i>, I move one step to the left, write a <i>Y</i>, and switch to state 9.â€) Although these machines may seem simple, they can, surprisingly enough, be used to implement any form of computation anyone has been able to dream up so far, and most computer scientists believe they encapsulate the very essence of what we think of as computing.</p>
843<p class="indent">An <i>algorithm</i> <a id="cXXX.180"></a>is a procedure, consisting of a finite set of steps, possibly including loops and conditionals, that solves a given problem. A Turing machine<a id="cXXX.181"></a> is a formal description of exactly what problem an algorithm solves,<a id="_Fn2" href="#Fn2" class="totri-footnote"><sup>2</sup></a> and the formalism is often used when discussing which problems can be solved (either at all or in reasonable time, as discussed later in this chapter and in <a href="9781484200568_Ch11.xhtml">Chapter 11</a>). For more fine-grained analysis of algorithmic efficiency, however, Turing machines are not usually the first choice. Instead of scrolling along a paper tape, we use a big chunk of memory that can be accessed <i>directly</i>. The resulting machine is commonly known as the <i>random-access machine</i>.<a id="cXXX.182"></a></p>
844<p class="indent">While the formalities of the random-access machine can get a bit complicated, we just need to know something about the limits of its capabilities so we don’t cheat in our algorithm analyses. The machine is an abstract, simplified version of a standard, single-processor computer, with the following properties:</p>
845<ul class="bulleted">
846<li>We don’t have access to any form of concurrent execution; the machine simply executes one instruction after the other.</li>
847<li>Standard, basic operations such as arithmetic, comparisons, and memory access all take constant (although possibly different) amounts of time. There are no more complicated basic operations such as sorting.</li>
848<li>One computer word (the size of a value that we can work with in constant time) is not unlimited but is big enough to address all the memory locations used to represent our problem, plus an extra percentage for our variables.</li>
849</ul>
850<p class="indent">In some cases, we may need to be more specific, but this machine sketch should do for the moment.</p>
851<p class="indent">We now have a bit of an intuition for what algorithms are, as well as the abstract hardware we’ll be running them on. The last piece of the puzzle is the notion of a <i>problem</i>. For our purposes, a problem is a relation between input and output. This is, in fact, much more precise than it might sound: A <i>relation,</i> in the mathematical sense, <a id="cXXX.183"></a>is a set of pairs—in our case, which outputs are acceptable for which inputs—and by specifying this relation, we’ve got our problem nailed down. For example, the problem of sorting may be specified as a relation between two sets, A and B, each consisting of sequences.<a id="_Fn3" href="#Fn3" class="totri-footnote"><sup>3</sup></a> Without describing how to <i>perform</i> the sorting (that would be the algorithm), we can specify which output sequences (elements of B) that would be acceptable, given an input sequence (an element of A). We would require that the result sequence consisted of the same elements as the input sequence and that the elements of the result sequence were in increasing order (each bigger than or equal to the previous). The elements of A here—that is, the inputs—are called <i>problem instances</i>; <a id="cXXX.184"></a>the relation itself is the actual problem.</p>
852<p class="indent">To get our machine to work with a problem, we need to <i>encode</i> the input as zeros and ones. We won’t worry too much about the details here, but the idea is important, because the notion of running time complexity (as described in the next section) is based on knowing how <i>big</i> a problem instance is, and that size is simply the amount of memory needed to encode it. As you’ll see, the exact nature of this encoding usually won’t matter.</p>
853<p id="Sec2" class="Heading1">Asymptotic Notation</p>
854<p class="noindent">Remember the <span class="FontName1">append</span> versus <span class="FontName1">insert</span> example in <a href="9781484200568_Ch01.xhtml">Chapter 1</a>? Somehow, adding items to the end of a list scaled better with the list size than inserting them at the front; see the nearby “Black Box†sidebar on <span class="FontName1">list</span> for an explanation. These built-in operations are both written in C, but assume for a minute that you reimplement <span class="FontName1">list.append</span> in pure Python; let’s say arbitrarily that the new version is 50 times slower than the original. Let’s also say you run your slow, pure-Python <span class="FontName1">append</span>-based version on a really slow machine, while the fast, optimized, insert-based version is run on a computer that is <i>1,000 times</i> faster. Now the speed advantage of the insert version is a factor of 50,000. You compare the two implementations by inserting 100,000 numbers. What do you think happens?</p>
855<p class="indent">Intuitively, it might seem obvious that the speedy solution should win, but its “speediness†is just a constant factor, and its running time grows faster than the “slower†one. For the example at hand, the Python-coded version running on the slower machine will, actually, finish in half the time of the other one. Let’s increase the problem size a bit, to 10 million numbers, for example. Now the Python version on the slow machine will be <i>2,000 times faster</i> than the C version on the fast machine. That’s like the difference between running for about a minute and running almost a day and a half!</p>
856<p class="indent">This distinction between <i>constant factors</i> (related to such things as general programming language performance and hardware speed, for example) and the <i>growth</i> of the running time, as problem sizes increase, is of vital importance in the study of algorithms. Our focus is on the big picture—the implementation-independent properties of a given way of solving a problem. We want to get rid of distracting details and get down to the core differences, but in order to do so, we need some formalism.</p>
857<div class="Singlethin">
858<p class="Heading4a"><b>BLACK BOX: LIST</b><a id="cXXX.185"></a></p>
859<p class="box-left">Python lists <a id="cXXX.186"></a>aren’t really lists in the traditional computer science sense of the word, and that explains the puzzle of why <span class="FontName1">append</span> is so much more efficient than <span class="FontName1">insert</span>. A classical list—a so-called linked list—is implemented as a series of <i>nodes</i>, each (except for the last) keeping a reference to the next. A simple implementation might look something like this:<a id="cXXX.187"></a></p>
860<p class="pre"><span class="FontName1">class Node:</span><br> <span class="FontName1">def __init__(self, value, next=None):</span><br> <span class="FontName1">self.value = value</span><br> <span class="FontName1">self.next = next</span></p>
861<p class="box-left1">You construct a list by specifying all the nodes:</p>
862<p class="pre"><span class="FontName1">>>> L = Node("a", Node("b", Node("c", Node("d"))))</span><br><span class="FontName1">>>> L.next.next.value</span><br><span class="FontName1">'c'</span></p>
863<p class="box-left1">This is a so-called singly linked list; each node in a doubly linked list would also keep a reference to the previous node.</p>
864<p class="box-left1">The underlying implementation of Python’s <span class="FontName1">list</span> type is a bit different. Instead of several separate nodes referencing each other, a <span class="FontName1">list</span> is basically a single, contiguous slab of memory—what is usually known as an <i>array</i>. This leads to some important differences from linked lists. For example, while iterating over the contents of the list is equally efficient for both kinds (except for some overhead in the linked list), directly accessing an element at a given index is much more efficient in an array. This is because the position of the element can be calculated, and the right memory location can be accessed directly. In a linked list, however, one would have to traverse the list from the beginning.</p>
865<p class="box-left1">The difference we’ve been bumping up against, though, has to do with insertion. In a linked list, once you know where you want to insert something, insertion is cheap; it takes roughly the same amount of time, no matter how many elements the list contains. That’s not the case with arrays: An insertion would have to move all elements that are to the right of the insertion point, possibly even moving <i>all</i> the elements to a larger array, if needed. A specific solution for <i>appending</i> is to use what’s often called <a id="cXXX.188"></a>a <i>dynamic</i> array, or <i>vector</i>.<a id="_Fn4" href="#Fn4" class="totri-footnote"><sup>4</sup></a> The idea is to allocate an array that is too big and then to reallocate it in linear time whenever it overflows. It might seem that this makes the append just as bad as the insert. In both cases, we risk having to move a large number of elements. The main difference is that it happens less often with the append. In fact, if we can ensure that we always move to an array that is bigger than the last by a fixed percentage (say 20 percent or even 100 percent), the <i>average</i> cost, amortized over many appends, is constant.</p></div>
866<p id="Sec3" class="Heading2">It’s Greek to Me!</p>
867<p class="noindent">Asymptotic notation has been in use (with some variations) since the late 19th century and is an essential tool in analyzing algorithms and data structures. The core idea is to represent the resource we’re analyzing (usually time but sometimes also memory) as a function, with the input size as its parameter. For example, we could have a program with a running time of <i>T</i>(<i>n</i>) = 2.4<i>n</i> + 7.</p>
868<p class="indent">An important question arises immediately: What are the units here? It might seem trivial whether we measure the running time in seconds or milliseconds or whether we use bits or megabytes to represent problem size. The somewhat surprising answer, though, is that not only is it trivial, but it actually will not affect our results at all. We could measure time in Jovian years and problem size in kilograms (presumably the mass of the storage medium used), and it <i>will not matter</i>. This is because our original intention of ignoring implementation details carries over to these factors as well: The asymptotic notation ignores them all! (We do normally assume that the problem size is a positive integer, though.)</p>
869<p class="indent">What we often end up doing is letting the running time be the number of times a certain basic operation is performed, while problem size is either the number of items handled (such as the number of integers to be sorted, for example) or, in some cases, the number of bits needed to encode the problem instance in some reasonable encoding.</p>
870<p class="img"><img src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_unFig02-01.jpg" alt="9781484200568_unFig02-01.jpg" width="937" height="268" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_unFig02-01.jpg"></p>
871<p class="noindent"><b><i>Forgetting.</i></b> <i>Of course, the assert doesn’t work. (</i><i><span class="FontName1"><a href="http://xkcd.com/379">http://xkcd.com/379</a></span></i><i>)</i></p>
872<div class="notepara">
873<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> Exactly how you encode your problems and solutions as bit patterns usually has little effect on the asymptotic running time, as long as you are reasonable. For example, avoid representing your numbers in the unary number system (1=1, 2=11, 3=111…).</p></div>
874<p class="indent">The asymptotic notation consists of a bunch of operators, written as Greek letters. <a id="cXXX.189"></a>The most important ones, and the only ones we’ll be using, are <i>O</i> (originally an omicron but now usually called “Big Ohâ€), Ω (omega), and Θ (theta). The definition for the <i>O</i> operator can be used as a foundation for the other two. The expression <i>O</i>(<i>g</i>), for some function <i>g</i>(<i>n</i>), represents a <i>set</i> of functions, and a function <i>f</i>(<i>n</i>) is in this set if it satisfies the following condition: There exists a natural number <i>n</i><sub>0</sub> and a positive constant <i>c</i> such that</p>
875<p class="indentt"><i>f</i>(<i>n</i>) ≤ <i>cg</i>(<i>n</i>)</p>
876<p class="noindent">for all <i>n</i> ≥ <i>n</i><sub>0</sub>. In other words, if we’re allowed to tweak the constant <i>c</i> (for example, by running the algorithms on machines of different speeds), the function <i>g</i> will eventually (that is, at <i>n</i><sub>0</sub>) grow bigger than <i>f</i>. See <a href="#Fig1" id="_Fig1">Figure 2-1</a> for an example.</p>
877<div class="Figure" id="Fig1">
878<p class="img-c"><img src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_Fig02-01.jpg" alt="9781484200568_Fig02-01.jpg" width="390" height="263" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_Fig02-01.jpg"></p>
879<p class="FigCapt"><span class="CaptNr"><a href="#_Fig1">Figure 2-1</a>.</span> For values of n greater than n<sub>0</sub>, T(n) is less than cn<sup>2</sup>, so T(n) is O(n<sup>2</sup>)</p></div>
880<p class="indent">This is a fairly straightforward and understandable definition, although it may seem a bit foreign at first. Basically, <i>O</i>(<i>g</i>) is the set of functions that <i>do not grow faster than g</i>. For example, the function <i>n</i><sup>2</sup> is in the set <i>O</i>(<i>n</i><sup>2</sup>), or, in set notation, <i>n</i><sup>2</sup> ∈ <i>O</i>(<i>n</i><sup>2</sup>). We often simply say that <i>n</i><sup>2</sup> is <i>O</i>(<i>n</i><sup>2</sup>).</p>
881<p class="indent">The fact that <i>n</i><sup>2</sup> does not grow faster than itself is not particularly interesting. More useful, perhaps, is the fact that neither 2.4<i>n</i><sup>2</sup> + 7 nor the linear function <i>n</i> does. That is, we have both</p>
882<p class="indent">2.4<i>n</i><sup>2</sup> + 7 ∈ <i>O</i>(<i>n</i><sup>2</sup>)</p>
883<p class="noindent">and</p>
884<p class="indent"><i>n</i> ∈ <i>O</i>(<i>n</i><sup>2</sup>).</p>
885<p class="indentt">The first example shows us that we are now able to represent a function without all its bells and whistles; we can drop the 2.4 and the 7 and simply express the function as <i>O</i>(<i>n</i><sup>2</sup>), which gives us just the information we need. The second shows us that <i>O</i> can be used to express loose limits as well: Any function that is better (that is, doesn’t grow faster) than <i>g</i> can be found in <i>O</i>(<i>g</i>).</p>
886<p class="indent">How does this relate to our original example? Well, the thing is, even though we can’t be sure of the details (after all, they depend on both the Python version and the hardware you’re using), we can describe the operations asymptotically: The running time of appending <i>n</i> numbers to a Python list is <i>O</i>(<i>n</i>), while inserting <i>n</i> numbers at its beginning is <i>O</i>(<i>n</i><sup>2</sup>).</p>
887<p class="indent">The other two, Ω and Θ, are just variations of <i>O</i>. Ω is its complete opposite: A function <i>f</i> is in Ω(<i>g</i>) if it satisfies the following condition: There exists a natural number <i>n</i><sub>0</sub> and a positive constant <i>c</i> such that</p>
888<p class="indent"><i>f</i>(<i>n</i>) ≥ <i>cg</i>(<i>n</i>)</p>
889<p class="noindent">for all <i>n</i> ≥ <i>n</i><sub>0</sub>. So, where <i>O</i> forms a so-called asymptotic upper bound, Ω forms an asymptotic <i>lower</i> bound.</p>
890<div class="notepara">
891<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> Our first two asymptotic operators, <i>O</i> and Ω, are each other’s inverses: If <i>f</i> is <i>O</i>(<i>g</i>), then <i>g</i> is Ω(<i>f</i>). Exercise 2-3 asks you to show this.</p></div>
892<p class="indent">The sets formed by Θ are simply intersections of the other two, that is, Θ(<i>g</i>) = <i>O</i>(<i>g</i>) ∩ Ω(<i>g</i>). In other words, a function <i>f</i> is in Θ(<i>g</i>) if it satisfies the following condition: There exists a natural number <i>n</i><sub>0</sub> and <i>two</i> positive constants <i>c</i><sub>1</sub> and <i>c</i><sub>2</sub> such that</p>
893<p class="indent"><i>c</i><sub>1</sub><i>g</i>(<i>n</i>) ≤ <i>f</i>(<i>n</i>) ≤ <i>c</i><sub>2</sub><i>g</i>(<i>n</i>)</p>
894<p class="noindent">for all <i>n</i> ≥ <i>n</i><sub>0</sub>. This means that <i>f</i> and <i>g</i> have <i>the same asymptotic growth</i>. For example, 3<i>n</i><sup>2</sup> + 2 is Θ(<i>n</i><sup>2</sup>), but we could just as well write that <i>n</i><sup>2</sup> is Θ(3<i>n</i><sup>2</sup> + 2). By supplying an upper bound and a lower bound at the same time, the Θ operator is the most informative of the three, and I will use it when possible.</p>
895<p id="Sec4" class="Heading2">Rules of the Road</p>
896<p class="noindent">While the definitions of the asymptotic operators can be a bit tough to use directly, they actually lead to some of the simplest math ever. You can drop all multiplicative and additive constants, as well as all other “small parts†of your function, which simplifies things a lot.</p>
897<p class="indent">As a first step in juggling these asymptotic expressions, let’s take a look at some typical asymptotic classes, or <i>orders</i>. <a href="#Tab1" id="_Tab1">Table 2-1</a> lists some of these, along with their names and some typical algorithms with these asymptotic running times, also sometimes called running-time <i>complexities</i>. (If your math is a little rusty, you could take a look at the sidebar “A Quick Math Refresher†<a id="cXXX.190"></a>later in the chapter.) An important feature of this table is that the complexities have been ordered so that each row <i>dominates</i> the previous one: If <i>f</i> is found higher in the table than <i>g</i>, then <i>f</i> is <i>O</i>(<i>g</i>).<a id="_Fn5" href="#Fn5" class="totri-footnote"><sup>5</sup></a></p>
898<div class="Table" id="Tab1">
899<p class="TabCapt"><span class="CaptNr"><a href="#_Tab1">Table 2-1</a>.</span> Common Examples of Asymptotic Running Times</p>
900<table border="0">
901<thead>
902<tr class="header">
903<th valign="top"><p class="tab-left">Complexity</p></th>
904<th valign="top"><p class="tab-left">Name</p></th>
905<th valign="top"><p class="tab-left">Examples, Comments</p></th></tr></thead>
906<tbody>
907<tr class="noclass">
908<td valign="top"><p class="tab-leftt">Θ(1)</p></td>
909<td valign="top"><p class="tab-leftt">Constant</p></td>
910<td valign="top"><p class="tab-leftt">Hash table lookup and modification (see “Black Box†sidebar on <span class="FontName1">dict</span>).</p></td></tr>
911<tr class="noclass">
912<td valign="top"><p class="tab-leftt">Θ(lg <i>n</i>)</p></td>
913<td valign="top"><p class="tab-leftt">Logarithmic</p></td>
914<td valign="top"><p class="tab-leftt">Binary search (see <a href="9781484200568_Ch06.xhtml">Chapter 6</a>). Logarithm base unimportant.<a id="_Fn7" href="#Fn7" class="totri-footnote"><sup>7</sup></a></p></td></tr>
915<tr class="noclass">
916<td valign="top"><p class="tab-leftt">Θ(<i>n</i>)</p></td>
917<td valign="top"><p class="tab-leftt">Linear</p></td>
918<td valign="top"><p class="tab-leftt">Iterating over a list.</p></td></tr>
919<tr class="noclass">
920<td valign="top"><p class="tab-leftt">Θ(<i>n</i> lg <i>n</i>)</p></td>
921<td valign="top"><p class="tab-leftt">Loglinear</p></td>
922<td valign="top"><p class="tab-leftt">Optimal sorting of arbitrary values (see <a href="9781484200568_Ch06.xhtml">Chapter 6</a>). Same as Θ(lg <i>n</i>!).</p></td></tr>
923<tr class="noclass">
924<td valign="top"><p class="tab-leftt">Θ(<i>n</i><sup>2</sup>)</p></td>
925<td valign="top"><p class="tab-leftt">Quadratic</p></td>
926<td valign="top"><p class="tab-leftt">Comparing <i>n</i> objects to each other (see <a href="9781484200568_Ch03.xhtml">Chapter 3</a>).</p></td></tr>
927<tr class="noclass">
928<td valign="top"><p class="tab-leftt">Θ(<i>n</i><sup>3</sup>)</p></td>
929<td valign="top"><p class="tab-leftt">Cubic</p></td>
930<td valign="top"><p class="tab-leftt">Floyd and Warshall’s algorithms (see <a href="9781484200568_Ch08.xhtml">Chapters 8</a> and <a href="9781484200568_Ch09.xhtml" class="totri-footnote">9</a>).</p></td></tr>
931<tr class="noclass">
932<td valign="top"><p class="tab-leftt"><i>O</i>(<i>nk</i>)</p></td>
933<td valign="top"><p class="tab-leftt">Polynomial</p></td>
934<td valign="top"><p class="tab-leftt"><i>k</i> nested for loops over <i>n</i> (if <i>k</i> is a positive integer). For any constant <i>k</i> > 0.</p></td></tr>
935<tr class="noclass">
936<td valign="top"><p class="tab-leftt">Ω(<i>kn</i>)</p></td>
937<td valign="top"><p class="tab-leftt">Exponential</p></td>
938<td valign="top"><p class="tab-leftt">Producing every subset of <i>n</i> items (<i>k</i> = 2; see <a href="9781484200568_Ch03.xhtml">Chapter 3</a>). Any <i>k</i> > 1.</p></td></tr>
939<tr class="noclass">
940<td valign="top"><p class="tab-leftt">Θ(<i>n</i>!)</p></td>
941<td valign="top"><p class="tab-leftt">Factorial</p></td>
942<td valign="top"><p class="tab-leftt">Producing every ordering of <i>n</i> values.</p></td></tr></tbody></table></div>
943<div class="notepara">
944<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> Actually, the relationship is even stricter: <i>f</i> is <i>o</i>(<i>g</i>), where the “Little Oh†is a stricter version if “Big Oh.†Intuitively, instead of “doesn’t grow faster than,†it means “grows slower than.†Formally, it states that <i>f</i>(<i>n</i>)/<i>g</i>(<i>n</i>) converges to zero as <i>n</i> grows to infinity. You don’t really need to worry about this, though.</p></div>
945<p class="indent"><i>Any</i> polynomial (that is, with any power <i>k</i> > 0, even a fractional one) dominates <i>any</i> logarithm (that is, with any base), and <i>any</i> exponential (with any base <i>k</i> > 1) dominates <i>any</i> polynomial (see Exercises 2-5 and 2-6). Actually, all logarithms are asymptotically equivalent—they differ only by constant factors (see Exercise 2-4). Polynomials and exponentials, however, have different asymptotic growth depending on their exponents or bases, respectively. So, <i>n</i><sup>5</sup> grows faster than <i>n</i><sup>4</sup>, and 5<i>n</i> grows faster than 4<i>n</i>.</p>
946<p class="indent">The table primarily uses Θ notation, but the terms <i>polynomial</i> and <i>exponential</i> are a bit special, because of the role they play in separating <i>tractable</i> (“solvableâ€) problems from <i>intractable</i> (“unsolvableâ€) ones, as discussed in <a href="9781484200568_Ch11.xhtml">Chapter 11</a>. Basically, an algorithm with a polynomial running time is considered feasible, while an exponential one is generally useless. Although this isn’t entirely true in practice, (Θ(<i>n</i><sup>100</sup>) is no more practically useful than Θ(2<i>n</i>)); it is, in many cases, a useful distinction.<a id="_Fn6" href="#Fn6" class="totri-footnote"><sup>6</sup></a> Because of this division, any running time in <i>O</i>(<i>nk</i>), for any <i>k</i> > 0, is called polynomial, even though the limit may not be tight. For example, even though binary search (explained in the “Black Box†sidebar on <span class="FontName1">bisect</span> in <a href="9781484200568_Ch06.xhtml">Chapter 6</a>) has a running time of Θ(lg <i>n</i>), it is still said to be a polynomial-time (or just polynomial) algorithm. Conversely, any running time in Ω(<i>kn</i>)—even one that is, say, Θ(<i>n</i>!)—is said to be exponential.<a id="cXXX.191"></a></p>
947<p class="indent">Now that we have an overview of some important orders of growth, we can formulate two simple rules:</p>
948<ul class="bulleted">
949<li>In a sum, only the dominating summand matters.
950<p class="noindent">For example, Θ(<i>n</i><sup>2</sup> + <i>n</i><sup>3</sup> + 42) = Θ(<i>n</i><sup>3</sup>).</p></li>
951<li>In a product, constant factors don’t matter.
952<p class="noindent">For example, Θ(4.2<i>n</i> lg <i>n</i>) = Θ(<i>n</i> lg <i>n</i>).</p></li>
953</ul>
954<p class="indent">In general, we try to keep the asymptotic expressions <a id="cXXX.192"></a>as simple as possible, eliminating as many unnecessary parts as we can. For <i>O</i> and Ω, there is a third principle we usually follow:</p>
955<ul class="bulleted">
956<li>Keep your upper or lower limits tight.
957<p class="noindent">In other words, we try to make the upper limits low and the lower limits high. For example, although <i>n</i><sup>2</sup> might technically be <i>O</i>(<i>n</i><sup>3</sup>), we usually prefer the tighter limit, <i>O</i>(<i>n</i><sup>2</sup>). In most cases, though, the best thing is to simply use Θ.</p></li></ul>
958<p class="indent">A practice that can make asymptotic expressions even more useful is that of using them <i>instead of actual values</i>, in arithmetic expressions. Although this is technically incorrect (each asymptotic expression yields a set of functions, after all), it is quite common. For example, Θ(<i>n</i><sup>2</sup>) + Θ(<i>n</i><sup>3</sup>) simply means <i>f</i> + <i>g</i>, for some (unknown) functions <i>f</i> and <i>g</i>, where <i>f</i> is Θ(<i>n</i><sup>2</sup>) and <i>g</i> is Θ(<i>n</i><sup>3</sup>). Even though we cannot find the exact sum <i>f</i> + <i>g</i>, because we don’t know the exact functions, we <i>can</i> find the asymptotic expression to cover it, as illustrated by the following two “bonus rules”:</p>
959<ul class="bulleted">
960<li>Θ(<i>f</i>) + Θ(<i>g</i>) = Θ(<i>f</i> + <i>g</i>)</li>
961<li>Θ(<i>f</i>) · Θ(<i>g</i>) = Θ(<i>f</i> · <i>g</i>)</li>
962</ul>
963<p class="indent">Exercise 2-8 asks you to show that these are correct.</p>
964<p id="Sec5" class="Heading2">Taking the Asymptotics for a Spin</p>
965<p class="noindent">Let’s take a look at some simple programs and see whether we can determine their asymptotic running times. To begin with, let’s consider programs where the (asymptotic) running time varies only with the problem size, not the specifics of the instance in question. (The next section deals with what happens if the actual contents of the instances matter to the running time.) This means, for example, that <span class="FontName1">if</span> statements <a id="cXXX.193"></a>are rather irrelevant for now. What’s important is loops, in addition to straightforward code blocks. Function calls don’t really complicate things; just calculate the complexity for the call and insert it at the right place.</p>
966<div class="notepara">
967<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> There is one situation where function calls can trip us up: when the function is recursive. This case is dealt with in <a href="9781484200568_Ch03.xhtml">Chapters 3</a> and <a href="9781484200568_Ch04.xhtml" class="totri-footnote">4</a>.</p></div>
968<p class="indent">The loop-free case is simple: we are executing one statement before another, so their complexities are added. Let’s say, for example, that we know that for a list of size <i>n</i>, a call to append is Θ(1), while a call to insert at position 0 is Θ(<i>n</i>). Consider the following little two-line program fragment, where <span class="FontName1">nums</span> is a list of size <i>n</i>:</p>
969<pre><span class="FontName1">nums.append(1)</span><br><span class="FontName1">nums.insert(0,2)</span></pre>
970<p class="indent">We know that the first line takes constant time. By the time we get to the second line, the list size has changed and is now <i>n</i> + 1. This means that the complexity of the second line is Θ(<i>n</i> + 1), which is the same as Θ(<i>n</i>). Thus, the total running time is the sum of the two complexities, Θ(1) + Θ(<i>n</i>) = Θ(<i>n</i>).</p>
971<p class="indent">Now, let’s consider some simple loops. Here’s a plain <span class="FontName1">for</span> loop over a sequence with <i>n</i> elements (numbers, say; for example, <span class="FontName1">seq = range(n)</span>):<a id="_Fn8" href="#Fn8" class="totri-footnote"><sup>8</sup></a></p>
972<pre><span class="FontName1">s = 0</span><br><span class="FontName1">for x in seq:</span><br> <span class="FontName1">s += x</span></pre>
973<p class="indent">This is a straightforward implementation of what the <span class="FontName1">sum</span> function <a id="cXXX.194"></a>does: It iterates over <span class="FontName1">seq</span> and adds the elements to the starting value in <span class="FontName1">s</span>. This performs a single constant-time operation (<span class="FontName1">s += x</span>) for each of the <i>n</i> elements of <span class="FontName1">seq</span>, which means that its running time is linear, or Θ(<i>n</i>). Note that the constant-time initialization (<span class="FontName1">s = 0</span>) is dominated by the loop here.</p>
974<p class="indent">The same logic applies to the “camouflaged” loops we find in list (or set or dict) comprehensions and generator expressions, for example. The following list comprehension also has a linear running-time complexity:</p>
975<pre><span class="FontName1">squares = [x**2 for x in seq]</span></pre>
976<p class="indent">Several built-in functions <a id="cXXX.195"></a>and methods also have “hidden” loops in them. This generally applies to any function or method that deals with every element of a container, such as <span class="FontName1">sum</span> or <span class="FontName1">map</span>, for example.</p>
977<p class="indent">Things get a little bit (but not a lot) trickier when we start nesting loops. Let’s say we want to sum up all possible products of the elements in <span class="FontName1">seq</span>; here’s an example:</p>
978<pre><span class="FontName1">s = 0</span><br><span class="FontName1">for x in seq:</span><br> <span class="FontName1">for y in seq:</span><br> <span class="FontName1">s += x*y</span></pre>
979<p class="indent">One thing worth noting about this implementation is that each product will be added twice. If <span class="FontName1">42</span> and <span class="FontName1">333</span> are both in <span class="FontName1">seq</span>, for example, we’ll add both <span class="FontName1">42*333</span> and <span class="FontName1">333*42</span>. That doesn’t really affect the running time; it’s just a constant factor.</p>
980<p class="indent">What’s the running time now? The basic rule is easy: The complexities of code blocks executed one after the other are just added. The complexities of nested loops are <i>multiplied</i>. The reasoning is simple: For each round of the outer loop, the inner one is executed in full. In this case, that means “linear times linear,” which is quadratic. In other words, the running time is Θ(<i>n</i>·<i>n</i>) = Θ(<i>n</i><sup>2</sup>). Actually, this multiplication rule means that for further levels of nesting, we will just increment the power (that is, the exponent). Three nested linear loops give us Θ(<i>n</i><sup>3</sup>), four give us Θ(<i>n</i><sup>4</sup>), and so forth.</p>
981<p class="indent">The sequential and nested cases <a id="cXXX.196"></a>can be mixed, of course. Consider the following slight extension:</p>
982<pre><span class="FontName1">s = 0</span><br><span class="FontName1">for x in seq:</span><br> <span class="FontName1">for y in seq:</span><br> <span class="FontName1">s += x*y</span><br> <span class="FontName1">for z in seq:</span><br> <span class="FontName1">for w in seq:</span><br> <span class="FontName1">s += x-w</span></pre>
983<p class="indent">It may not be entirely clear what we’re computing here (I certainly have no idea), but we should still be able to find the running time, using our rules. The <span class="FontName1">z</span>-loop is run for a linear number of iterations, and it contains a linear loop, so the total complexity there is quadratic, or Θ(<i>n</i><sup>2</sup>). The <span class="FontName1">y</span>-loop is clearly Θ(<i>n</i>). This means that the code block inside the <span class="FontName1">x</span>-loop is Θ(<i>n</i> + <i>n</i><sup>2</sup>). This entire block is executed for each round of the <span class="FontName1">x</span>-loop, which is run <i>n</i> times. We use our multiplication rule and get Θ(<i>n</i>(<i>n</i> + <i>n</i><sup>2</sup>)) = Θ(<i>n</i><sup>2</sup> + <i>n</i><sup>3</sup>) = Θ(<i>n</i><sup>3</sup>), that is, cubic. We could arrive at this conclusion even more easily by noting that the <span class="FontName1">y</span>-loop is dominated by the <span class="FontName1">z</span>-loop and can be ignored, giving the inner block a quadratic running time. “Quadratic times linear” gives us cubic.</p>
984<p class="indent">The loops need not all be repeated Θ(<i>n</i>) times, of course. Let’s say we have two sequences, <span class="FontName1">seq1</span> and <span class="FontName1">seq2</span>, where <span class="FontName1">seq1</span> contains <i>n</i> elements and <span class="FontName1">seq2</span> contains <i>m</i> elements. The following code will then have a running time of Θ(<i>nm</i>).</p>
985<pre><span class="FontName1">s = 0</span><br><span class="FontName1">for x in seq1:</span><br> <span class="FontName1">for y in seq2:</span><br> <span class="FontName1">s += x*y</span></pre>
986<p class="indent">In fact, the inner loop need not even be executed the <i>same number of times</i> for each iteration of the outer loop. This is where things can get a bit fiddly. Instead of just multiplying two iteration counts, <a id="cXXX.197"></a>such as <i>n</i> and <i>m</i> in the previous example, we now have to <i>sum</i> the iteration counts of the inner loop. What that means should be clear in the following example:</p>
987<pre><span class="FontName1">seq1 = [[0, 1], [2], [3, 4, 5]]</span><br><span class="FontName1">s = 0</span><br><span class="FontName1">for seq2 in seq1:</span><br> <span class="FontName1">for x in seq2:</span><br> <span class="FontName1">s += x</span></pre>
988<p class="indent">The statement <span class="FontName1">s += x</span> is now performed 2 + 1 + 3 = 6 times. The length of <span class="FontName1">seq2</span> gives us the running time of the inner loop, but because it varies, we cannot simply multiply it by the iteration count of the outer loop. A more realistic example is the following, which revisits our original example—multiplying every combination of elements from a sequence:</p>
989<pre><span class="FontName1">s = 0</span><br><span class="FontName1">n = len(seq)</span><br><span class="FontName1">for i in range(n-1):</span><br> <span class="FontName1">for j in range(i+1, n):</span><br> <span class="FontName1">s += seq[i] * seq[j]</span></pre>
990<p class="indent">To avoid multiplying objects <a id="cXXX.198"></a>with themselves or adding the same product twice, the outer loop now avoids the last item, and the inner loop iterates over the items only <i>after</i> the one currently considered by the outer one. This is actually a lot less confusing than it might seem, but finding the complexity here requires a little bit more care. This is one of the important cases of counting that is covered in the next chapter.<a id="_Fn9" href="#Fn9" class="totri-footnote"><sup>9</sup></a></p>
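<p class="indent">If you’d like a quick empirical check before we get to those counting techniques, you can simply tally the iterations yourself. Here is a minimal sketch (the sample sizes are arbitrary, and the counter just stands in for the real work); for each size, the tally matches <i>n</i>(<i>n</i>–1)/2, which hints at a quadratic running time:</p>
<pre><span class="FontName1">for n in (10, 100, 1000):  # a few illustrative problem sizes</span><br> <span class="FontName1">count = 0</span><br> <span class="FontName1">for i in range(n-1):</span><br>  <span class="FontName1">for j in range(i+1, n):</span><br>   <span class="FontName1">count += 1  # stands in for s += seq[i] * seq[j]</span><br> <span class="FontName1">print(n, count, n*(n-1)//2)  # the last two columns should agree</span></pre>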
991<p id="Sec6" class="Heading2">Three Important Cases</p>
992<p class="noindent">Until now, we have assumed that the running time is completely deterministic and dependent only on input size, not on the actual contents of the input. That is not particularly realistic, however. For example, if you were to construct a sorting algorithm, you might start like this:</p>
993<pre><span class="FontName1">def sort_w_check(seq):</span><br> <span class="FontName1">n = len(seq)</span><br> <span class="FontName1">for i in range(n-1):</span><br> <span class="FontName1">if seq[i] > seq[i+1]:</span><br> <span class="FontName1">break</span><br> <span class="FontName1">else:</span><br> <span class="FontName1">return</span><br> <span class="FontName1">...</span></pre>
994<p class="indent">A check is performed before getting into the actual sorting: If the sequence is already sorted, the function simply returns.</p>
995<div class="notepara">
996<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> The optional <span class="FontName1">else</span> clause on a loop in Python is executed if the loop has not been ended prematurely by a <span class="FontName1">break</span> statement.</p></div>
997<p class="indent">This means that no matter how inefficient our main sorting is, the running time will always be linear if the sequence is already sorted. No sorting algorithm can achieve linear running time in general, meaning that this “best-case scenario” is an anomaly—and all of a sudden, we can’t reliably predict the running time anymore. The solution to this quandary is to be more specific. Instead of talking about a problem in general, we can specify the input more narrowly, and we often talk about one of three important cases:</p>
998<ul class="bulleted">
999<li><b>The best case.</b> <a id="cXXX.199"></a>This is the running time you get when the input is optimally suited to your algorithm. For example, if the input sequence to <span class="FontName1">sort_w_check</span> were sorted, we would get the best-case running time, which would be linear.</li>
1000<li><b>The worst case.</b> <a id="cXXX.200"></a>This is usually the most useful case—the worst possible running time. This is useful because we normally want to be able to give some guarantees about the efficiency of our algorithm, and this is the best guarantee we can give in general.</li>
1001<li><b>The average case.</b> <a id="cXXX.201"></a>This is a tricky one, and I’ll avoid it most of the time, but in some cases it can be useful. Simply put, it’s the expected value of the running time, for random input, with a given probability distribution.</li>
1002</ul>
1003<p class="indent">In many of the algorithms we’ll be working with, these three cases have the same complexity. When they don’t, we’ll often be working with the worst case. Unless this is stated explicitly, however, no assumptions can be made about which case is being studied. In fact, we may not be restricting ourselves to a single kind of input <i>at all</i>. What if, for example, we wanted to describe the running time <a id="cXXX.202"></a>of <span class="FontName1">sort_w_check</span> <i>in general</i>? This is still possible, but we can’t be quite as precise.</p>
1004<p class="indent">Let’s say the main sorting algorithm we’re using after the check is loglinear; that is, it has a running time of Θ(<i>n</i> lg <i>n</i>). This is typical and, in fact, optimal in the general case for sorting algorithms. The <i>best-case</i> running time of our algorithm is then Θ(<i>n</i>), when the check uncovers a sorted sequence, and the <i>worst-case</i> running time is Θ(<i>n</i> lg <i>n</i>). If we want to give a description of the running time in general, however—for any kind of input—we cannot use the Θ notation at all. There is no single function describing the running time; different types of inputs have different running time functions, and these have different asymptotic complexity, meaning we can’t sum them up in a single Θ expression.</p>
1005<p class="indent">The solution? Instead of the “twin bounds†of Θ, we supply only an upper or lower limit, using <i>O</i> or Ω. We can, for example, say that <span class="FontName1">sort_w_check</span> has a running time of <i>O</i>(<i>n</i> lg <i>n</i>). This covers both the best and worst cases. Similarly, we could say it has a running time of Ω(<i>n</i>). Note that these limits are as tight as we can make them.</p>
1006<div class="notepara">
1007<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> It is perfectly acceptable to use either of our asymptotic operators to describe any of the three cases discussed here. We could very well say that the worst-case running time of <span class="FontName1">sort_w_check</span> is Ω(<i>n</i> lg <i>n</i>), for example, or that the best case is <i>O</i>(<i>n</i>).</p></div>
1008<p id="Sec7" class="Heading2">Empirical Evaluation of Algorithms<a id="cXXX.02d"></a></p>
1009<p class="noindent">The main focus of this book is <i>algorithm design</i> and its close relative, <i>algorithm analysis</i>. There is, however, another important discipline of algorithmics that can be of vital importance when building real-world systems, and that is <i>algorithm engineering</i>, the art of efficiently <i>implementing</i> algorithms. In a way, algorithm design can be seen as a way of achieving low asymptotic running time by designing efficient algorithms, while algorithm engineering is focused on reducing the hidden constants in that asymptotic complexity.</p>
1010<p class="indent">Although I may offer some tips on algorithm engineering in Python here and there, it can be hard to predict exactly which tweaks and hacks will give you the best performance for the specific problems you’re working on—or, indeed, for your hardware or version of Python. These are exactly the kind of quirks asymptotics are designed to avoid. And in some cases, such tweaks and hacks may not be needed at all, because your program may be fast enough as it is. The most useful thing you can do in many cases is simply to try and see. If you have a tweak you <i>think</i> will improve your program, try it! Implement the tweak, and run some experiments. Is there an improvement? And if the tweak makes your code less readable and the improvement is small, is it really worth it?</p>
1011<div class="notepara">
1012<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> This section is about evaluating your programs, not about the engineering itself. For some hints on speeding up Python programs, see <a href="9781484200568_AppA.xhtml">Appendix A</a>.</p></div>
1013<p class="indent">While there are theoretical aspects of so-called experimental algorithmics—that is, experimentally evaluating algorithms and their implementations—that are beyond the scope of this book, I’ll give you some practical starting tips that should get you pretty far.</p>
1014<div class="notepara">
1015<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip 1</b> If possible, don’t worry about it.</p></div>
1016<p class="indent">Worrying about asymptotic complexity can be important. Sometimes, it’s the difference between a <i>solution</i> and what is, in practice, a <i>non</i>solution. <a id="cXXX.203"></a>Constant factors in the running time, however, are often not all that critical. Try a straightforward implementation of your algorithm first and see whether that’s good enough. Actually, you might even try a naïve algorithm first; to quote programming guru Ken Thompson, “When in doubt, use brute force.” Brute force, in algorithmics, generally refers to a straightforward approach that just tries every possible solution, running time be damned! If it works, it works.</p>
1017<div class="notepara">
1018<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip 2</b> For timing things, use <span class="FontName1">timeit</span>.</p></div>
1019<p class="indent">The <span class="FontName1">timeit</span> module <a id="cXXX.204"></a>is designed to perform relatively reliable timings. Although getting truly trustworthy results, such as those you’d publish in a scientific paper, is a lot of work, <span class="FontName1">timeit</span> can help you get “good enough in practice” timings easily. Here’s an example:</p>
1020<pre><span class="FontName1">>>> import timeit</span><br><span class="FontName1">>>> timeit.timeit("x = 2 + 2")</span><br><span class="FontName1">0.034976959228515625</span><br><span class="FontName1">>>> timeit.timeit("x = sum(range(10))")</span><br><span class="FontName1">0.92387008666992188</span></pre>
1021<p class="indent">The actual timing values you get will quite certainly not be exactly like mine. If you want to time a function (which could, for example, be a test function wrapping parts of your code), it may be even easier to use <span class="FontName1">timeit</span> from the shell command line, using the <span class="FontName1">-m</span> switch:</p>
1022<pre><span class="FontName1">$ python<a id="cXXX.02g"></a> -m timeit -s"import mymodule as m" "m.myfunction()"</span></pre>
1023<p class="indent">There is one thing you should be careful about when using <span class="FontName1">timeit</span>. Avoid side effects that will affect repeated execution. The <span class="FontName1">timeit</span> function will run your code multiple times for increased precision, and if earlier executions affect later runs, you are probably in trouble. For example, if you time something like <span class="FontName1">mylist.sort()</span>,<a id="cXXX.02e"></a> <a id="cXXX.205"></a>the list would get sorted only the <i>first</i> time. The other thousands of times the statement is run, the list will already be sorted, making your timings unrealistically low. The same caution would apply to anything involving generators or iterators that could be exhausted, for example. You can find more details on this module and how it works in the standard library documentation.<a id="_Fn10" href="#Fn10"><sup>10</sup></a></p>
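<p class="indent">One way around this, sketched here under the assumption that the sorting itself is what you want to time, is to rebuild the data in the <span class="FontName1">setup</span> argument and use <span class="FontName1">number=1</span>, so that every measurement sorts a fresh, unsorted list. The setup code is not included in the timing, and the list size here is arbitrary:</p>
<pre><span class="FontName1">import timeit</span><br><span class="FontName1">setup = "import random; data = [random.random() for _ in range(10000)]"</span><br><span class="FontName1"># With number=1, the setup runs before each of the five measurements,</span><br><span class="FontName1"># so data is unsorted every time the statement is timed</span><br><span class="FontName1">times = timeit.repeat("data.sort()", setup=setup, number=1, repeat=5)</span><br><span class="FontName1">print(min(times))</span></pre>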
1024<div class="notepara">
1025<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip 3</b> To find bottlenecks, use a profiler.</p></div>
1026<p class="indent">It is a common practice to guess which part of your program needs optimization. Such guesses are quite often wrong. Instead of guessing wildly, let a profiler find out for you! Python comes with a few profiler variants, but the recommended one is cProfile. It’s as easy to use as <span class="FontName1">timeit</span> but gives more detailed information about where the execution time is spent. If your main function is <span class="FontName1">main</span>, you can use the profiler to run your program as follows:</p>
1027<pre><span class="FontName1">import cProfile</span><br><span class="FontName1">cProfile.run('main()')</span></pre>
1028<p class="indent">This should print out timing results about the various functions in your program. If the cProfile module isn’t available on your system, use <span class="FontName1">profile</span> instead. Again, more information is available in the library reference. If you’re not so interested in the details of your <i>implementation</i> but just want to empirically examine the behavior of your <i>algorithm</i> on a given problem instance, <a id="cXXX.206"></a>the <span class="FontName1">trace</span> module in the standard library can be useful—it can be used to count the number of times each statement is executed. You could even visualize the calls of your code using a tool such as <a id="cXXX.207"></a>Python Call Graph<a id="cXXX.02h"></a><a id="cXXX.208"></a>.<a id="_Fn11" href="#Fn11"><sup>11</sup></a></p>
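<p class="indent">Both tools can also be run from the shell command line, without modifying your program; <span class="FontName1">myscript.py</span> below is just a placeholder for your own script. The first command prints a profile sorted by cumulative time, and the second counts how many times each line is executed (writing the counts to <span class="FontName1">.cover</span> files):</p>
<pre><span class="FontName1">$ python -m cProfile -s cumulative myscript.py</span><br><span class="FontName1">$ python -m trace --count myscript.py</span></pre>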
1029<div class="notepara">
1030<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip 4</b> Plot your results.</p></div>
1031<p class="indent">Visualization can be a great tool when figuring things out. Two common plots for looking at performance are <i>graphs</i>,<a id="_Fn12" href="#Fn12"><sup>12</sup></a> for example, of problem size versus running time, and <i>box plots</i>, showing the distribution of running times. See <a href="#Fig2" id="_Fig2">Figure 2-2</a> for examples of these. A great package for plotting things with Python is <span class="FontName1">matplotlib</span> (available from <span class="FontName1"><a href="http://matplotlib.org">http://matplotlib.org</a></span>).</p>
1032<div class="Figure" id="Fig2">
1033<p class="img-c"><img src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_Fig02-02.jpg" alt="9781484200568_Fig02-02.jpg" width="821" height="317" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_Fig02-02.jpg"></p>
1034<p class="FigCapt"><span class="CaptNr"><a href="#_Fig2">Figure 2-2</a>.</span> Visualizing <a id="cXXX.209"></a>running times for programs A, B, and C and problem sizes 10–50</p></div>
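<p class="indent">As a minimal sketch of the first kind of plot, assuming you have <span class="FontName1">matplotlib</span> installed, the following times a simple statement for a few problem sizes and then graphs size against running time. The statement, sizes, and repetition count are just placeholders for your own experiment:</p>
<pre><span class="FontName1">import timeit</span><br><span class="FontName1">from matplotlib import pyplot as plt</span><br><span class="FontName1">sizes = list(range(10, 51, 10))  # problem sizes 10-50, as in Figure 2-2</span><br><span class="FontName1">times = [timeit.timeit("sum(range(%d))" % n, number=10000) for n in sizes]</span><br><span class="FontName1">plt.plot(sizes, times, 'o-')  # problem size versus running time</span><br><span class="FontName1">plt.xlabel("Problem size")</span><br><span class="FontName1">plt.ylabel("Running time (s)")</span><br><span class="FontName1">plt.show()</span></pre>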
1035<div class="notepara">
1036<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip 5</b> Be careful when drawing conclusions based on timing comparisons.</p></div>
1037<p class="indent">This tip is a bit vague, but that’s because there are so many pitfalls when drawing conclusions about which way is better, based on timing experiments. First, any differences you observe may be because of random variations. If you’re using a tool such as <span class="FontName1">timeit</span>, this is less of a risk, because it repeats the statement to be timed many times (and even runs the whole experiment multiple times, keeping the best run). Still, there will be random variations, and if the difference between two implementations isn’t greater than what can be expected from this randomness, you can’t really conclude that they’re different. (You can’t conclude that they <i>aren’t</i>, either.)</p>
1038<div class="notepara">
1039<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> If you need to draw a conclusion when it’s a close call, you can use the statistical technique of <i>hypothesis testing</i>. However, for practical purposes, if the difference is so small you’re not sure, it probably doesn’t matter which implementation you choose, so go with your favorite.</p></div>
1040<p class="indent">This problem is compounded if you’re comparing more than two implementations. The number of pairs to compare increases <i>quadratically</i> with the number of versions, as explained in <a href="9781484200568_Ch03.xhtml">Chapter 3</a>, <i>drastically</i> increasing the chance that at least two of the versions will appear freakishly different, just by chance. (This is what’s called the <i>problem of multiple comparisons</i>.) There are <a id="cXXX.210"></a>statistical solutions to this problem, but the easiest practical way around it is to repeat the experiment with the two implementations in question. Maybe even a couple of times. Do they still look different?</p>
1041<p class="indent">Second, there are issues when comparing averages. At least, you should stick to comparing averages of actual timings. A common practice to get more meaningful numbers when performing timing experiments is to normalize the running time of each program, dividing it by the running time of some standard, simple algorithm. This can indeed be useful but can in some cases make your results less than meaningful. See the paper “How not to lie with statistics: The correct way to summarize benchmark results” by Fleming and Wallace for a few pointers. For some other perspectives, you could read Bast and Weber’s “Don’t compare averages,” or the more recent paper by Citron et al., “The harmonic or geometric mean: does it really matter?”</p>
1042<p class="indent">Third, your conclusions may not generalize. Similar experiments run on other problem instances or other hardware, for example, might yield different results. If others are to interpret or reproduce your experiments, it’s important that you <i>thoroughly document how you performed them</i>.</p>
1043<div class="notepara">
1044<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip 6</b> Be careful when drawing conclusions about asymptotics from experiments.</p></div>
1045<p class="indent">If you want to say something conclusively about the asymptotic behavior of an algorithm, you need to analyze it, as described earlier in this chapter. Experiments can give you hints, but they are by their nature finite, and asymptotics deal with what happens for arbitrarily large data sizes. On the other hand, unless you’re working in theoretical computer science, the <i>purpose</i> of asymptotic analysis is to say something about the behavior of the algorithm when implemented and run on actual problem instances, meaning that experiments <i>should</i> be relevant.</p>
1046<p class="indent">Suppose you <i>suspect</i> that an algorithm has a quadratic running time complexity, but you’re unable to conclusively prove it. Can you use experiments to support your claim? As explained, experiments (and algorithm engineering) deal mainly with constant factors, but there <i>is a way</i>. The main problem is that your hypothesis isn’t really testable through experiments. If you claim that the algorithm is, say, <i>O</i>(<i>n</i><sup>2</sup>), no data can confirm or refute this. However, if you make your hypothesis <i>more specific</i>, it becomes testable. You might, for example, based on some preliminary results, believe that the running time will never exceed 0.24<i>n</i><sup>2</sup> + 0.1<i>n</i> + 0.03 seconds in your setup. Perhaps more realistically, your hypothesis might involve the number of times a given operation is performed, which you can test with the trace module. This <i>is</i> a testable—or, more specifically, <i>refutable</i>—hypothesis. If you run lots of experiments and you aren’t able to find any counter-examples, that supports your hypothesis to some extent. The neat thing is that, indirectly, you’re also supporting the claim that the algorithm is <i>O</i>(<i>n</i><sup>2</sup>).</p>
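<p class="indent">Here is a minimal sketch of the idea, using a simple selection sort as a stand-in for the algorithm under test and counting its comparisons directly rather than with <span class="FontName1">trace</span>. If the quadratic hypothesis holds, the ratio of the count to <i>n</i><sup>2</sup> should level off near a constant as <i>n</i> grows (here it approaches 0.5):</p>
<pre><span class="FontName1">import random</span><br><span class="FontName1">def comparisons(seq):  # count the comparisons made by a selection sort</span><br> <span class="FontName1">seq = list(seq)</span><br> <span class="FontName1">count = 0</span><br> <span class="FontName1">for i in range(len(seq)):</span><br>  <span class="FontName1">m = i</span><br>  <span class="FontName1">for j in range(i+1, len(seq)):</span><br>   <span class="FontName1">count += 1  # one comparison of seq[m] and seq[j]</span><br>   <span class="FontName1">if seq[m] > seq[j]: m = j</span><br>  <span class="FontName1">seq[i], seq[m] = seq[m], seq[i]</span><br> <span class="FontName1">return count</span><br><span class="FontName1">for n in (100, 200, 400, 800):</span><br> <span class="FontName1">data = [random.random() for _ in range(n)]</span><br> <span class="FontName1">print(n, comparisons(data) / n**2)  # should stabilize for quadratic growth</span></pre>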
1047<p id="Sec8" class="Heading1">Implementing Graphs and Trees</p>
1048<p class="noindent">The first example in <a href="9781484200568_Ch01.xhtml">Chapter 1</a>, where we wanted to navigate Sweden and China, was typical of problems that can be expressed in one of the most powerful frameworks in algorithmics—that of <i>graphs</i>. In many cases, if you can formulate what you’re working on as a graph problem, you’re at least halfway to a solution. And if your problem instances are in some form expressible as <i>trees</i>, you stand a good chance of having a really <i>efficient</i> solution.</p>
1049<p class="indent">Graphs <a id="cXXX.211"></a>can represent all kinds of structures and systems, from transportation networks to communication networks and from protein interactions in cell nuclei to human interactions online. You can increase their expressiveness by adding extra data such as <i>weights</i> or <i>distances</i>, making it possible to represent such diverse problems as playing chess or matching a set of people to as many jobs, with the best possible use of their abilities. <a id="cXXX.212"></a>Trees are just a special kind of graph, so most algorithms and representations for graphs will work for them as well. However, because of their special properties (they are connected and have no cycles), some specialized and quite simple versions of both the representations and algorithms are possible. There are plenty of practical structures, such as XML documents or directory hierarchies, that can be represented as trees,<a id="_Fn13" href="#Fn13"><sup>13</sup></a> so this “special case” is actually quite general.</p>
1050<p class="indent">If your memory of graph nomenclature is a bit rusty or if this is all new to you, take a look at <a href="9781484200568_AppC.xhtml">Appendix C</a>. Here are the highlights in a <a id="cXXX.213"></a>nutshell<a id="cXXX.214"></a>:</p>
1051<ul class="bulleted">
1052<li>A graph <i>G</i> = (<i>V</i>, <i>E</i>) consists of a set of <i>nodes</i>, <i>V</i>, and <i>edges</i> between them, <i>E</i>. If the edges have a direction, we say the graph is <i>directed</i>.</li>
1053<li>Nodes with an edge between them are <i>adjacent</i>. The edge is then <i>incident</i> to both. The nodes that are adjacent to <i>v</i> are the <i>neighbors</i> of <i>v</i>. The <i>degree</i> of a node is the number of edges incident to it.</li>
1054<li>A <i>subgraph</i> of <i>G</i> = (<i>V</i>, <i>E</i>) consists of a subset of <i>V</i> and a subset of <i>E</i>. A <i>path</i> in <i>G</i> is a subgraph where the edges connect the nodes in a sequence, without revisiting any node. A <i>cycle</i> is like a path, except that the last edge links the last node to the first.</li>
1055<li>If we associate a <i>weight</i> with each edge in <i>G</i>, we say that <i>G</i> is a <i>weighted graph</i>. The <i>length</i> of a path or cycle is the sum of its edge weights, or, for unweighted graphs, simply the number of edges.</li>
1056<li>A <i>forest</i> is a cycle-free graph, and a connected forest is a <i>tree</i>. In other words, a forest consists of one or more trees.</li>
1057</ul>
1058<p class="indent">While phrasing your problem in graph terminology gets you far, if you want to implement a solution, you need to represent the graphs as data structures somehow. This, in fact, applies even if you just want to design an algorithm, because you must know what the running times of different operations on your graph representation will be. In some cases, the graph will already be present in your code or data, and no separate structure will be needed. For example, if you’re writing a web crawler, automatically collecting information about web sites by following links, the graph is the Web itself. If you have a <span class="FontName1">Person</span> class with a <span class="FontName1">friends</span> attribute, <a id="cXXX.215"></a>which is a list of other <span class="FontName1">Person</span> instances, then your object model itself is a graph on which you can run various graph algorithms. There are, however, specialized ways of implementing graphs.<a id="cXXX.216"></a></p>
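<p class="indent">A minimal sketch of such an object model might look like the following; the class and attribute names are just illustrative, with <span class="FontName1">friends</span> playing the role of the neighborhood function:</p>
<pre><span class="FontName1">class Person:</span><br> <span class="FontName1">def __init__(self, name):</span><br>  <span class="FontName1">self.name = name</span><br>  <span class="FontName1">self.friends = []  # the neighbors of this node</span><br><span class="FontName1">alice, bob = Person("Alice"), Person("Bob")</span><br><span class="FontName1">alice.friends.append(bob)  # an edge from alice to bob</span></pre>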
1059<p class="indent">In abstract terms, what we are generally looking for is a way of implementing the neighborhood function, <i>N</i>(<i>v</i>), so that <span class="FontName1">N[v]</span> is some form of container (or, in some cases, merely an iterable object) of the neighbors of <span class="FontName1">v</span>. Like so many other books on the subject, I will focus on the two most well-known representations, <i>adjacency lists</i> and <i>adjacency matrices</i>, because they are highly useful and general. For a discussion of alternatives, see the section “A Multitude of Representations” later in this chapter.</p>
1060<div class="Singlethin">
1061<p class="Heading4a"><b>BLACK BOX: DICT AND SET</b></p>
1062<p class="box-left">One technique covered <a id="cXXX.217"></a>in detail in most algorithm books, and usually taken for granted by Python programmers, is <i>hashing</i>. Hashing involves computing some often seemingly random integer value from an arbitrary object. This value can then be used, for example, as an index into an array (subject to some adjustments to make it fit the index range).</p>
1063<p class="box-left1">The standard hashing mechanism in Python is available through the <span class="FontName1">hash</span> function, which calls the <span class="FontName1">__hash__</span> method of an object:</p>
1064<p class="pre"><span class="FontName1">>>> hash(42)</span><br><span class="FontName1">42</span><br><span class="FontName1">>>> hash("Hello, world!")</span><br><span class="FontName1">-1886531940</span></p>
1065<p class="box-left1">This is the mechanism that is used in dictionaries, which are implemented using so-called hash tables. Sets are implemented using the same mechanism. The important thing is that the hash value can be constructed in essentially constant time. It’s constant with respect to the hash table size but linear as a function of the size of the object being hashed. If the array that is used behind the scenes is large enough, accessing it using a hash value is also Θ(1) in the average case. The worst-case behavior is Θ(<i>n</i>), unless we know the values beforehand and can write a custom hash function. Still, hashing is extremely efficient in practice.</p>
1066<p class="box-left1">What this means to us is that accessing elements of a <span class="FontName1">dict</span> or <span class="FontName1">set</span> can be assumed to take constant expected time, which makes them highly useful building blocks for more complex structures and algorithms.</p>
1067<p class="box-left1">Note that the <span class="FontName1">hash</span> function is specifically meant for use in hash tables. For other uses of hashing, such as in cryptography, there is the standard library module <span class="FontName1">hashlib</span>.</p></div>
1068<p id="Sec9" class="Heading2">Adjacency Lists and the Like</p>
1069<p class="noindent">One of the most intuitive ways of implementing graphs is using adjacency lists. Basically, for each node, we can access a list (or set or other container or iterable) of its neighbors. Let’s take the simplest way of implementing this, assuming we have <i>n</i> nodes, numbered 0 . . . <i>n</i>–1.<a id="cXXX.218"></a></p>
1070<div class="notepara">
1071<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> Nodes can be any objects, of course, or have arbitrary labels or names. Using integers in the range 0 . . . <i>n</i>–1 can make many implementations easier, though, because the node numbers can easily be used as indices.</p></div>
1072<p class="indent">Each adjacency <a id="cXXX.219"></a>(or neighbor) list is then just a list of such numbers, and we can place the lists themselves into a main list of size <i>n</i>, indexable by the node numbers. Usually, the ordering of these lists is arbitrary, so we’re really talking about using lists to implement adjacency <i>sets</i>. The term <i>list</i> in this context is primarily historical. In Python we’re lucky enough to have a separate set type, which in many cases is a more natural choice.</p>
1073<p class="indent">For an example that will be used to illustrate the various graph representations, see <a href="#Fig3" id="_Fig3">Figure 2-3</a>.</p>
1074<div class="Figure" id="Fig3">
1075<p class="img-c"><img src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_Fig02-03.jpg" alt="9781484200568_Fig02-03.jpg" width="327" height="183" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/9781484200568_Fig02-03.jpg"></p>
1076<p class="FigCapt"><span class="CaptNr"><a href="#_Fig3">Figure 2-3</a>.</span> A sample graph used to illustrate various graph representations</p></div>
1077<div class="notepara">
1078<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip</b> For tools to help you visualize your own graphs, see the sidebar “Graph Libraries” later in this chapter.</p></div>
1079<p class="indent">To begin with, assume that we have numbered the nodes, that is, <i>a</i> = 0, <i>b</i> = 1, and so forth. The graph can then be represented in a straightforward manner, as shown in <a id="_list1" href="#list1">Listing 2-1</a>. Just as a convenience, I have assigned the node numbers to variables with the same names as the node labels in the figure. You can, of course, just work with the numbers directly. Which adjacency set belongs to which node is indicated by the comments. If you want, take a minute to confirm that the representation does, indeed, correspond to the figure.</p>
1080<p class="listing"><a href="#_list1" id="list1"><b><i>Listing 2-1</i></b></a>. A Straightforward Adjacency Set Representation</p>
1081<pre><span class="FontName1">a, b, c, d, e, f, g, h = range(8)</span><br><span class="FontName1">N = [</span><br> <span class="FontName1">{b, c, d, e, f}, # a</span><br> <span class="FontName1">{c, e}, # b</span><br> <span class="FontName1">{d}, # c</span><br> <span class="FontName1">{e}, # d</span><br> <span class="FontName1">{f}, # e</span><br> <span class="FontName1">{c, g, h}, # f</span><br> <span class="FontName1">{f, h}, # g</span><br> <span class="FontName1">{f, g} # h</span><br><span class="FontName1">]</span></pre>
1082<div class="notepara">
1083<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Note</b> In Python versions prior to 2.7 (or 3.0), you would write set literals as <span class="FontName1">set([1, 2, 3])</span> rather than <span class="FontName1">{1, 2, 3}</span>. Note that an empty set is still written <span class="FontName1">set()</span> because <span class="FontName1">{}</span> is an empty dict.</p></div>
1084<p class="indent">The name <span class="FontName1">N</span> has been used here to correspond with the <i>N</i> function discussed earlier. In graph theory, <i>N</i>(<i>v</i>) represents the set of <i>v</i>’s neighbors. Similarly, in our code, <span class="FontName1">N[v]</span> is now a set of <span class="FontName1">v</span>’s neighbors. Assuming you have defined <span class="FontName1">N</span> as earlier in an interactive interpreter, you can now play around with the graph:</p>
1085<pre><span class="FontName1">>>> b in N[a] # Neighborhood membership</span><br><span class="FontName1">True</span><br><span class="FontName1">>>> len(N[f]) # Degree</span><br><span class="FontName1">3</span></pre>
1086<div class="notepara">
1087<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip</b> If you have some code in a source file, such as the graph definition in <a href="#list1">Listing 2-1</a>, and you want to explore it interactively as in the previous example, you can run <span class="FontName1">python</span> with the <span class="FontName1">-i</span> switch, like this:</p>
1088<pre><span class="FontName1">python -i listing_2_1.py</span></pre>
1089<p class="paraaftertitle1">This will run the source file and start an interactive interpreter that continues where the source file left off, with any global definitions available for your experimentation.</p></div>
1090<p class="indent">Another possible representation, which can have a bit less overhead in some cases, is to replace the adjacency sets with actual adjacency <i>lists</i>. For an example of this, see <a id="_list2" href="#list2">Listing 2-2</a>. The same operations are now available, except that membership checking is now Θ(<i>n</i>). This is a significant slowdown, but that is a problem only if you actually need it, of course. (If all your algorithm does is iterate over neighbors, using set objects would not only be pointless; the overhead would actually be detrimental to the constant factors of your implementation.)</p>
1091<p class="listing"><a href="#_list2" id="list2"><b><i>Listing 2-2</i></b></a>. Adjacency Lists</p>
1092<pre><span class="FontName1">a, b, c, d, e, f, g, h = range(8)</span><br><span class="FontName1">N = [</span><br> <span class="FontName1">[b, c, d, e, f], # a</span><br> <span class="FontName1">[c, e], # b</span><br> <span class="FontName1">[d], # c</span><br> <span class="FontName1">[e], # d</span><br> <span class="FontName1">[f], # e</span><br> <span class="FontName1">[c, g, h], # f</span><br> <span class="FontName1">[f, h], # g</span><br> <span class="FontName1">[f, g] # h</span><br><span class="FontName1">]</span></pre>
1093<p class="indent">It might be argued that this representation is really a collection of <i>adjacency arrays</i>, rather than adjacency <i>lists</i> in the classical sense, because Python’s list type is really a dynamic array behind the covers; see the earlier “Black Box” sidebar about <span class="FontName1">list</span>. If you wanted, you could implement a linked list type and use that, rather than a Python list. That would allow you asymptotically cheaper inserts at arbitrary points in each list, but this is an operation you probably will not need, because you can just as easily append new neighbors at the end. The advantage of using <span class="FontName1">list</span> is that it is a well-tuned, fast data structure, as opposed to any list structure you could implement in pure Python.</p>
1094<p class="indent">A recurring theme when working with graphs is that the best representation depends on what you need to <i>do</i> with your graph. For example, using adjacency lists (or arrays) keeps the overhead low and lets you efficiently iterate over <i>N</i>(<i>v</i>) for any node <i>v</i>. However, checking whether <i>u</i> and <i>v</i> are neighbors is linear in the minimum of their degrees, which can be problematic if the graph is <i>dense</i>, that is, if it has many edges. In these cases, adjacency sets may be the way to go.</p>
1095<div class="notepara">
1096<p class="paraaftertitle1"><img src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg" alt="Image" width="15" height="15" data-mfp-src="/library/view/python-algorithms-mastering/9781484200551/images/sq.jpg"> <b>Tip</b> We’ve also seen that deleting objects from the middle of a Python <span class="FontName1">list</span> is costly. Deleting from the <i>end</i> of a <span class="FontName1">list</span> takes constant time, though. If you don’t care about the order of the neighbors, you can delete arbitrary neighbors in constant time by overwriting them with the one that is currently last in the adjacency list, before calling the <span class="FontName1">pop</span> method.</p></div>
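<p class="indent">A minimal sketch of this swap-and-pop deletion might look as follows; the helper name is made up, and it assumes you already know the position of the neighbor you want to remove:</p>
<pre><span class="FontName1">def fast_del(adj, i):</span><br> <span class="FontName1"># Remove the neighbor at position i from the adjacency list adj in</span><br> <span class="FontName1"># constant time, at the cost of the ordering of the remaining items</span><br> <span class="FontName1">adj[i] = adj[-1]  # overwrite it with the neighbor that is currently last</span><br> <span class="FontName1">adj.pop()  # remove the now-duplicated last element</span></pre>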
1097<p class="indent">A slight variation on this would be to represent the neighbor sets as <i>sorted lists</i>. If you aren’t modifying the lists much, you can keep them sorted and use bisection (see the “Black Box” sidebar on <span class="FontName1">bisect</span> in <a href="9781484200568_Ch06.xhtml">Chapter 6</a>) to check for membership, which might lead to slightly less overhead in terms of memory use and iteration time but would lead to a membership check complexity of Θ(lg <i>k</i>), where <i>k</i> is the number of neighbors for the given node. (This is still very low. In practice, though, using the built-in <span class="FontName1">set</span> type is a lot less hassle.)</p>
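<p class="indent">A sketch of such a bisection-based membership check, with a made-up helper name, could look like this:</p>
<pre><span class="FontName1">from bisect import bisect_left</span><br><span class="FontName1">def is_neighbor(sorted_adj, v):</span><br> <span class="FontName1"># Membership check in a sorted adjacency list: Θ(lg k) for k neighbors</span><br> <span class="FontName1">i = bisect_left(sorted_adj, v)</span><br> <span class="FontName1">return i != len(sorted_adj) and sorted_adj[i] == v</span></pre>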
1098<p class="indent">Yet <i>another</i> minor tweak on this idea is to use dicts instead of sets or lists. The neighbors would then be keys in this dict, and you’d be free to associate each neighbor (or out-edge) with some extra value, such as an edge weight. How this might look is shown in <a id="_list3" href="#list3">Listing 2-3</a>, with arbitrary edge weights added.</p>
1099<p class="listing"><a href="#_list3" id="list3"><b><i>Listing 2-3</i></b></a>. Adjacency dicts with Edge Weights</p>
1100<pre><span class="FontName1">a, b, c, d, e, f, g, h = range(8)</span><br><span class="FontName1">N = [</span><br> <span class="FontName1">{b:2, c:1, d:3, e:9, f:4}, # a</span><br> <span class="FontName1">{c:4, e:3}, # b</span><br> <span class="FontName1">{d:8}, # c</span><br> <span class="FontName1">{e:7}, # d</span><br> <span class="FontName1">{f:5}, # e</span><br> <span class="FontName1">{c:2, g:2, h:2}, # f</span><br> <span class="FontName1">{f:1, h:6}, # g</span><br> <span class="FontName1">{f:9, g:8} # h</span><br><span class="FontName1">]</span></pre>
1101<p class="indent">The adjacency dict version can be used just like the others, with the additional edge weight functionality:</p>
1102<pre><span class="FontName1">>>> b in N[a] # Neighborhood membership</span><br><span class="FontName1">True</span><br><span class="FontName1">>>> len(N[f]) # Degree</span><br><span class="FontName1">3</span><br><span class="FontName1">>>> N[a][b] # Edge weight for (a, b)</span><br><span class="FontName1">2</span></pre>
1103<p class="indent">If you want, you can use adjacency dicts even if you <i>don’t</i> have any useful edge weights or the like, of course (using, perhaps, <span class="FontName1">None</span>, or some other placeholder instead). This would give you the main advantages of the adjacency sets, but it would also work with very, very old versions of Python, which don’t have the set type.<a id="_Fn14" href="#Fn14"><sup>14</sup></a></p>
1104<p class="indent">Until now, the main collection containing our adjacency structures—be they lists, sets, or dicts—has been a list, indexed by the node number. A more flexible approach, allowing us to use arbitrary, hashable, node labels, is to use a dict as this main structure.<a id="_Fn15" href="#Fn15"><sup>15</sup></a> <a id="_list4" href="#list4">Listing 2-4</a> shows what a dict containing adjacency sets would look like. Note that nodes are now represented by characters.</p>
1105<p class="listing"><a href="#_list4" id="list4"><b><i>Listing 2-4</i></b></a>. A dict with Adjacency Sets</p>
1106<pre><span class="FontName1">N = {</span><br> <span class="FontName1">'a': set('bcdef'),</span><br> <span class="FontName1">'b': set('ce'),</span><br> <span class="FontName1">'c': set('d'),</span><br> <span class="FontName1">'d': set('e'),</span><br> <span class="FontName1">'e': set('f'),</span><br> <span class="FontName1">'f': set('cgh'),</span><br> <span class="FontName1">'g': set('fh'),</span><br> <span class="FontName1">'h': set('fg')</span><br><span class="FontName1">}</span></pre>
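<p class="indent">Membership and degree queries work just as before, only with the character labels as keys:</p>
<pre><span class="FontName1">>>> 'b' in N['a']  # Neighborhood membership</span><br><span class="FontName1">True</span><br><span class="FontName1">>>> len(N['f'])  # Degree</span><br><span class="FontName1">3</span></pre>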
1107<div class="notepara">