Weekly Summary
Week 1
Lecture 1
date:
Lecture 2
date:
Week 2
Lecture 1
date:
Lecture 2
date: 23/04/24
- Fachschaft (student council) announcements
- Compiler Explorer
- small assembly example; modern compilers are very clever at optimizing
- assertions, pre- and postconditions, invariants
- fast power algorithm and its correctness proof
- rules for calculating the time complexity of sequential programs
- a common recurrence relation pattern that comes up often
- complexity of this recurrence relation and its proof
- Intro to Graphs, basic definitions
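The fast power algorithm with its invariant-based correctness argument can be sketched as below. The names and the exact invariant wording are illustrative assumptions, not taken from the lecture slides:

```cpp
#include <cassert>
#include <cstdint>

// Fast (binary) exponentiation: computes base^exp in O(log exp) multiplications.
// Loop invariant (the core of the correctness proof): result * b^e == base^exp.
uint64_t fast_pow(uint64_t base, uint64_t exp) {
    uint64_t result = 1;
    uint64_t b = base, e = exp;
    while (e > 0) {
        if (e % 2 == 1) {   // odd exponent: peel off one factor into result
            result *= b;
        }
        b *= b;             // square the base
        e /= 2;             // halve the exponent
    }
    return result;          // invariant with e == 0 gives result == base^exp
}
```

Since the exponent halves each iteration, the loop body runs O(log exp) times, matching the recurrence pattern discussed above.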
Week 3
Lecture 1
date:
Lecture 2
date: 30/04/24
- singly linked lists
- splicing
- arrays
- assembly realization of array access via Compiler Explorer (link), with C++ and Rust; differences between C++ and Rust
- memory allocation in C++ with alloc() and free()
- time complexity of array memory allocation in C++ with alloc(): an experiment
- pushBack() and popBack() for arrays: their realization and complexity
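A minimal dynamic-array sketch of pushBack()/popBack() on top of raw allocation. The struct name and growth-by-doubling policy are illustrative assumptions; doubling is what makes pushBack() amortized O(1), which the next lecture analyzes:

```cpp
#include <cstdlib>

// Dynamic array sketch: pushBack() grows the allocation by doubling,
// popBack() only shrinks the logical size. Not production code.
struct DynArray {
    int* data = nullptr;
    size_t size = 0;      // number of stored elements
    size_t capacity = 0;  // number of allocated slots

    void pushBack(int x) {
        if (size == capacity) {            // full: allocate a larger block
            size_t newCap = capacity == 0 ? 1 : 2 * capacity;
            int* bigger = static_cast<int*>(std::malloc(newCap * sizeof(int)));
            for (size_t i = 0; i < size; ++i) bigger[i] = data[i];
            std::free(data);
            data = bigger;
            capacity = newCap;
        }
        data[size++] = x;                  // O(1) when there is room
    }

    void popBack() { --size; }             // O(1): just shrink the logical size

    ~DynArray() { std::free(data); }
};
```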
Week 4
Lecture 1
date: 06/05/24
- introduction to amortized complexity
- amortized complexity analysis of pushBack() and popBack() operations: both are amortized O(1)
- stacks and queues - introductory discussions
- double-ended queues
- ring buffer implementation of a queue
- introduction to hashing
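The ring-buffer queue mentioned above can be sketched as follows. The fixed capacity, bounds-checking via return values, and all names are illustrative assumptions; the key idea is that both indices wrap around modulo the buffer size, making enqueue and dequeue O(1):

```cpp
#include <cstddef>

// Ring-buffer (circular) implementation of a bounded FIFO queue.
// head is the index of the oldest element; the next free slot is
// (head + count) % N, so both ends wrap around the fixed array.
template <typename T, size_t N>
struct RingQueue {
    T buf[N];
    size_t head = 0;   // index of the oldest element
    size_t count = 0;  // number of stored elements

    bool push(const T& x) {              // enqueue at the back, O(1)
        if (count == N) return false;    // full
        buf[(head + count) % N] = x;
        ++count;
        return true;
    }

    bool pop(T& out) {                   // dequeue from the front, O(1)
        if (count == 0) return false;    // empty
        out = buf[head];
        head = (head + 1) % N;
        --count;
        return true;
    }
};
```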
Lecture 2
date: 07/05/24
- division by a constant is optimized by the compiler (this cannot be done for division by a variable)
- book recommendation: Hacker’s Delight
- Hashing:
- intro & some applications
- some definitions (symbols follow the usual textbook notation):
  - Element: the set of elements of a certain type that we want to store (or have stored) in a table and access via their keys
  - m: number of memory slots
  - n: total number of elements stored in the table
  - key(): function that maps elements to their key values
  - Key: the set of key values (the range of the key function)
  - h: the hash function that maps key values to memory slots
- perfect hashing: if there are more memory slots than possible key values, h can map each key to its own slot. Over-optimistic: we don't have that much memory, since usually |Key| is much greater than m (the number of possible key values is much greater than the number of slots).
- imperfect hashing: there exist keys k != k' with h(k) = h(k'). This is called a collision.
- closed hashing: an example of imperfect hashing where the table slots are singly linked lists, supporting the operations:
  - insert(e): insert an element e
  - remove(k): remove the element whose key is k, returning it
  - find(k): find the element with key k, return it if found
- time complexities:
  - insert(e): O(1)
  - remove(k): O(length of the list in slot h(k))
  - find(k): O(length of the list in slot h(k))
- worst case: a bad hash function that maps all keys to the same slot gives O(n)!
- stochastic analysis of random hash functions & proofs (with an introductory discussion of the birthday paradox):
  - theorem: a random hash function is likely to be perfect if m is sufficiently large relative to n^2
  - theorem: random hash functions lead to lists of expected length O(1 + n/m)
Week 5
Lecture 1
date: 13/05/24
- universal hash functions: definition; Theorem 1 also holds for universal hash functions
- a family of universal hash functions, proof of Theorem 1 for these functions
- another example of a universal hash function family, based on bit-matrix multiplication
- example: tabulation hashing
- open hashing: a different way to resolve collisions, without linked lists; instead we look for the next free table slot
  - advantage: contiguous address values -> fewer cache misses, faster
  - bounded linear probing:
    - insert & find algorithms, invariants
    - remove is more difficult -> its implementation
- intro to sorting
- intro to insertion sort.
Lecture 2
date:
Week 6
Lecture 1
date: 20/05/24
- online: Heap data structure
Lecture 2
date: 21/05/24
- programming example with Compiler Explorer: iterative and recursive Fibonacci implementations, gcc optimization (short detour: the call stack); the compiler can automatically optimize tail recursion
- sorting algorithms:
  - insertion sort: pseudo-code implementation
  - merge sort:
    - divide and conquer
    - merging in O(n)
    - time complexity of merge sort: assuming (without loss of generality) that n is a power of two leads to the master theorem (wlog: we can pad a list up to the next power of two)
  - O(n log n) is the best we can do for comparison-based sorting:
    - definition of comparison-based sorting, fundamental operations
    - any such algorithm must at least be able to differentiate between all n! permutations of the list
    - lower-bound analysis via a comparison tree; estimating log(n!) via partial integration
- quicksort: divide and conquer, but "reversed"
  - complexity of quicksort (worst case is O(n^2))
  - best case: the pivot is always the median
  - problem: finding the median (a good pivot) is not easy
  - complexity analysis
Week 7
Lecture 1
date: 27/05/24
Lecture 2
date: 28/05/24
- example: the C++ compiler optimizes the calculation of the middle value by generating assembly for l + (r - l) / 2 instead of (l + r) / 2, in order to reduce the chance of overflow
- quicksort with tail recursion to reduce stack depth (revision of the previous lecture)
- quickselect algorithm derived from quicksort: can be used to determine the median of a sequence in expected O(n)
- sorting faster than O(n log n) (without comparisons):
  - KSort (bucket sort), array implementation of bucket sort (in-place)
  - radix sort: least-significant-digit (LSD) radix sorting
- binary search
- intermezzo: insertion sort in O(n log n): library sort (link)
Week 8
Lecture 1
date:
Lecture 2
date: 04/06/24
- C++ unique_ptr implementation and move semantics
- (a, b) Trees - inserting
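A minimal unique-pointer sketch in the spirit of the lecture: sole ownership, copying forbidden, ownership transferred via move semantics. This is an illustrative reimplementation under those assumptions, not the lecture's exact code or std::unique_ptr itself:

```cpp
#include <utility>

// Minimal unique pointer: exactly one owner at a time. Copy operations are
// deleted; the move operations transfer ownership and leave the source empty.
template <typename T>
class UniquePtr {
    T* p = nullptr;
public:
    explicit UniquePtr(T* raw = nullptr) : p(raw) {}
    ~UniquePtr() { delete p; }

    UniquePtr(const UniquePtr&) = delete;             // no copies: one owner
    UniquePtr& operator=(const UniquePtr&) = delete;

    UniquePtr(UniquePtr&& other) noexcept : p(other.p) { other.p = nullptr; }
    UniquePtr& operator=(UniquePtr&& other) noexcept {
        if (this != &other) { delete p; p = other.p; other.p = nullptr; }
        return *this;                                  // steal the pointer
    }

    T* get() const { return p; }
    T& operator*() const { return *p; }
};
```

After `UniquePtr<int> b(std::move(a));`, `a` no longer owns anything, which is exactly the transfer-of-ownership behavior move semantics provide.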