· 6 years ago · Jun 08, 2019, 10:42 AM
Lecture 1
==========
3Transaction:
4 - a unit of execution that contains one or more read/write operations on a database
5 - the DBMS's abstract "view" of a user program interacting with a DB, i.e. a sequence of read/write operations
6
7Transaction properties (ACID): properties a DBMS must ensure for a transaction in order to maintain data in the presence of concurrent execution / system failures
8 - atomicity:
9 - a transaction is atomic
10 - either all the operations in the transaction are executed or none are
11
12 - consistency:
 - a transaction must preserve the consistency of the DB after execution
14 - consistent: transactions do not violate the integrity constraints specified on the DB (e.g. restrictions declared via CREATE TABLE statements)
15
16 - isolation:
17 - a transaction is protected from the effects of concurrently scheduling other transactions
18 - i.e. users must not deal with arbitrary effects produced by other concurrently running transactions (a transaction is seen as if it were in single-user mode)
19
20 - durability:
 - committed changes are persisted to the DB
 - the effects of a committed transaction should persist even if a system crash occurs before all the changes were written to disk
23
24Write-Ahead Log property:
25 - changes are written to the log (on the disk) before being reflected in the DB
26 - ensures atomicity and durability
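The rule can be sketched in a few lines of Python (a toy model: the names `log` and `db` and the in-memory structures are stand-ins for the real on-disk log and database):

```python
# Toy illustration of the Write-Ahead Log rule: a change must reach the
# log before it reaches the DB itself.

log = []          # stands in for the log file on disk
db = {"A": 100}   # stands in for the database on disk

def write(obj, new_value):
    old_value = db[obj]
    # 1. append (and conceptually flush) the log record first,
    #    recording both the old value (for undo) and the new (for redo)
    log.append({"obj": obj, "old": old_value, "new": new_value})
    # 2. only then apply the change to the DB
    db[obj] = new_value

write("A", 150)
```

After a crash, the log record alone is enough to undo (`old`) or redo (`new`) the change, which is exactly what atomicity and durability require.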
27
28Interleaved Execution:
29 - 2 transactions only read a data object => no conflict, order of execution not important
30 - 2 transactions read/write different data objects => no conflict, order of execution not important
31 - 1 transaction writes and 1 transaction reads/writes on the same data object => conflict, order of execution is important
32
33 Conflicts:
34 - WR conflict: T2 is reading a data object previously written by T1 (dirty reads)
35 - RW conflict: T2 is writing a data object previously read by T1 (unrepeatable reads)
   - WW conflict: T2 is writing a data object previously written by T1 (overwriting uncommitted data)
37
Lecture 2
==========
40Schedule: a list of operations (read/write/commit/abort) of n transactions with the property that the order of the operations in each transaction is preserved
41
42Serial schedule: a schedule in which the actions of different transactions are not interleaved
43Non-serial schedule: a schedule in which the actions of different transactions are interleaved
44
45Serializability:
46 - C - set of transactions
47 - Sch(C) - set of schedules for C
48
49 - Serializable schedule:
50 - a schedule S from Sch(C) is serializable if the effect of S on a consistent DB instance is identical to the effect of some serial schedule S0 from Sch(C)
51
52 - Objective:
53 - finding interleaved schedules that permit the transactions to execute concurrently without interfering with each other (such schedules result in a correct DB state)
54
55 - The order of read and write operations is important
56
57 - Actions that cannot be swapped in a schedule:
58 - actions belonging to the same transaction
59 - actions in different transactions operating on the same object if at least one of them is a write
60
61 2 schedules are conflict equivalent if conflict(S1) = conflict(S2), i.e.:
62 - S1 and S2 contain the same operations of the same transactions
   - every pair of conflicting operations is ordered the same way in S1 and S2
64
65 Conflict serializability:
66 - C - set of transactions
67 - Sch(C) - set of schedules for C
68 - Let S be a schedule in Sch(C)
69 - S is conflict serializable if there exists a serial schedule S0 from Sch(C) such that conflict(S) = conflict(S0), i.e. S is conflict equivalent to some serial schedule.
70
71 A schedule S from Sch(C) is conflict serializable if and only if its precedence graph is acyclic.
72
73 Precedence graph:
74 1. Create a node Ti for every commited transaction in the schedule
75 2. Arc (Ti, Tj) if Tj executes Read(A) after a Write(A) by Ti
76 3. Arc(Ti, Tj) if Tj executes Write(A) after a Read(A) by Ti
77 4. Arc(Ti, Tj) if Tj executes Write(A) after a Write(A) by Ti
78 S is conflict serializable if and only if the resulting graph is acyclic
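The graph construction and the acyclicity test can be sketched in Python (an assumption of this sketch: a schedule is a list of (transaction, operation, object) triples, with operations "R" and "W"):

```python
# Sketch of the precedence-graph test for conflict serializability.

def precedence_graph(schedule):
    edges = set()
    for i, (ti, op1, obj1) in enumerate(schedule):
        for tj, op2, obj2 in schedule[i + 1:]:
            # conflicting pair: same object, different transactions,
            # at least one of the two operations is a write
            if obj1 == obj2 and ti != tj and "W" in (op1, op2):
                edges.add((ti, tj))   # arc from the earlier to the later txn
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, on_stack = set(), set()
    def dfs(n):                        # depth-first search for a back edge
        visited.add(n); on_stack.add(n)
        for m in graph.get(n, ()):
            if m in on_stack or (m not in visited and dfs(m)):
                return True
        on_stack.discard(n)
        return False
    return any(dfs(n) for n in graph if n not in visited)

# T1 and T2 read and write A in an interleaved, non-serializable way:
s = [("T1", "R", "A"), ("T2", "R", "A"), ("T1", "W", "A"), ("T2", "W", "A")]
conflict_serializable = not has_cycle(precedence_graph(s))   # False here
```

Here the graph contains both (T1, T2) and (T2, T1), i.e. a cycle, so the schedule is not conflict serializable.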
79
80 - Every conflict serializable schedule is serializable (when items can only be updated, no insert/delete)
81 - There are serializable schedules that are not conflict serializable
82 - Conflict serializability is a sufficient condition for serializability but it is not a necessary one
83
84 View serializability:
85 - sufficient condition for serializability
86 - based on view-equivalence
87
88 S1 and S2 are view equivalent:
   - if Ti reads the initial value of V in S1, then Ti also reads the initial value of V in S2
90 - if Ti reads the value of V written by Tj in S1, then Ti also reads the value of V written by Tj in S2
91 - if Ti writes the final value of V in S1, then Ti also writes the final value of V in S2
92
93 A schedule S is view serializable if there exists a serial schedule S0 from Sch(C) such that S is view equivalent to S0
94
95 Recoverable schedules:
96 - a schedule in which a transaction T commits only after all transactions whose changes T read commit
97
 A schedule in which a transaction T reads only changes of committed transactions is said to avoid cascading aborts
99 Avoiding cascading aborts => recoverable schedules
100
101 Lock-Based concurrency control:
102 - technique used to guarantee serializable and recoverable schedules
103 - lock:
104 - tool used by the transaction manager to control concurrent access to data
105 - prevents a transaction from accessing a data object while another transaction accesses it
106
107 - transaction protocol:
108 - a set of rules enforced by the transaction manager and obeyed by all transactions
109 - example: simple protocol: before a transaction can read/write an object, it must acquire the appropriate lock on that object
   - locks in conjunction with transaction protocols allow interleaved executions
111
112 SLock (shared/read lock):
113 - only read an object, not write it
114 XLock (exclusive/write lock):
115 - read/write an object
116
 If a transaction holds an SLock on an object, other transactions can acquire an SLock on that object, but not an XLock.
118 Lock upgrade:
119 - An SLock granted to a transaction can be upgraded to an XLock
120
 Transactions issue lock requests to the lock manager
 Locks are held until explicitly released by transactions
123 Lock acquire/lock release requests are automatically inserted into transactions by the DBMS
124 Locking/unlocking - atomic operations
125
126 Lock table:
127 - structure used by the lock manager to keep track of granted locks/lock requests
128 Entry in the lock table (corresponding to one data object):
129 - number of transactions holding a lock on that object
130 - lock type (SLock/XLock)
131 - pointer to a queue of lock requests
132
133 Transactions table:
   - structure maintained by the DBMS
135 - one entry per transaction
136 - keeps a list of locks held by every transaction
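The lock-table entry described above can be sketched as a small Python class (the class and method names are invented for this example; lock upgrades and release are omitted):

```python
# Minimal lock-table sketch: one entry per object holding the lock type,
# the set of holders, and a queue of waiting requests.

class LockTable:
    def __init__(self):
        # object -> {"type": "S"/"X"/None, "holders": set, "queue": list}
        self.entries = {}

    def request(self, txn, obj, mode):
        e = self.entries.setdefault(
            obj, {"type": None, "holders": set(), "queue": []})
        if not e["holders"]:                     # no lock held: grant
            e["type"], e["holders"] = mode, {txn}
            return "granted"
        if mode == "S" and e["type"] == "S":     # shared locks are compatible
            e["holders"].add(txn)
            return "granted"
        e["queue"].append((txn, mode))           # conflict: wait in the queue
        return "waiting"

lt = LockTable()
r1 = lt.request("T1", "A", "S")   # granted
r2 = lt.request("T2", "A", "S")   # granted: SLocks are compatible
r3 = lt.request("T3", "A", "X")   # waiting: XLock conflicts with the SLocks
```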
137
 Strict Two-Phase Locking (Strict 2PL):
   - a transaction acquires locks as in 2PL, but all the locks it holds are released only when it completes execution (commit/abort)
   - Strict 2PL allows only serializable schedules
141
142 Deadlocks: lock-based concurrency control techniques can lead to deadlocks
143 - cycle of transactions waiting for one another to release a locked resource
144 - normal execution can no longer continue without an external intervention (i.e. until the deadlock is resolved)
145
 Deadlock management:
 - prevention:
   - assign transactions timestamp-based priorities
   - 2 policies (Ti requests a lock held by Tj):
     - Wait-Die:
       - if Ti.priority > Tj.priority, Ti can wait; otherwise, Ti is aborted
     - Wound-Wait:
       - if Ti.priority > Tj.priority, Tj is aborted; otherwise Ti can wait
   - if an aborted transaction is restarted, it is assigned its original timestamp (so it cannot starve forever)

 - detection:
   - allow deadlocks to occur and resolve them when they arise
   a. Waits-for graph:
     - 1 node per active transaction
     - arc from Ti to Tj if Ti is waiting for Tj to release a lock
     - cycle in the graph => deadlock
     - the DBMS periodically checks whether there are cycles in the waits-for graph
   b. Timeout mechanism:
     - very simple, practical method
     - if a transaction T has been waiting too long for a lock on an object, a deadlock is assumed to exist and T is terminated

 Choosing the deadlock victim:
   - number of locks held
   - number of objects modified by the transaction
   ...
   The policy should be "fair", i.e. if a transaction has been chosen as the victim several times, it should eventually be allowed to proceed
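Both prevention policies can be sketched as pure functions (an assumption of this sketch: a smaller timestamp means an older transaction, i.e. a higher priority):

```python
# Timestamp-based deadlock-prevention policies: the requester asks for a
# lock currently held by another transaction (the holder).

def wait_die(ts_requester, ts_holder):
    # only an older (higher-priority) requester is allowed to wait;
    # a younger requester "dies" (is aborted)
    return "wait" if ts_requester < ts_holder else "abort requester"

def wound_wait(ts_requester, ts_holder):
    # an older requester "wounds" (aborts) the younger holder;
    # a younger requester waits
    return "abort holder" if ts_requester < ts_holder else "wait"
```

In both policies the transaction that is aborted is always the younger one, which is why no cycle of mutual waiting can form.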
173
174 2PL:
175 - once a transaction releases a lock, it cannot request other locks
176
177 Phase 1 (growing phase):
178 - transaction acquires locks
179
180 Phase 2 (shrinking phase):
181 - transaction releases locks
182
 If all transactions in C obey 2PL, then any schedule S from Sch(C) that completes normally is conflict serializable (hence serializable)
184
 Strict schedule: if transaction Ti wrote object A, then transaction Tj can read/write A only after Ti's completion (commit/abort)
186 Strict schedules => Avoiding cascading aborts => recoverable schedules
187
188 Strict 2PL allows only strict schedules
189
Lecture 3
==========
192The Phantom problem:
193 - in the presence of insert operations, conflict serializability does not guarantee serializability
194
195Isolation levels:
196 - determine the degree to which a transaction is isolated from the changes made by other concurrently running transactions
197
198 4 isolation levels:
   - read uncommitted
   - read committed
201 - repeatable read
202 - serializable
203
 Read uncommitted:
205 - a transaction must acquire an exclusive lock prior to writing an object
206 - no locks are requested when reading objects
207 - XLocks are released at the end of the transaction
208 - Lowest degree of isolation
209
 Read committed:
211 - a transaction must acquire an XLock when writing an object
212 - a transaction must acquire a SLock when reading an object
   - SLocks are released immediately after the read
214 - XLock released at the end of the transaction
215
216 Repeatable read:
217 - XLock when writing
218 - SLock when reading
219 - SLocks and XLocks are released at the end of the transaction
220
221 Serializable:
222 - Acquire locks on objects before reading/writing
223 - A transaction can also acquire locks on sets of objects that must remain unmodified
224 - Locks are released at the end of the transaction
225 - Highest degree of isolation
226
Lecture 4
==========
229Recovery manager ensures 2 important properties of transactions:
 - atomicity (effects of uncommitted transactions are undone)
 - durability (effects of committed transactions survive system crashes)
232
233 file:///C:/Users/necso/Desktop/DBMS/Lectures&Seminaries/Week4/DBMSs_lecture4.pdf (slide 3 - picture)
234
235Transaction failures:
236 - system failure (hardware failures, bugs in the OS...; all running transactions terminate):
237 - internal memory lost
238 - external memory not affected
239
 - application error (division by 0, infinite loop, etc.)
241
242 - action by the Transaction Manager:
243 - a transaction is chosen as the deadlock victim and terminated
244 - the transaction might execute successfully if executed again
245
246 - self-abort:
247 - based on some computations, a transaction can decide to terminate and undo its actions
248 - special statements (e.g. ABORT, ROLLBACK)
249
250Normal execution:
251 - transaction:
252 - reads/writes DB objects
253 - read DB object O:
254 - bring O from disk into a frame in the Buffer Pool (BP)
255 - copy O's value into a program variable
256 - write DB object O:
257 - modify an in-memory copy of O (in the BP)
258 - write the in-memory copy to disk
259
260Buffer Management:
261 - higher level layer L in the DBMS:
262 - asks the BM for page P
263 - if P not in BP, BM brings it into a frame F in the BP
264 - when P is no longer needed:
265 - L releases P
266 - the BM is notified, so F can be reused
267 - if P has been modified, the BM is notified and propagates the changes in the BP to the disk
268
 (continuation on slide 10 - we covered these last semester)
270
Writing objects: transaction T changes object O (in the BP)
 - steal approach:
   - T's changes can be written to disk while T is in progress

 - no-steal approach:
   - T's changes cannot be written to disk before T commits

   Advantage:
     - changes of aborted transactions don't have to be undone
   Drawback:
     - assumption: all pages modified by active transactions must fit in the BP
281
282 - force approach:
283 - T's changes are immediately forced to disk when T commits
284
285 Advantage:
     - actions of committed transactions don't have to be redone
287 Drawback:
288 - can result in excessive I/O
289
290 - no-force approach:
291 - T's changes are not forced to disk when T commits
292
293 Steal, no-force approach: used by most systems
294
295ARIES:
296 - recovery algorithm
297 - steal, no-force
298 - three phases:
299 - analysis
300 - redo
301 - undo
302
303 - fundamental principle: Write-Ahead logging
304 - a change to an object O is first recorded in the log (e.g. in log record LR)
305 - LR must be written to disk before the change to O is written to disk
306
307 Analysis:
308 - determine active transactions at the time of the crash
   - determine dirty pages (pages in the BP whose changes have not yet been written to disk)
310
311 Redo:
   - reapply all changes (in normal order, starting from a certain record in the log), i.e. bring the DB to the state it was in when the crash occurred
313
314 Undo:
315 - undo changes of uncommited transactions (in reverse order)
316
317 System restarts after a crash:
318 Restart: 3 phases:
319 - Analysis:
320 - reconstructs state at the most recent checkpoint
321 - scans the log forward from the most recent checkpoint
322
323 - identifies:
324 - active transactions at the time of crash (to be undone)
325 - potentially dirty pages at the time of crash
326 - the starting point for Redo
327
328 - Redo:
329 - repeats history (reapplies changes to dirty pages)
     - all updates are reapplied (regardless of whether the transaction committed or not)
331 - starting point is determined during Analysis
332 - scans the log forward until the last record
333
334 - Undo:
335 - effects of active transactions at the time of crash are undone
336 - changes are undone in the opposite order
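A heavily simplified restart in the spirit of ARIES can be sketched as follows (assumptions of this sketch: no LSNs and no checkpoints; the log and the on-disk state are plain Python structures invented for the example):

```python
# Toy Analysis/Redo/Undo over an update log.

log = [
    {"txn": "T1", "op": "update", "obj": "A", "old": 1, "new": 2},
    {"txn": "T2", "op": "update", "obj": "B", "old": 7, "new": 8},
    {"txn": "T1", "op": "commit"},
    # crash happens here: T2 never committed
]

def restart(log):
    db = {"A": 1, "B": 7}          # state found on disk after the crash
    # Analysis: transactions with no commit record are the "losers"
    committed = {r["txn"] for r in log if r["op"] == "commit"}
    losers = {r["txn"] for r in log if r["op"] == "update"} - committed
    # Redo: repeat history - reapply every update in log order
    for r in log:
        if r["op"] == "update":
            db[r["obj"]] = r["new"]
    # Undo: roll back the losers' updates in reverse order
    for r in reversed(log):
        if r["op"] == "update" and r["txn"] in losers:
            db[r["obj"]] = r["old"]
    return db

state = restart(log)   # T1's change survives, T2's change is undone
```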
337
Security (End of lecture 5, lecture 6)
========
340- DB protection:
341 - security:
342 - protecting the data against unauthorized users
343 - users have the right to do what they are trying to do
344
345 - integrity:
346 - protecting the data against authorized users
347 - the operations users are trying to execute are correct
348
349Aspects related to security:
350 - legal, ethical:
351 - a person searches for a password or accidentally finds such a password and uses it
352 - physical controls:
353 - e.g. the computer room is locked/guarded
354 - software controls:
355 - files protection
356 - operational problems:
357 - if a password mechanism is used, how are the passwords kept secret and how often should a user change his/her password?
358
359Data security/integrity - similarities:
360 - system enforces certain constraints (that users cannot violate)
361 - constraints are:
362 - specified in a declarative language
363 - saved in the system catalog
364 - the system monitors user's actions to enforce the specified constraints
365
366DBMS's authorization subsystem (security subsystem):
367 - checks any given access request against the applicable constraints
368 - access request consists of:
369 - requested object
370 - requested operation
371 - requesting user
372
373 - the system must recognize the source of request:
374 - authentication mechanism:
375 - user && password
376 - fingerprint readers, voice verifiers, retina scanners, etc
377
378 - main approaches to data security in a DBMS:
379 - discretionary control:
380 - users have different access right on different objects
381 - the operations users are allowed are explicitly specified
382 - anything not explicitly authorized is forbidden
383
384 - mandatory control:
385 - every object has a classification (e.g. top secret, secret, etc)
386 - every user has a clearance level (same options as for classification levels)
387 - levels - strict ordering
388
389 Bell and La Padula rules:
390 - user x can retrieve object y if the clearance level of x is >= classification level of y
   - user x can update object y if the clearance level of x == the classification level of y
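The two rules can be sketched directly in code (the level names and their ordering are an example, not taken from the notes):

```python
# Bell-La Padula read/update rules over a strict ordering of levels.

LEVELS = {"public": 0, "secret": 1, "top secret": 2}

def can_read(clearance, classification):
    # retrieval: clearance of the user must dominate the classification
    return LEVELS[clearance] >= LEVELS[classification]

def can_update(clearance, classification):
    # update: per the notes, only allowed when the two levels are equal
    return LEVELS[clearance] == LEVELS[classification]
```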
392
393 DB user management:
394 - user:
395 - username
396 - password
397 - additional information: time intervals during which access is allowed; privileges; answers to some questions
398
399 SQL statements: CREATE/ALTER/DROP user
400 Privileges:
401 - GRANT/REVOKE privilege_list ON component TO/FROM user_list [options]
402
 The DB admin monitors the DB and keeps track of all operations performed by users via audit trails that contain:
404 - executed SQL statements
405 - user names, IPs, dates and times
406 - affected objects
407 - old values and new values for changed records
408
409 SQL injection:
   - an application executes SQL statements that are either fixed or generated using input data from the user
411 - a statement can be changed (when being generated) due to data supplied by the user
   - such a change is attempted when a user is trying to obtain additional access rights
413 - the user inputs code into input variables; the code is concatenated with SQL statements and executed
414 - the DBMS executes all syntactically valid statements
415
416 1. Changing an authentication statement:
417 - errors when executing the statement -> use error messages in subsequent executions
418
419 - authenticating with an username && password:
420 SELECT ... WHERE user = "uname" AND password = "upassword"
421 - if the statement returns at least 1 record, authentication is successful
422
423 Values that change the SQL statement (modify the action intended by the programmer):
424 - user: a; password: " OR 1 = 1 --
425
426 - SELECT ... WHERE ... = "v" ...
427 Possible value entered by the user:
428 x" OR 1 = (SELECT COUNT(*) FROM table) --
429 error -> change table name
430 - one can obtain the name of a table
431
432 x" AND user IS NULL --
433 error -> change column name
434 - one can obtain the name of a column
435
436 x" AND user LIKE "%pop%" --
437 error/no data -> change the name of the column or the string in the condition
438 - one can obtain the name of a user
439
440 0; DROP TABLE users
441
442 Prevention:
443 - data validation:
444 - regular expressions, allow only certain types of characters in the input data
445
446 - modify problematic characters:
447 - double the single/double quotes
448 - precede single and double quotes with \
449 - use parametrized statements:
450 SELECT ... WHERE user = ? AND password = ?, where ? is a parameter
451 - such a statement has a collection of parameters
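The difference between concatenated and parametrized statements can be demonstrated with the standard-library sqlite3 module (the table, users and values are invented for this example):

```python
# The classic ' OR 1 = 1 -- injection against a concatenated statement,
# and why ? placeholders stop it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil_user, evil_pass = "a", "' OR 1 = 1 --"

# Vulnerable: the input is concatenated into the statement text, so the
# condition becomes ... password = '' OR 1 = 1 --' (the -- comments out
# the trailing quote) and the statement returns every row
q = (f"SELECT * FROM users WHERE user = '{evil_user}' "
     f"AND password = '{evil_pass}'")
hacked = conn.execute(q).fetchall()            # injection succeeds

# Safe: the values are sent separately from the statement, so the quotes
# and -- are just characters inside a password that matches nothing
safe = conn.execute(
    "SELECT * FROM users WHERE user = ? AND password = ?",
    (evil_user, evil_pass),
).fetchall()                                   # no rows returned
```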
452
 Data encoding:
454 - if an intruder gains physical access to the server or the data center, we need to encode data
455 - storing/transmitting encoded data => stolen data is illegible for the intruder
456 - some DBMSs store data in an encoded form; using the data is impossible/extremely difficult (decoding cost is enormous)
457
458 Codes and ciphers:
459 - code:
460 - replace one word or phrase with another word/number/symbol
461 - cipher:
462 - replace letters with a letter/number/symbol
463
464 Steganography:
465 - hide the existence of the messages
466
467 Cryptography:
468 - hide the meaning of the message - encryption
469 - transposition:
470 - rearrange letters in the message (create anagrams)
471 - every letter keeps its identity, but changes position
472
473 - substitution:
     - pair the letters of the alphabet (randomly)
475 - replace each letter in the message with its pair
476 - every letter retains its position, but changes its identity
477
478 - plaintext:
479 - message before encryption
480
481 - ciphertext:
482 - message after encryption
483
484 - plain alphabet:
485 - alphabet used to write the message
486
487 - cipher alphabet:
488 - alphabet used to encrypt the message
489
490 Example:
491 - shift the original alphabet by 1 position (replace A with B, B with C, etc)
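The shift-by-1 example is the classic Caesar cipher; a minimal sketch:

```python
# Shift cipher: replace each letter by the one k positions later in the
# alphabet (k = 1 reproduces the example: A -> B, B -> C, ...).

def shift(text, k):
    a = ord("A")
    return "".join(
        chr((ord(c) - a + k) % 26 + a) if c.isalpha() else c
        for c in text.upper())

ciphertext = shift("HELLO", 1)    # "IFMMP"
plaintext = shift(ciphertext, -1)  # shifting back decrypts
```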
492
493 Algorithm:
494 - general encryption method
495 - not secret
496
497 Key:
498 - details of a particular encryption
499 - must be kept secret
 - a large number of possible keys must exist
501
502 The key distribution problem:
503 - if 2 parties want to exchange a message, they must first exchange a secret key:
504 - meet in person, use couriers
     - logistical difficulties, very high costs
506
507 - Whitfield Diffie, Martin Hellman, Ralph Merkle:
508 - one-way functions in modular arithmetic
509
510 One-way function: Y^x(mod P) (lecture 6, slide 27)
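A sketch of how the one-way function Y^x mod P lets two parties agree on a shared secret over a public channel (Diffie-Hellman key exchange; the small numbers here are toy values, real parameters are enormous):

```python
# Diffie-Hellman sketch with Y^x mod P as the one-way function.

P, Y = 23, 5          # public modulus and base, known to everyone

a, b = 6, 15          # private exponents, each kept secret by one party
A = pow(Y, a, P)      # Y^a mod P, sent in the clear
B = pow(Y, b, P)      # Y^b mod P, sent in the clear

secret_1 = pow(B, a, P)   # first party computes (Y^b)^a mod P
secret_2 = pow(A, b, P)   # second party computes (Y^a)^b mod P
# both equal Y^(a*b) mod P, so the parties now share a secret key
# without ever transmitting it
```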
511
512 Data encoding methods:
513
514 1. Use a secret encryption key:
515 - data: disciplina baze de date
516 - secret key: student
517
 (lecture 6, slide 28 - explained well there)
519
520 2. RSA:
521 - r = p * q
522 - r can pe publicly announced
523 - p, q primes
524
525 2. Choose a number such that:
526 - c co-prime with (p-1)(q-1)
527 c > (p-1)(q-1)
528
529 - c is used as a public encryption key
530
531 3. Determine a secret decryption key d that verifies:
532 d * c = 1 % (p-1)(q-1)
533
 Encryption:
   - compute the ciphertext v:
     v = m^c mod r

 Decryption:
   - recover the message m:
     m = v^d mod r
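The whole scheme fits in a few lines with toy primes (illustration only; real keys use primes hundreds of digits long; `pow(c, -1, phi)` for the modular inverse requires Python 3.8+):

```python
# Toy RSA: key generation, encryption, decryption.
from math import gcd

p, q = 3, 11
r = p * q                  # public modulus, 33
phi = (p - 1) * (q - 1)    # 20
c = 3                      # public encryption key, co-prime with phi
assert gcd(c, phi) == 1
d = pow(c, -1, phi)        # secret key: d * c ≡ 1 (mod phi), here d = 7

m = 5                      # message, must satisfy m < r
v = pow(m, c, r)           # encryption: v = m^c mod r
decrypted = pow(v, d, r)   # decryption: v^d mod r recovers m
```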
540
Lecture 7
==========
542 index I with search key <a, b, c>
543
544 a = 10 AND b = 5 AND c = 2 => B+ tree index, Hash index
545 a = 10 AND b = 5 => B+ tree index
546 b = 5 => Nothing
547 b = 5 and c = 2 => Nothing
548 d = 2 => Nothing
 a = 20 AND b = 10 AND c = 5 AND d > 11 => B+ tree index partly, Hash index partly
550
551 Tree index: I matches C if C contains exactly one term for each attribute in a prefix of the search key of I
552 Ex: if key <a, b, c>, prefix: <a> or <a, b>
553
554 Hash index: I matches C if C contains exactly one term for each attribute in the search key of I
555 Ex: if key <a, b, c> then a = 1 AND b = 1 AND c = 1
556
557 Joins - implementation techniques:
558 - iteration:
559 - simple/page oriented nested loops join
560 - block nested loops join
561 - indexing:
562 - index nested loops join
563 - partitioning:
564 - sort-merge join
565 - hash join
566
567 Equality join:
568 - join condition : Ei = Sj
569
570 Exams E:
571 - M pages
572 - pE records/page
573
574 Students S:
575 - N pages
576 - pS records/page
577
 Evaluation: number of I/O operations
579
580 Simple nested loops join:
581 - foreach tuple in foreach tuple
582 - Total cost: M + pE * M * N
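The tuple-at-a-time strategy can be sketched in memory (toy version; the M + pE * M * N formula counts page I/Os, which this in-memory sketch does not model, and the relations and join columns are invented for the example):

```python
# Simple nested loops join on the equality condition E[i] == S[j].

def nested_loops_join(E, S, i, j):
    result = []
    for e in E:              # for each tuple of the outer relation...
        for s in S:          # ...rescan the whole inner relation
            if e[i] == s[j]:
                result.append(e + s)
    return result

exams = [("alice", "DB"), ("bob", "OS")]
students = [("alice", 21), ("carol", 22)]
joined = nested_loops_join(exams, students, 0, 0)
```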
583
 Page-oriented nested loops join:
   - foreach page in foreach page
   - small improvement over simple nested loops join
587 - Total cost: M + M * N
588
589 Index nested loops join:
590 - Total cost: M + ( (M*pE) * cost of finding corresponding records in S)
591
592 Sorting:
593 - if data fits into available main memory:
594 - use an internal sorting algorithm
595
596 - if data doesn't fit into available main memory:
597 - use an external sorting algorithm:
598 - objective: minimize cost of disk access
599 - create runs: sort records in the data source that fit into main memory
600 - place runs into temporary files
601 - merge runs using an external sorting algorithm
602
603 Simple Two-Way merge sort:
604 - 3 buffer pages
605 - repeated passes over the data
606 - even large data sources can be sorted with a small amount of main memory
607
 N = number of pages
 Total cost: 2 * N * ([log2(N)] + 1) I/Os, where [x] denotes the ceiling
610
 External merge sort (general case):
   - the simple two-way merge sort does not use the available buffer pages effectively
   - with B buffer pages: create initial runs of B pages each, then merge B-1 runs at a time

 B - number of buffer pages
 N - number of pages to sort
 Total cost: 2 * N * ([log_(B-1)([N/B])] + 1) I/Os, where [x] denotes the ceiling
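Plugging concrete numbers into the two formulas shows how much the extra buffer pages help (N = 1000 and B = 10 are arbitrary example values):

```python
# Evaluate both external-sort cost formulas from the notes.
from math import ceil, log

N = 1000   # pages to sort

# two-way merge sort: 2N([log2 N] + 1) I/Os
two_way = 2 * N * (ceil(log(N, 2)) + 1)

# general external merge sort with B buffer pages:
# 2N([log_{B-1}([N/B])] + 1) I/Os
B = 10
external = 2 * N * (ceil(log(ceil(N / B), B - 1)) + 1)

# two_way is 22000 I/Os; with just 10 buffer pages it drops to 8000
```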
617
618 Sort-Merge join:
 M - number of pages of relation Exams
 N - number of pages of relation Students
621 E join S =>
622 - cost: O(MlogM) to sort E
623 - cost: O(NlogN) to sort S
624 - merge cost:
625 - M + N I/Os
626 - M * N I/Os worst case
627
628 Hash join:
629 - partitioning phase (building phase)
630 - probing phase (matching phase)
631
632 Cost:
633 partitioning - E,S - read,written once: 2*(M+N) I/Os
634 probing - scan each partition once: M+N I/Os
635 => total cost: 3 * (M+N) I/Os
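An in-memory sketch of the probing idea (toy version: a single partition that fits in memory, so only the build and probe steps are shown; relations and join columns are invented for the example):

```python
# Hash join sketch: build a hash table on one input, probe with the other.

def hash_join(E, S, i, j):
    table = {}
    for e in E:                         # build phase (on the smaller input)
        table.setdefault(e[i], []).append(e)
    result = []
    for s in S:                         # probe phase
        for e in table.get(s[j], ()):   # only matching tuples are compared
            result.append(e + s)
    return result

joined = hash_join([("alice", "DB")], [("alice", 21), ("bob", 20)], 0, 0)
```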
636
Lecture 10
==========
- linear trees:
  - at least one child of each join node is a base relation
641- left-deep trees:
642 - right child of each join node is a base relation
643- bushy tree:
644 - not linear
645
Lecture 11
==========
648Centralized DB systems:
649 - all data: single site
650 - transaction processing: sequential
651 - processor fails => system fails
652
653Distributed systems:
654 - multiple processors (+ memories)
655 - autonomous components
656 - data is stored at several sites
657 - each site is managed by a DBMS that can run independently of the other sites
658 - location of data - impact on:
659 - query processing
660 - query optimization
661 - concurrency control
662 - recovery
663
664 - impact of data distribution - transparent:
665 - users should be able to write queries without knowing/specifying the actual location of the data
666 - cost-based query optimization that takes into account communication costs && differences in local computation costs
667
668 - distributed transaction atomicity:
     - users should be able to write transactions accessing multiple sites just as they would write local transactions
670 - transactions are atomic:
671 - all changes persist if the transaction commits, none persist if it aborts
672
 - Homogeneous:
   - every site runs the same DBMS software

 - Heterogeneous (multi-DB system):
   - different sites run different DBMSs
678
679 - Gateway:
   - a software component that accepts requests (in some subset of SQL), submits them to local DBMSs, and returns the answers to the requestors
681
682 Challenges (for distributed DBs):
683 - deciding where to store data
684 - 2 sub-problems:
685 - fragmentation
686 - allocation
687 - distributed concurrency control:
688 - transaction schedules must be globally serializable
689 - distributed deadlock management
690 - reliability of distributed databases:
691 - transaction failures:
692 - one or more processors may fail
693 - the network may fail
694 - data must be synchronized
695
696 Client-Server systems:
697 - the client submits requests to a single site
698 - query processing is done at the server
699 Collaborating Server Systems:
700 - queries can span several sites
701
702 Storing data:
 - relations are stored at several sites
704 - accessing relations at remote sites => message-passing costs
705 - reduce costs:
706 - fragment a relation across several sites:
707 - store fragments where they are most often accessed
708 - replicate a relation at each site where it's needed the most
709
710 Fragmentation:
711 - break a relation into smaller relations (fragments)
712 - store the fragments instead of the relation itself
713
714 3 types:
715 - horizontal:
716 - fragment: subset of rows
717 - n selection predicates => n fragments (n record sets)
718 - fragments should be disjoint
719 - reconstruct the original relation:
720 - take the union of the horizontal fragments
721
722 - vertical:
723 - fragment: subset of columns
724 - performed using projection operators:
725 - must obtain a good decomposition
726 - reconstruction operator - natural join
727
728 - hybrid:
729 - horizontal + vertical fragmentation
730
731 Replication:
732 - store multiple copies of:
733 - a relation
734 - a relation fragment
735
736 How are the copies of the data kept current when the relation is changed?
737 - types:
738 - synchronous:
739 - transaction T modified relation R
740 - before T commits, it synchronizes all copies of R
741
742 - transaction T, object O (with several copies)
743 - no matter which copy of O it accesses, T should see the same value
744 - 2 basic techniques:
745 - voting:
746 - to modify O, T must write a majority of its copies
747 - when reading O, T must read enough copies to make sure it's seeing at least one current copy
       - e.g. 10 copies; 7 copies must be written; 4 copies should be read
       - each copy has a version number
       - not an attractive approach in most cases, because reads are usually much more common than writes
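The quorum condition behind the 10/7/4 example can be checked in one line (the function name is invented for this sketch):

```python
# Voting quorums over n copies: writing w copies and reading r copies
# guarantees every read overlaps the latest write when r + w > n, and
# two writes overlap when 2w > n (n=10, w=7, r=4 satisfies both).

def valid_quorum(n, w, r):
    return r + w > n and 2 * w > n

ok = valid_quorum(10, 7, 4)    # True: a read always sees a current copy
bad = valid_quorum(10, 5, 4)   # False: a read could miss the latest write
```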
751
752 - read-any write-all:
753 - modify O:
754 - T must write all copies
755 - read O:
756 T can read any copy
     - fast reads, slower writes (relative to the voting technique)
758 - most common approach to synchronous replication
759
760 Costs:
761 - before an update transaction T can commit, it must lock all copies of the modified relation/fragment
762 - T sends lock requests to remote sites
     - while waiting for the responses, T holds on to its other locks
764 - site/link failures:
765 - T cannot commit until the network/sites are back up
766 - no failures, locks immediately obtained:
767 - T must follow an expensive commit protocol
768
769 => asynchronous replication is more used
770
771 - asynchronous:
772 - transaction T modifies relation R
773 - R's copies are synchronized periodically
774 - i.e. it's possible that some of R's copies are outdated for brief periods of time
775 - distributed data independence is compromised
776 - a lot of current systems use this approach
777
778 - readers can just access one copy of O
779 - 2 approaches:
780 - difference between the 2 approaches: number of updatable copies (master copies)
781
782 - primary site replication:
     - exactly one copy of an object O is designated as the primary/master copy
784 - primary copy is published
785 - other sites can subscribe to (fragments of) the primary copy - secondary copies
786 - secondary copies cannot be updated
787
788 - main issue:
789 - how are the changes to the primary copy propagated to the secondary copies?
790 - capture the changes made by commited transactions
791 - apply these changes to the secondary copies
792
793 - capture:
794 - log-based capture:
795 - the log (used for recovery) is used to generate the Change Data Table (CDT) structure
796 - write log tail to stable storage => write all records affecting replicated relations to the CDT
797 - changes of aborted transactions must be removed from the CDT
798 - in the end, CDT contains only update log records of commited transactions
799
800 - procedural capture:
801 - capture is performed through a procedure that is automatically invoked (e.g. a trigger)
802 - typically, the procedure just takes a snapshot of the primary copy
803
804 - log based:
805 - smaller overhead
806 - smaller delay
807 - but it depends on proprietary log details
808
809 - apply:
810 - applies changes collected in the Capture step (from the CDT/snapshot) to the secondary copies
811 - primary site can continuously send the CDT
812 - secondary sites can periodically request a snapshot or the CDT from the primary site
813 - each secondary site runs a copy of the apply process
814
815 - log-based capture + continuous apply:
816 - minimizes delay in propagating changes
817 - procedural capture + application-driven apply:
818 - most flexible way to process changes
819
820 - peer-to-peer replication:
821 - several copies of object O can be master copies (i.e. updatable)
822 - changes to master copy must be propagated to the other copies
823 - conflict resolution strategy:
824 - 2 master copies are changed in a conflicting manner:
825 - in general - ad hoc approaches to conflict resolution
826
827 - best utilized when conflicts do not arise:
828 - each master site owns a fragment (usually a horizontal fragment):
829 - any 2 fragments updatable by different master sites are disjoint
830 - updating rights are held by one master site at a time