Resource protection is all about ensuring that two threads don't access the same resource at the same time. The implications of two threads accessing a TList, for example, could be quite bad. One thread checks the item count... it finds the list has a single item, so it reads it. Another thread checks the item count of the same list... it also finds the list has a single item... but in the meantime, the first thread deletes (and frees) the object it's just grabbed from the list. The second thread then dereferences a stale pointer and consequently goes bang.
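To make that concrete, here's a sketch of the unsafe pattern (the list and variable names are invented for illustration):

```delphi
// UNSAFE: two threads run this against the same TList with no protection.
if fSharedList.Count > 0 then
begin
  vItem := TObject(fSharedList[0]); // thread A reads the item...
  // ...thread B can run the same Count check here and grab the same pointer...
  fSharedList.Delete(0);
  vItem.Free;                       // ...so one thread frees an object the
                                    // other is still holding. Bang.
end;
```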
To avoid this kind of situation, various tools are provided. Synchronisation and object locking we've already covered... that leaves 'TCriticalSection' and the nicely named 'TMultiReadExclusiveWriteSynchronizer'.
These two are essentially the same, but with one subtle difference... the long one can help you avoid deadlock by letting you specify what kind of operation you are about to perform. But first, let's look at the short one... 'TCriticalSection'.
'TCriticalSection' can be considered to be a token. Without the token, your thread can't enter the code it protects, and there is only a single token, meaning that only one thread can be inside code protected by a given critical section at any one time. In our revised job processing thread, we have these two blocks of code.
Code:
fJobListCS.acquire;
try
  fJobList.add(aJob);
finally
  fJobListCS.release;
end;
Code:
fJobListCS.acquire;
try
  fJob := TMyJobDescription(fJobList[0]);
  fJobList.delete(0);
finally
  fJobListCS.release;
end;
At this point, just in case it's not clear... when you call a method of a thread object, even though the method belongs to a thread running in its own context, the method is executed in the context of the calling thread. So whilst we will only ever remove jobs from the queue within the context of the job processing thread, any other thread (the main VCL thread included) that has access to the job processor can add a job... that's why protecting the list when we add a job is so important.
Thankfully, using a critical section is pretty straightforward, as you can see from these examples. The methods we use are 'acquire' to obtain control of the critical section and 'release' to relinquish it.
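For completeness, the critical section itself needs creating and freeing somewhere. A typical arrangement (the class name 'TJobProcessor' is assumed here to match the job processing thread above) looks like this:

```delphi
uses
  Classes, SyncObjs; // TCriticalSection lives in SyncObjs

constructor TJobProcessor.Create;
begin
  inherited Create(true);              // create suspended, for example
  fJobListCS := TCriticalSection.Create;
  fJobList := TList.Create;
end;

destructor TJobProcessor.Destroy;
begin
  fJobList.Free;
  fJobListCS.Free;
  inherited;
end;
```

Every thread that touches fJobList must go through fJobListCS... the critical section only protects the list if everybody uses it.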
Critical sections are not without their problems... they take time and can present a processing bottleneck, especially if you have a lot of threads all trying to access common data through the same critical section. But these issues pale into insignificance when compared with deadlock. Deadlock occurs when two (or more) threads end up waiting on each other... each is waiting for something another thread holds before it will release a critical section, so none of them can ever proceed.
There are two key ways this can happen. The first is when your code unexpectedly leaves a protected block and fails to release the critical section, most likely courtesy of an exception. For that reason you should ALWAYS use try...finally when using critical sections, as illustrated in the examples. Failure to do so could leave your thread holding a critical section it should have relinquished, with every other thread that wants it blocked forever.
The other way you can end up with deadlock is when you have multiple critical sections protecting different data sets and different threads acquire them in different orders.
Code:
procedure TMyThread1.execute;
begin
  ...
  fGlobalDataQueueCS.acquire;
  try
    fGlobalResultBufferCS.acquire;
    try
      ...
    finally
      fGlobalResultBufferCS.release;
    end;
  finally
    fGlobalDataQueueCS.release;
  end;
  ...
end;

procedure TMyThread2.execute;
begin
  ...
  fGlobalResultBufferCS.acquire;
  try
    fGlobalDataQueueCS.acquire;
    try
      ...
    finally
      fGlobalDataQueueCS.release;
    end;
  finally
    fGlobalResultBufferCS.release;
  end;
  ...
end;
To avoid this scenario, if you use multiple critical sections to protect different resources, always make sure they are acquired (nested) in the same order in every thread.
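Applied to the example above, the fix is simply to make TMyThread2 take the critical sections in the same order as TMyThread1:

```delphi
procedure TMyThread2.execute;
begin
  ...
  // Same order as TMyThread1: data queue first, then result buffer.
  // Now neither thread can hold one critical section whilst waiting
  // for the other thread to give up the second.
  fGlobalDataQueueCS.acquire;
  try
    fGlobalResultBufferCS.acquire;
    try
      ...
    finally
      fGlobalResultBufferCS.release;
    end;
  finally
    fGlobalDataQueueCS.release;
  end;
  ...
end;
```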
There are other ways to end up in a deadlock situation, but by thinking ahead and planning your threads and how they interact with common objects, you can reduce the chances of it happening to practically zero. To help make your life a little easier in this respect, we come to the aptly named 'TMultiReadExclusiveWriteSynchronizer'.
So what does such a nicely named object do? Well, like a critical section, its purpose is to protect a resource from simultaneous access by multiple threads... unlike a critical section, however, it lets you do that according to the type of operation you are about to perform.
If you have multiple threads and they all want to read the protected resource, no problem... they can all read it at the same time, provided no thread is writing to it. If they want to write to it, however, they will have to wait until everyone has finished reading, and then take it in turns, as only a single thread is allowed to write to the protected resource at once.
Code:
...
// Reading from the protected resource
fOurMultiReadExclusiveWriteSynchronizer.beginRead;
try
  // Read from the protected resource
finally
  fOurMultiReadExclusiveWriteSynchronizer.endRead;
end;
...
// Writing to the protected resource
fOurMultiReadExclusiveWriteSynchronizer.beginWrite;
try
  // Write to the protected resource
finally
  fOurMultiReadExclusiveWriteSynchronizer.endWrite;
end;
...
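As with the critical section, the synchronizer needs creating and freeing, and it really shines when wrapped up inside a class so callers can't forget to use it. Here's a sketch of a little thread-safe list wrapper (the class and member names are invented for illustration):

```delphi
uses
  Classes, SysUtils; // TMultiReadExclusiveWriteSynchronizer lives in SysUtils

type
  TSafeList = class
  private
    fList: TList;
    fLock: TMultiReadExclusiveWriteSynchronizer;
  public
    constructor Create;
    destructor Destroy; override;
    function Count: integer;
    procedure Add(aItem: pointer);
  end;

constructor TSafeList.Create;
begin
  inherited;
  fList := TList.Create;
  fLock := TMultiReadExclusiveWriteSynchronizer.Create;
end;

destructor TSafeList.Destroy;
begin
  fLock.Free;
  fList.Free;
  inherited;
end;

function TSafeList.Count: integer;
begin
  fLock.beginRead;   // any number of threads can be reading at once
  try
    result := fList.Count;
  finally
    fLock.endRead;
  end;
end;

procedure TSafeList.Add(aItem: pointer);
begin
  fLock.beginWrite;  // one writer only, and no readers, at a time
  try
    fList.Add(aItem);
  finally
    fLock.endWrite;
  end;
end;
```

If reads vastly outnumber writes (the usual case for a shared lookup list), this lets the readers run in parallel where a plain critical section would have serialised them all.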