Recall the issue of deadlock? It arises when many parties want to access a resource at the same time - we impose concurrency control at different isolation levels to reduce the concurrency issues that could occur.
There's a very similar issue in networks: the multiple access problem. Let me outline the context and the key problem that MAC (Medium Access Control) has to solve.
There's a central hub, called the broadcast channel, which can broadcast a node's message to everyone connected to it. We call each entity connected to the broadcast channel a node. The issue is that when two nodes send a message at the same time, both messages get corrupted, neither is broadcast, and both have to be retransmitted.
To add clarity to the discussion, we formulate the problem as follows:
There are N nodes in the system, connected to the broadcast channel. The broadcast channel supports a bandwidth of R bits per second.
The solution should preferably ensure the following conditions:
1. If there's only one node sending a message, the message should be sent at R bits per second.
2. On average, a node should have an effective end-to-end throughput of R/N bits per second.
There are three categories of solutions to this:
Channelisation (No collision, good for high load)
Time Division Multiplexing (TDM): we give each node a chance to send in each time slot, where a time slot is the time it takes to send one frame (link layer packet) - L/R seconds for an L-bit frame. The rotation repeats, so exactly one node speaks at any time, using all the frequencies.
Frequency Division Multiplexing (FDM): we give each node its own frequency band of R/N bits per second to talk in, so multiple nodes (at most N) can talk at the same time. Both schemes are sketched below.
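To make the partitioning concrete, here's a minimal Python sketch of how TDM and FDM would divide the channel. The values of R, N, and L are made-up parameters, not from any real link.

```python
R = 10_000_000  # channel bandwidth: 10 Mbps (hypothetical)
N = 10          # number of nodes (hypothetical)
L = 1_000       # frame size in bits (hypothetical)

# TDM: time is cut into slots of L/R seconds; node i owns slots i, i+N, i+2N, ...
slot_duration = L / R                      # 100 microseconds per slot here
def tdm_owner(slot_index):
    return slot_index % N                  # round-robin slot ownership

# FDM: the band is cut into N sub-channels; node i always owns band i.
per_node_bandwidth = R / N                 # each node talks at R/N bps

print(f"TDM slot duration: {slot_duration * 1e6:.0f} us; slot 7 belongs to node {tdm_owner(7)}")
print(f"FDM per-node bandwidth: {per_node_bandwidth / 1e6:.1f} Mbps")
```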
Evaluating TDM, FDM
1. If there's only one node sending a message, the message should be sent at R bits per second.
=> TDM and FDM do not meet this condition: a lone sender is still confined to its own slot or band, so it gets only R/N bps (with N = 10 and R = 10 Mbps, a single active node sends at just 1 Mbps).
2. On average, a node should have effective end-to-end throughput of R/N bits per second.
=> TDM and FDM meet this condition perfectly (fair, R/N bps each, no collisions).
Randomisation (Collision, good for low load)
Slotted ALOHA:
Precondition:
- Similarly to TDM, we have time slots (L/R seconds).
- Each node may only begin sending a frame (link layer packet) at the start of a time slot.
- If a collision occurs (two or more frames sent in the same slot), all nodes detect it.
How it works:
- At the start of each time slot, each node that has data to send transmits a frame with probability p (0 <= p <= 1); after a collision, it keeps retrying in subsequent slots with probability p until it succeeds. See the sketch after this list.
Reason for collision:
- Two or more nodes happen to pick the same slot to send in.
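Here's a toy simulation of that rule - a sketch, not a faithful protocol implementation. Every node with data flips a biased coin each slot, and a slot is useful only when exactly one node transmits. num_nodes, p, and num_slots are arbitrary choices.

```python
import random

# Toy slotted-ALOHA simulation: in every slot, each backlogged node
# transmits with probability p; the slot is useful only if exactly one
# node transmits.
def slotted_aloha(num_nodes=10, p=0.1, num_slots=100_000, seed=0):
    rng = random.Random(seed)
    successes = 0
    for _ in range(num_slots):
        senders = sum(1 for _ in range(num_nodes) if rng.random() < p)
        if senders == 1:            # a lone sender gets the whole slot
            successes += 1
        # senders >= 2 is a collision: every frame in that slot is lost
    return successes / num_slots    # fraction of useful slots

print(f"fraction of useful slots: {slotted_aloha():.3f}")   # about 0.39
```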
Carrier Sensing Multiple Access (CSMA):
Precondition:
- There's no time slot
- Each node can sense whether anyone is talking (by detecting the electrical signal on the broadcast channel), and will not start sending while it senses someone else talking.
How it works:
- There is no randomisation here.
- A node sends its message whenever it senses that no one is talking (see the timeline sketch below).
Reason for collision:
- End-to-end channel propagation delay (the time it takes for one node's signal to reach, and be detected by, another node).
- This means two nodes can start talking at the same time, or one can start even after another has already started - the second node simply hasn't heard the first one yet.
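The following toy timeline shows why sensing alone can't prevent collisions: B begins before A's signal has had time to arrive, so both nodes sense an idle channel. The start times and the propagation delay D are made-up values.

```python
D = 5.0                          # end-to-end propagation delay (time units), hypothetical
starts = {"A": 0.0, "B": 3.0}    # when each node wants to transmit (hypothetical)

for node, t in starts.items():
    # a node can only hear a transmission that started at least D time units ago
    heard = [o for o, ot in starts.items() if o != node and ot + D <= t]
    if heard:
        print(f"{node} at t={t}: channel sensed busy, defers")
    else:
        print(f"{node} at t={t}: channel sensed idle, transmits")
# Both lines print "transmits": B started before A's signal reached it,
# so the two transmissions overlap and collide.
```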
CSMA with CD (Collision Detection):
Precondition:
- There is no time slot
- While transmitting, each node can detect another node's signal arriving at it, and will immediately stop sending if that detection occurs.
How it works:
- A node senses the channel and sends its message, but aborts as soon as it detects that someone else is also talking, rather than finishing a frame that is already corrupted (sketched below).
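Continuing the same toy timeline (made-up numbers again), this sketch shows what collision detection buys: each node aborts as soon as the other signal arrives, instead of wasting a full frame time.

```python
D = 5.0                          # propagation delay (time units), hypothetical
T_FRAME = 20.0                   # time to transmit one full frame, hypothetical
starts = {"A": 0.0, "B": 3.0}    # both sensed idle and began transmitting (as above)

for node, t in starts.items():
    other = next(o for o in starts if o != node)
    hears_other_at = starts[other] + D          # when the other signal arrives here
    if hears_other_at < t + T_FRAME:
        print(f"{node} detects the collision at t={hears_other_at} and aborts "
              f"after only {hears_other_at - t} time units of sending")
    else:
        print(f"{node} finishes its frame undisturbed")
# Plain CSMA would waste T_FRAME time units per node; with CD both nodes stop
# early and retry later (real Ethernet adds a random exponential backoff).
```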
Evaluating Slotted ALOHA, CSMA, CSMA/CD
1. If there's only one node sending a message, the message should be sent at R bits per second.
=> All of them meet this condition
2. On average, a node should have effective end-to-end throughput of R/N bits per second.
=> No: throughput here is random, and all of them can suffer collisions, so capacity is wasted on collided (and empty) slots. The sketch below quantifies this for slotted ALOHA.
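The randomness can be quantified for slotted ALOHA: a slot succeeds only when exactly one of the N nodes transmits, which happens with probability N * p * (1-p)^(N-1). A quick check (N is an arbitrary choice here) shows that even the best p leaves most of the channel idle or collided:

```python
import math

# Probability that a slot in slotted ALOHA carries exactly one frame,
# given N backlogged nodes each transmitting with probability p.
def slot_success(N, p):
    return N * p * (1 - p) ** (N - 1)

N = 10
print(f"efficiency at p = 1/N: {slot_success(N, 1 / N):.3f}")   # ~0.387 (p = 1/N maximizes it)
print(f"limit for large N:     1/e = {1 / math.e:.3f}")          # ~0.368
# Best case, only about 37% of slots carry a frame, so the aggregate
# throughput is ~0.37 R shared among the nodes, well short of N * (R/N) = R.
```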
Taking turns (focus on fairness)
Token passing: a node may talk only while it holds the token, which is passed from node to node in a fixed order (see the sketch below)
=> token overhead, latency, single point of failure
=> meets R/N bps goal.
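A toy round of token passing (node names and queue sizes are made up) makes both the overhead and the failure mode visible:

```python
from collections import deque

# Toy token-passing round: only the current token holder may transmit,
# then it hands the token to the next node.
nodes = deque(["A", "B", "C", "D"])
pending = {"A": 2, "B": 0, "C": 1, "D": 0}   # frames queued at each node (hypothetical)

for _ in range(len(nodes)):
    holder = nodes[0]
    if pending[holder] > 0:
        pending[holder] -= 1
        print(f"{holder} holds the token and sends one frame")
    else:
        print(f"{holder} holds the token but has nothing to send (pure overhead)")
    nodes.rotate(-1)    # pass the token to the next node
# If the token holder crashes here, the ring stalls until the token is
# regenerated: the single point of failure mentioned above.
```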
Reservation: at the start of a cycle, each node sends a bit indicating whether it wants to transmit (a 'reservation'). The nodes that reserved then speak in order. Immediately after this batch of transmissions there's another reservation phase, then another batch, and so on (sketched below).
=> overhead, latency
=> meets both "goals" (R/N bps goal, R bps goal) approximately.
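A toy reservation cycle (hypothetical nodes and wishes) to illustrate the two phases:

```python
# Toy reservation cycle: a short reservation phase (one bit per node),
# then a data phase where only the nodes that reserved transmit, in order.
want_to_send = {"A": True, "B": False, "C": True, "D": False}   # hypothetical

# Reservation phase: every node pays one mini-slot whether it talks or not.
reservations = [n for n, wants in want_to_send.items() if wants]
print("reservation bits:", {n: int(w) for n, w in want_to_send.items()})

# Data phase: reserved nodes speak back to back, collision-free.
for node in reservations:
    print(f"{node} transmits its frame at the full rate R")
# A lone sender only loses the short reservation phase, which is why this
# scheme approximately meets both the R goal and the R/N goal.
```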
Polling: there's a master node that polls each node in turn to ask whether it wants to speak (see the sketch below)
=> single point of failure, overhead
=> meets R/N bps goal
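And a toy polling round (the master, node names, and queues are all made up):

```python
# Toy polling round: a master asks each node in turn whether it has data.
pending = {"A": 1, "B": 0, "C": 2}          # frames queued at each node (hypothetical)

for node, frames in pending.items():        # the master polls nodes in order
    print(f"master polls {node}")           # each polling message is protocol overhead
    if frames > 0:
        print(f"  {node} replies with a frame")
    else:
        print(f"  {node} has nothing to send")
# If the master dies, nobody gets polled and the channel goes silent:
# the single point of failure noted above.
```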