Final suggestions - DOS 605B


File System

1. Compare the transparency-related design issues of a distributed file system to the design issues of a distributed OS. Differentiate, with an example, between structured and unstructured files.

Here is a comparison of the transparency-related design issues of a distributed file system with those of a distributed OS:

 Access transparency: In a distributed file system, users should be able to access files regardless of their location; in a distributed OS, processes should be able to access resources regardless of their location.
 Location transparency: In a distributed file system, users should not be aware of the physical location of files; in a distributed OS, processes should not be aware of the physical location of resources.
 Concurrency transparency: In a distributed file system, multiple users should be able to access files concurrently without interfering with each other; in a distributed OS, multiple processes should be able to access resources concurrently without interfering with each other.
 Failure transparency: In a distributed file system, users should not be aware of failures in the system; in a distributed OS, processes should not be aware of failures in the system.

Here is an example of the difference between structured and unstructured files:

 Structured files have a defined structure, such as a database table. The structure is typically defined using a schema or data model, which specifies the fields, their types, and any relationships or constraints between them. This makes it easy to find and access specific data within the file. For example, a customer database might have a table with columns for customer ID, name, address, and phone number.
 Unstructured files do not have a defined structure. They do not follow a schema, and their data is not organized into fields or columns. Unstructured files can consist of plain text, images, videos, audio recordings, documents, or any other form of data without a strict organization. This makes it harder to find and access specific data within the file. For example, a text file might contain a customer's name, address, and phone number, but finding a specific piece of information would require reading through the entire file. The sketch below makes this contrast concrete.
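As a small illustration (a sketch in Python; the file contents and field names are made up), structured data can be addressed by a named field, while unstructured text has to be scanned with a heuristic:

```python
import csv
import io

# Structured: a CSV "table" with a fixed schema (id, name, phone).
structured = io.StringIO("id,name,phone\n1,Alice,555-0100\n2,Bob,555-0199\n")
for row in csv.DictReader(structured):
    if row["id"] == "2":          # direct lookup by named field
        print(row["phone"])       # -> 555-0199

# Unstructured: free text; finding the phone number needs a scan.
unstructured = "Bob moved last year. You can reach him at 555-0199 most days."
for token in unstructured.split():
    if token[0].isdigit() and "-" in token:
        print(token.rstrip("."))  # crude pattern match -> 555-0199
```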
2. Short note on file access models in distributed file systems

File access models can be classified based on two parameters:

1. the unit of data access, and
2. the method used for accessing remote files.

There are four file access models based on the unit of data access:

 File-level transfer model: When an operation requires file data to be transferred across the network in either direction between client and server, the whole file is moved.

The advantages are:

1. Efficient, because network protocol overhead is incurred only once.

2. Better scalability, because it requires fewer accesses to the file server, reducing server load and network traffic.

3. Disk access routines on the server can be better optimized.

4. Offers a degree of resiliency to server and network failures.

The disadvantage is that it requires sufficient storage space on the client.

Examples: Amoeba, CFS, and the Andrew File System.

 Block-level transfer model: File data transfer takes place in units of data blocks. A file block is a contiguous portion of a file and is fixed in length.

Advantage: it does not require large storage space. It also eliminates the need to copy an entire file when only a small portion of the file data is needed.

Disadvantage: it generates more network traffic and more network protocol overhead, so this model performs worse than the file-level transfer model. Examples: Sun Microsystems' NFS and the Apollo Domain file system.

 Byte-level transfer model: In this model, file data transfer across the network in either direction between client and server takes place in units of bytes. It provides maximum flexibility but makes cache management difficult.
 Record-level transfer model: The three models discussed above are suitable for unstructured files; the record-level transfer model suits structured files. Here, file data transfer takes place in units of records. Example: RSS (Research Storage System).

There are two file access models based on the method used for accessing remote files:

 Remote access model: The client and server communicate with each other using a request-response protocol. The client sends a request to the server, and the server responds with the requested data or action. The remote service model is a popular choice for building distributed applications, because it allows developers to build applications that are not tied to a specific platform or location.
 Data caching model: Frequently accessed data is stored in a temporary location, such as memory, to improve the performance of an application. When a user requests data that is in the cache, the application can retrieve it quickly without having to access the original data source.

The file access model used in a distributed file system depends on a number of factors, including the size of the files, the network bandwidth, and the performance requirements of the system. A sketch contrasting the two access methods follows.
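Below is a minimal sketch in Python contrasting the remote access model with the data caching model for block reads. All class and parameter names are hypothetical; real clients (e.g., NFS) implement this logic inside the operating system.

```python
BLOCK_SIZE = 4096

class Server:
    """Pretend file server holding the master copy of a file."""
    def __init__(self, data):
        self.data = data
        self.requests = 0                       # counts network round trips

    def read_block(self, block_no):
        self.requests += 1                      # each call = one round trip
        start = block_no * BLOCK_SIZE
        return self.data[start:start + BLOCK_SIZE]

class RemoteAccessClient:
    """Remote access model: every read is forwarded to the server."""
    def __init__(self, server):
        self.server = server

    def read(self, block_no):
        return self.server.read_block(block_no)

class CachingClient:
    """Data caching model: a block is fetched once, then served locally."""
    def __init__(self, server):
        self.server, self.cache = server, {}

    def read(self, block_no):
        if block_no not in self.cache:          # miss -> go to the server
            self.cache[block_no] = self.server.read_block(block_no)
        return self.cache[block_no]             # hit -> no network traffic

srv = Server(b"x" * (4 * BLOCK_SIZE))
client = CachingClient(srv)
for _ in range(3):
    client.read(0)
print(srv.requests)                             # -> 1; a RemoteAccessClient would print 3
```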

3. Explain the difference between cache modification schemes used in distributed file system

In distributed file systems, cache modification schemes determine how modifications or updates to
files are handled in the cache of client machines. These schemes aim to balance the performance
benefits of caching frequently accessed data with the need to maintain consistency across multiple
clients. Here are two common cache modification schemes used in distributed file systems:
i. Write-through scheme
When a cache entry is modified, the new value is immediately sent to the server for updating the
master copy of the file.
Advantages:

 High degree of reliability and suitability for UNIX-like semantics.

 The risk of updated data being lost in the event of a client crash is low.

Disadvantage:
Poor write performance, since every write must wait for the server to be updated.
ii. Delayed-write scheme
To reduce network traffic for writes, the delayed-write scheme is used. When an entry is modified, the new data value is written only to the cache, and all updated cache entries are sent to the server at a later time.
There are three commonly used delayed-write approaches:

 Write on ejection from cache:

Modified data in the cache is sent to the server only when the cache-replacement policy decides to eject it from the client's cache. This can give good performance, but there is a reliability problem, since some server data may stay outdated for a long time.

 Periodic write:

The cache is scanned periodically and any cached data that has been modified since the last
scan is sent to the server.

 Write on close:
Modifications to cached data are sent to the server when the client closes the file. This does not help much in reducing network traffic for files that are open for very short periods or are rarely modified.

Advantage:
Write accesses complete more quickly, resulting in a performance gain.
Disadvantage:
 Reliability can be a problem, since modified data may be lost if a client crashes before its updates reach the server.

Here is a table that summarizes the key differences between write-through and delayed-write:

Feature Write-through Delayed-write

Data consistency Always up-to-date May be inconsistent

Network traffic Increased Reduced

Performance Can be slower Can be faster

Reliability More reliable Less reliable
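A minimal sketch of the two schemes in Python (the class names and the write-on-close flush policy are illustrative choices, not a particular system's API). It makes the table's trade-off concrete: the delayed-write cache sends one network write where write-through would send one hundred, at the cost of a window in which the server copy is stale.

```python
class Server:
    """Holds the master copies of files."""
    def __init__(self):
        self.master, self.writes = {}, 0

    def update(self, key, value):
        self.master[key] = value
        self.writes += 1                        # one network write per call

class WriteThroughCache:
    """Every modification is immediately sent to the server."""
    def __init__(self, server):
        self.server, self.cache = server, {}

    def write(self, key, value):
        self.cache[key] = value
        self.server.update(key, value)          # master copy always up to date

class DelayedWriteCache:
    """Modifications accumulate locally and are flushed on close."""
    def __init__(self, server):
        self.server, self.cache, self.dirty = server, {}, set()

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)                     # server copy is now stale

    def close(self):                            # "write on close" flush
        for key in self.dirty:
            self.server.update(key, self.cache[key])
        self.dirty.clear()

srv = Server()
cache = DelayedWriteCache(srv)
for i in range(100):
    cache.write("position", i)                  # 100 local writes, no traffic yet
cache.close()
print(srv.writes)                               # -> 1 (write-through would print 100)
```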

4. Discuss with an example the difference between the function of a stateful and stateless file
server

1. State: A stateful server remembers client data (state) from one request to the next; a stateless server keeps no state information.

2. Programming: A stateful server is harder to code; a stateless server is straightforward to code.

3. Efficiency: A stateful server is more efficient, because clients do not have to provide full file information every time they perform an operation; a stateless server is less efficient, because that information must be provided with each request.

4. Crash recovery: Difficult for a stateful server, due to the loss of state information; a stateless server can easily recover from failure, because there is no state that must be restored.

5. Information transfer: With a stateful file server, the client can send less data with each request; with a stateless file server, the client must specify complete file names in each request, specify the location for reading or writing, and re-authenticate on each request.

6. Extra services: Stateful servers can offer clients extra services such as file locking, and can remember read and write positions; a stateless server does not have to implement the state accounting associated with opening, closing, and locking files.

7. Operations: A stateful server supports Open, Read, Write, Seek, Close; a stateless server supports Read, Write.

8. Example: AFS is a stateful file server; classic NFS (v2/v3) is a stateless one.
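As a small illustrative sketch in Python (the servers are hypothetical; a real protocol carries these calls over a network): the stateless server needs the file name and offset in every request, while the stateful server remembers the position between requests, and that remembered state is exactly what is lost if the server crashes.

```python
DATA = b"abcdefghij"

class StatelessServer:
    """Keeps no per-client state: each request carries the full context."""
    def read(self, filename, offset, nbytes):
        return DATA[offset:offset + nbytes]

class StatefulServer:
    """Remembers the open file and the current position between requests."""
    def __init__(self):
        self.sessions = {}

    def open(self, client_id, filename):
        self.sessions[client_id] = 0            # per-client read position

    def read(self, client_id, nbytes):
        pos = self.sessions[client_id]
        self.sessions[client_id] = pos + nbytes # state survives the request
        return DATA[pos:pos + nbytes]

stateless, stateful = StatelessServer(), StatefulServer()
print(stateless.read("f", 0, 3), stateless.read("f", 3, 3))  # client tracks offsets
stateful.open("c1", "f")
print(stateful.read("c1", 3), stateful.read("c1", 3))        # server tracks them
```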
Synchronization
1. Give an example of a problem, with explanation, with process execution in distributed OS due
to clock drift

Clock drift is a common issue in distributed operating systems where multiple machines are
interconnected and collaborate on executing processes. It occurs when the clocks on different
machines deviate from real-time synchronization, resulting in time discrepancies between them.
This can lead to several problems, one of which is the incorrect ordering of events during process
execution.
Let's consider an example scenario with a distributed system consisting of two machines, Machine
A and Machine B. Both machines are executing processes that involve exchanging messages to
complete a task. However, due to clock drift, the clocks on Machine A and Machine B start to drift
apart.
Now, suppose Process X on Machine A sends a message to Process Y on Machine B. The timestamp
attached to the message is based on Machine A's local clock. However, due to the clock drift,
Machine B's clock may be slightly ahead or behind Machine A's clock.
If the clocks were perfectly synchronized, Machine B would receive the message from Process X
with a timestamp that corresponds to the order of events. However, in the presence of clock drift,
Machine B might receive the message out of order due to the time differences between the
machines.
For instance, let's say Machine B's clock is running slightly ahead of Machine A's clock due to
clock drift. As a result, Machine B might receive a message from Process X with a timestamp that is
earlier than the timestamp of a message it has already received from another process. This can lead
to incorrect ordering of events and disrupt the intended execution of the distributed system.
The consequences of such clock drift-related problems can vary depending on the specific
application and the actions taken by the distributed system. In some cases, it can lead to inconsistent
data, race conditions, or incorrect decision-making based on event ordering.
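A toy simulation of this scenario in Python (the 50 ms drift is an arbitrary assumed value) shows how a message that was really sent first can sort after a later one when the receiver orders events by timestamp:

```python
DRIFT_B = 0.050                    # Machine B's clock runs 50 ms fast (assumed)

real_send_B = 9.980                # B really sent its message FIRST (true time)
ts_from_B = real_send_B + DRIFT_B  # but B's fast clock stamps it 10.030

real_send_A = 10.000               # A sent its message 20 ms later
ts_from_A = real_send_A            # A's clock is accurate: stamped 10.000

# The receiver orders events by timestamp:
events = sorted([("message from A", ts_from_A),
                 ("message from B", ts_from_B)], key=lambda e: e[1])
print(events)  # A sorts before B, although B's message was really sent first
```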

2. Discuss the Global averaging distributed algorithm of physical clock synchronization

The Global Averaging algorithm is a distributed algorithm used for physical clock synchronization in distributed systems. It aims to reduce clock drift and achieve more accurate synchronization across multiple machines. It is widely used because it is easy to implement and does not require any special hardware or software. However, it has limitations, such as the need for all nodes to be able to communicate with each other and its susceptibility to residual clock drift between resynchronizations.

 In this approach the clock process at each node broadcasts its local clock time in the form of a “resync” message at the beginning of every fixed-length resynchronization interval. This is done when its local time equals T0 + iR for some integer i, where T0 is a fixed time agreed on by all nodes and R is a system parameter that depends on the total number of nodes in the system.
 After broadcasting its clock value, the clock process of a node waits for a time T, which is determined by the algorithm.
 During this waiting period the clock process collects the resync messages that arrive and records the time at which each one is received, from which it estimates the skew of each sender's clock. It then computes a fault-tolerant average of the estimated skews and uses it to correct the local clock, as in the sketch below.
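A sketch of the fault-tolerant averaging step in Python (the trimming parameter m, the number of extreme values discarded on each side, is a tuning assumption):

```python
def fault_tolerant_average(skews, m=1):
    """Drop the m largest and m smallest estimated skews (possibly coming
    from faulty clocks), then average the remaining values."""
    trimmed = sorted(skews)[m:len(skews) - m]
    return sum(trimmed) / len(trimmed)

# Skews (in seconds) estimated from the resync messages; the +9.0 entry is a
# faulty clock that would wreck a plain average.
skews = [0.002, -0.001, 0.003, 9.0, 0.000]
print(round(fault_tolerant_average(skews, m=1), 4))  # -> 0.0017, the correction to apply
```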

3. Discuss how a centralized algorithm of implementing mutual exclusion of critical section in a distributed OS is similar to what is done for a traditional OS.

A centralized algorithm for implementing mutual exclusion in a distributed OS is similar to what is done for a traditional OS in the following ways:
 In the centralized algorithm one process is selected as the coordinator, which may be the machine with the highest network address.

 When any process wants to enter a critical section, it sends a request message to the coordinator stating which critical section it wants to access.
 If no other process is currently in that critical section, the coordinator sends back a reply granting permission. When the reply arrives, the requesting process enters the critical section.
 If another process requests access to the same critical section, it is ignored or blocked until the first process exits the critical section and sends a message to the coordinator stating that it has exited.

 Advantages:
o The algorithm guarantees mutual exclusion by letting one process at a time into each critical region.
o It is also fair, as requests are granted in the order in which they are received.
o No process ever waits forever, so there is no starvation.
o It is easy to implement and requires only three messages per use of a critical region (request, grant, release).
o It can be used for more general resource allocation as well as for managing critical regions.
 Disadvantages:
o The coordinator is a single point of failure, the entire system may go down if it
crashes.
o If processes normally block after making a request, they cannot distinguish a
dead coordinator from ‘‘permission denied’’ since no message comes back.
o In a large system a single coordinator can become a performance bottleneck.
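A minimal single-machine sketch of the coordinator's logic in Python (message passing is simulated by return values; the names are illustrative):

```python
from collections import deque

class Coordinator:
    """Centralized mutual exclusion: one holder at a time, others queued."""
    def __init__(self):
        self.holder, self.queue = None, deque()

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            return "GRANT"                  # permission sent immediately
        self.queue.append(pid)              # block until the holder releases
        return None                         # no reply: the requester waits

    def release(self, pid):
        assert pid == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        if self.holder is not None:
            return ("GRANT", self.holder)   # wake the next waiting process

coord = Coordinator()
print(coord.request("P1"))   # -> GRANT   (messages so far: request + grant)
print(coord.request("P2"))   # -> None    (P2 is queued and blocks)
print(coord.release("P1"))   # -> ('GRANT', 'P2')  (release is the third message)
```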
4. Explain the Ricart-Agrawala/ Suzuki-Kasami’s algorithm for mutual exclusion in a distributed
system. Distinguish it from the Suzuki-Kasami’s/ Ricart-Agrawala’s algorithm for mutual
exclusion in distributed system.

The Ricart-Agrawala algorithm is a decentralized, permission-based algorithm. A site that wants to enter the critical section broadcasts a timestamped REQUEST to all other sites. A site that receives a REQUEST replies immediately unless it is executing the critical section, or it is itself requesting with an older (smaller) timestamp, in which case it defers the reply. A site enters the critical section once it has received replies from all N-1 other sites, so each entry costs 2(N-1) messages.

The Suzuki-Kasami algorithm is a decentralized, token-based algorithm. A single token circulates among the sites, and only the site holding the token may enter the critical section. A site that wants the token broadcasts a REQUEST carrying a sequence number to the other N-1 sites, and the token holder passes the token on to a waiting requester when it finishes. An entry therefore costs at most N messages (N-1 requests plus one token transfer), and zero messages if the site already holds the token.

The following table summarizes the key differences between the two algorithms:

Feature Ricart-Agrawala Suzuki-Kasami

Approach Permission-based (non-token) Token-based

Centralized or decentralized Decentralized Decentralized

Messages per critical section entry 2(N-1) At most N (0 if the token is already held)

The Suzuki-Kasami algorithm has lower message overhead and is often preferred when scalability is a concern, such as in large-scale distributed systems, while the Ricart-Agrawala algorithm is conceptually simpler and does not depend on safeguarding a single token.
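A sketch of the Ricart-Agrawala receive rule described above, with timestamp ties broken by process id so that all requests are totally ordered:

```python
def on_request(my_state, my_request_ts, my_id, req_ts, req_id):
    """Decide whether to REPLY to an incoming REQUEST or DEFER the reply.
    my_state is 'RELEASED', 'WANTED' or 'HELD'."""
    if my_state == "HELD":
        return "DEFER"                     # in the critical section: make them wait
    if my_state == "WANTED" and (my_request_ts, my_id) < (req_ts, req_id):
        return "DEFER"                     # my own request is older: it wins
    return "REPLY"                         # otherwise grant permission now

print(on_request("WANTED", my_request_ts=5, my_id=1, req_ts=7, req_id=2))    # DEFER
print(on_request("WANTED", my_request_ts=9, my_id=1, req_ts=7, req_id=2))    # REPLY
print(on_request("RELEASED", my_request_ts=0, my_id=1, req_ts=7, req_id=2))  # REPLY
```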

5. What are the problems of handling deadlock in a distributed OS. What are the deadlock
handling strategies in distributed OS.

Deadlock is a situation in which a group of processes are blocked, each waiting for a resource that
is held by another process in the group. Deadlocks can occur in distributed systems, where
processes may be running on different computers and communicating with each other over a
network.
Handling deadlock in a distributed operating system poses several challenges due to the distributed nature of the system. Some of the problems and complexities involved are:
1. Lack of Global State: Distributed systems often lack a centralized authority or global state that
provides a complete view of the system. The absence of global state makes it challenging to detect
and resolve deadlocks since information about the state of all processes and resources may not be
readily available.
2. Message Delays and Failures: In a distributed OS, message delays, losses, or failures are
common due to network issues. These communication problems can lead to difficulties in
accurately detecting deadlocks and coordinating deadlock resolution actions.
3. Scalability: Distributed systems can involve a large number of processes and resources, which
increases the complexity of deadlock detection and resolution. Scaling deadlock handling strategies
to accommodate a large number of participants can be a significant challenge.
4. Synchronization Overhead: Deadlock handling strategies often require synchronization and
coordination among processes to detect and resolve deadlocks. Implementing these mechanisms in
a distributed environment can introduce additional overhead and performance implications.

Here are the most common strategies for handling deadlock in a distributed OS:

 There are three strategies for handling deadlocks, viz., deadlock prevention, deadlock
avoidance, and deadlock detection.
 Handling of deadlock becomes highly complicated in distributed systems because no site
has accurate knowledge of the current state of the system and because every inter-site
communication involves a finite and unpredictable delay.
 Deadlock prevention is commonly achieved either by having a process acquire all the
needed resources simultaneously before it begins executing or by pre-empting a process
which holds the needed resource.
 This approach is highly inefficient and impractical in distributed systems. In deadlock
avoidance approach to distributed systems, a resource is granted to a process if the resulting
global system state is safe (note that a global state includes all the processes and resources of
the distributed system).
 However, due to several problems, deadlock avoidance is impractical in distributed systems.
 Deadlock detection requires examining the status of process-resource interactions for the presence of a cyclic wait.
 Deadlock detection seems to be the best approach to handling deadlocks in distributed systems.
 Deadlock handling using the detection approach entails addressing two basic issues: first, detection of existing deadlocks, and second, resolution of detected deadlocks.
 Detection itself involves two issues: maintenance of the wait-for graph (WFG) and searching the WFG for the presence of cycles (or knots), as in the sketch below.
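A minimal sketch in Python of the cycle search over a wait-for graph (the WFG is a plain adjacency dict mapping each process to the processes it waits for; knot detection for OR-request models is not shown):

```python
def has_deadlock(wfg):
    """Depth-first search for a cycle in the wait-for graph."""
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:
            return True                   # back edge -> cycle -> deadlock
        if p in done:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wfg.get(p, [])):
            return True
        visiting.remove(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wfg)

# P1 waits for P2, P2 for P3, P3 for P1: a cyclic wait, hence deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"]}))                # False
```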

1. Short Notes- Threads, Processor pool model, Idle workstation model

Threads:
 Threads are lightweight units of execution within a process. They are sometimes
referred to as lightweight processes.
 Threads share the same memory space as the process they belong to, allowing
them to access and modify the same data.
 Multiple threads within a process can execute concurrently, providing benefits such
as improved performance and responsiveness.
 Threads can be managed by the operating system or by a thread library provided
by a programming language or framework.
 Threads can communicate with each other through shared memory or
synchronization mechanisms like locks, semaphores, and condition variables.
 Common uses of threads include parallelizing tasks, handling I/O operations, and
improving user interface responsiveness.
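A minimal sketch showing threads sharing a process's memory and synchronizing with a lock:

```python
import threading

counter = 0                       # shared data: visible to every thread
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:                # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for all threads to finish
print(counter)                    # -> 40000; without the lock, updates could be lost
```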

Processor Pool Model:

 The processor-pool model is based on the observation that most of the time a user does
not need any computing power but once in a while the user may need a very large
amount of computing power for a short time.
 Therefore, unlike the workstation-server model in which a processor is allocated to each
user, in processor-pool model the processors are pooled together to be shared by the
users as needed.
 The pool of processors consists of a large number of microcomputers & minicomputers
attached to the network.
 Each processor in the pool has its own memory to load & run a system program or an application program of the distributed computing system.
 In this model no home machine is present & the user does not log onto any machine.
 This model has better utilization of processing power & greater flexibility.
 Example: Amoeba & the Cambridge Distributed Computing System.
Idle Workstation Model:

 A distributed computing system based on the workstation model consists of several workstations interconnected by a communication network.
 An organization may have several workstations located throughout an infrastructure where each workstation is equipped with its own disk & serves as a single-user computer.
 In such an environment, at any one time a significant proportion of the workstations are idle, which results in the waste of large amounts of CPU time.
 Therefore, the idea of the workstation model is to interconnect all these workstations by a high-speed LAN so that idle workstations may be used to process jobs of users who are logged onto other workstations & do not have sufficient processing power at their own workstations to get their jobs processed efficiently.
 Example: Sprite system & Xerox PARC.

Message Passing and RPC


1. Client-Server computing
In client-server computing, the client requests a resource and the server provides that resource. A server may serve multiple clients at the same time, while a client is in contact with only one server. The client and server usually communicate via a computer network, but sometimes they may reside in the same system.
Characteristics of Client Server Computing:
The salient points for client server computing are as follows:

 The client server computing works with a system of request and response. The client sends a
request to the server and the server responds with the desired information.
 The client and server should follow a common communication protocol so they can easily
interact with each other. All the communication protocols are available at the application
layer.
 A server can only accommodate a limited number of client requests at a time, so it uses a priority-based system to respond to the requests.
 Denial of Service attacks hinder a server's ability to respond to authentic client requests by inundating it with false requests.
 An example of a client server computing system is a web server. It returns the web pages to
the clients that requested them.

The client–server model is the most widely used computing model today. It is used in a wide variety
of applications, including:
 Web applications
 Database applications
 Enterprise resource planning (ERP) systems
 Customer relationship management (CRM) systems
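A minimal request-response sketch with Python sockets (the port number and the message format are arbitrary choices made for the example):

```python
import socket
import threading

# Server side: bind and listen first so the client cannot race the server.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5050))              # 5050 is an arbitrary port choice
srv.listen()

def serve_one():
    conn, _ = srv.accept()                 # wait for one client
    with conn:
        request = conn.recv(1024)          # read the client's request
        conn.sendall(b"response to: " + request)

t = threading.Thread(target=serve_one)
t.start()

# Client side: send a request, block until the server's response arrives.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5050))
    cli.sendall(b"GET /page")
    print(cli.recv(1024))                  # -> b'response to: GET /page'

t.join()
srv.close()
```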

2. Parameter Marshalling in RPC

• Marshalling is the packing of procedure parameters into a message packet.


• The RPC stubs call type-specific procedures to marshall (or unmarshall) all of the
parameters to the call.
• On the client side, the client stub marshalls the parameters into the call packet; on the
server side the server stub unmarshalls the parameters in order to call the server’s
procedure.
• On the return, the server stub marshalls return parameters into the return packet; the
client stub unmarshalls return parameters and returns to the client.

Marshalling involves the following actions:

 First, the arguments of the client process (or the results of the server process) are taken; these form the message data to be sent to the remote process.

 Second, the message data is encoded on the sender's computer. This encoding converts program objects into a stream form that is suitable for transmission.
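A sketch of what the stubs do, using JSON as the wire format (JSON is an illustrative choice; real RPC systems use their own type-specific encodings, such as XDR in Sun RPC):

```python
import json

def client_stub_marshall(proc_name, *args):
    """Pack the procedure name and arguments into a flat byte stream."""
    return json.dumps({"proc": proc_name, "args": list(args)}).encode("utf-8")

def server_stub_unmarshall(packet):
    """Recover the procedure name and arguments from the byte stream."""
    call = json.loads(packet.decode("utf-8"))
    return call["proc"], call["args"]

packet = client_stub_marshall("add", 2, 3)   # crosses the network as bytes
proc, args = server_stub_unmarshall(packet)
procedures = {"add": lambda a, b: a + b}     # the server's procedure table
print(procedures[proc](*args))               # -> 5; the reply is marshalled back the same way
```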

3. Synchronous and Asynchronous message passing

Synchronous and asynchronous message passing are two different communication models used in
concurrent and distributed systems. They describe how messages are sent and received between
different components or processes.
In synchronous message passing, one entity (usually a client) sends a message and a second entity (usually a server) receives it, carries out some processing, and sends back a response, which the first entity then processes. While the second entity is carrying out the processing, the first entity pauses, waiting for the response.

In asynchronous message passing each entity in the process does not have to wait for the next part
of the dialogue they are engaged in and can carry out some other task. For example, the server could
be carrying out some processor-intensive task for another service which it provides. This form of
message passing, where there is no close coordination between message passing entities, is known
as asynchronous message passing.
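A sketch of the two models using queues and threads (the sleep simulates the server's processing time; the queue stands in for the network):

```python
import queue
import threading
import time

requests, replies = queue.Queue(), queue.Queue()

def server():
    msg = requests.get()             # wait for one request
    time.sleep(0.1)                  # pretend to do some processing
    replies.put(msg.upper())         # send back the response

# Synchronous: send the request, then BLOCK until the reply arrives.
threading.Thread(target=server).start()
requests.put("hello")
print(replies.get())                 # sender pauses here -> 'HELLO'

# Asynchronous: send the request, keep working, collect the reply later.
threading.Thread(target=server).start()
requests.put("world")
print("doing other work...")         # not blocked while the server works
print(replies.get())                 # pick up the reply when convenient -> 'WORLD'
```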
Introductory matters
1. Discuss the five layers of TCP/IP protocol stack.

The TCP/IP protocol stack is a conceptual model that describes how data is exchanged between
computers over a network. It consists of five layers:
 Physical layer: The physical layer is the lowest layer of the TCP/IP protocol stack. It is
responsible for the physical transmission of data over a network medium, such as a cable,
wireless signal, or satellite link.
 Data link layer: The data link layer is responsible for error detection and correction on the
physical layer. It also provides for flow control, which ensures that data is not transmitted
too quickly for the receiving device to handle and provides a reliable connection between
two devices on a network.
 Network layer: The network layer is responsible for routing data packets across a network. It uses a routing protocol to determine the best path for a data packet to travel from its source to its destination.
 Transport layer: The transport layer provides a reliable connection between two applications. It ensures that data is delivered in the correct order and that any errors are corrected. It also provides a variety of services to applications, such as flow control and congestion control.
 Application layer: The application layer is the highest layer of the TCP/IP protocol stack. It provides services that applications use to communicate with each other, such as file transfer, email, and web browsing.
Each layer of the TCP/IP protocol stack performs a specific function, and the layers work together to ensure that data is transmitted and received reliably. The TCP/IP protocol stack is a flexible and scalable model that can support a wide variety of network applications. It is the most widely used protocol stack in the world, used by billions of devices every day.

2. Compare the design issues of a distributed file system to the design issues of a distributed
OS.

Distributed file systems and distributed operating systems are both complex systems that must deal
with a variety of challenges. However, there are some key differences in the design issues that each
type of system must address.
Distributed file systems must deal with the following issues:
1. Heterogeneity: Heterogeneity applies to the network, computer hardware, operating systems, and implementations by different developers. A key component of the heterogeneous distributed client-server environment is middleware. Middleware is a set of services that enables applications and end-users to interact with each other across a heterogeneous distributed system.
2. Openness: The openness of the distributed system is determined primarily by the degree
to which new resource-sharing services can be made available to the users. Open systems
are characterized by the fact that their key interfaces are published. It is based on a uniform
communication mechanism and published interface for access to shared resources. It can be
constructed from heterogeneous hardware and software.
3. Scalability: The system should remain efficient even with a significant increase in the number of users and resources connected. It shouldn't matter whether a program runs on 10 or 100 nodes; performance shouldn't vary. Scaling a distributed system requires consideration of a number of elements, including size, geography, and management.
4. Security: Security of an information system has three components: confidentiality, integrity, and availability. Encryption protects shared resources and keeps sensitive information secret when it is transmitted.
5. Failure Handling: When faults occur in hardware or software, programs may produce incorrect results or may stop before they have completed the intended computation, so corrective measures should be implemented to handle such cases. Failure handling is difficult in distributed systems because failures are partial, i.e., some components fail while others continue to function.
6. Concurrency: There is a possibility that several clients will attempt to access a shared resource at the same time. Multiple users make requests on the same resources, i.e., read, write, and update. Each resource must be safe in a concurrent environment. Any object that represents a shared resource in a distributed system must ensure that it operates correctly in a concurrent environment.
7. Transparency: Transparency ensures that the distributed system is perceived as a single entity by the users or application programmers, rather than as a collection of cooperating autonomous systems. The user should be unaware of where the services are located, and moving from a local machine to a remote one should be transparent.

3. Flynn’s classification of computer architecture

Flynn’s taxonomy is a classification of computer architectures proposed by Michael J. Flynn in 1966. The taxonomy is based on the number of instruction streams and data streams that can be processed simultaneously by a computer architecture. There are four categories in Flynn’s taxonomy:

1. Single Instruction Single Data (SISD): This category represents the traditional uniprocessor systems where a single stream of instructions is executed on a single stream of data.

2. Single Instruction Multiple Data (SIMD): This category represents the parallel processing systems where a single instruction is executed on multiple data streams.

3. Multiple Instruction Single Data (MISD): This category represents the systems where multiple instructions are executed on a single stream of data.

4. Multiple Instruction Multiple Data (MIMD): This category represents the parallel processing systems where multiple instructions are executed on multiple data streams.

4. Need for transparency in distributed systems

Transparency in distributed systems is important for a number of reasons, including:
 Simplicity: Transparency makes distributed systems easier to use and manage. Users should
not have to worry about the underlying complexity of the system, such as how the
components are connected or how the data is stored.
 Scalability: Transparency allows distributed systems to scale to larger sizes. As the system
grows, the transparency mechanisms can help to ensure that the system continues to operate
efficiently and reliably.
 Reliability: Transparency can help to improve the reliability of distributed systems. By
hiding the details of the system from users, transparency can help to prevent errors and
failures.
 Security: Transparency can help to improve the security of distributed systems. By hiding
the details of the system from users, transparency can help to prevent unauthorized access to
data and resources.
There are a number of different types of transparency that can be implemented in distributed
systems. Some of the most common types of transparency include:
 Access transparency
 Location transparency
 Replication transparency
 Concurrency transparency
 Failure transparency
 Mobility transparency
 Performance transparency
 Scaling transparency
Transparency is an important design goal for distributed systems. By implementing transparency,
system designers can make distributed systems easier to use, manage, scale, and secure.

5. Tightly & loosely coupled system

Loosely Coupled Multiprocessor System:

It is a type of multiprocessing system with distributed memory instead of shared memory. In a loosely coupled multiprocessor system, the data rate is lower than in a tightly coupled multiprocessor system, and modules are connected through an MTS (message transfer system) network.

Tightly Coupled Multiprocessor System:

It is a type of multiprocessing system with shared memory. In a tightly coupled multiprocessor system, the data rate is higher than in a loosely coupled multiprocessor system, and modules are connected through PMIN, IOPIN, and ISIN networks.

Let’s study the differences between loosely coupled and tightly coupled multiprocessor systems:

1. There is distributed memory in a loosely coupled multiprocessor system; there is shared memory in a tightly coupled multiprocessor system.

2. A loosely coupled multiprocessor system has a low data rate; a tightly coupled multiprocessor system has a high data rate.

3. The cost of a loosely coupled multiprocessor system is less; a tightly coupled multiprocessor system is more costly.

4. In a loosely coupled multiprocessor system, modules are connected through a message transfer system network; in a tightly coupled system, they are connected through PMIN, IOPIN, and ISIN networks.

5. In a loosely coupled multiprocessor, memory conflicts don't take place; a tightly coupled multiprocessor system can have memory conflicts.

6. A loosely coupled multiprocessor system has a low degree of interaction between tasks; a tightly coupled multiprocessor system has a high degree of interaction between tasks.

7. In a loosely coupled multiprocessor, there is a direct connection between processors and I/O devices; in a tightly coupled multiprocessor, the IOPIN network connects processors to I/O devices.

8. Applications of loosely coupled multiprocessors are in distributed computing systems; applications of tightly coupled multiprocessors are in parallel processing systems.

6. Private key & public key numerical


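The worked numerical did not survive in this copy, so here is a minimal substitute sketch with small assumed textbook primes (p = 3, q = 11); the same steps work for any valid key pair:

```python
# Toy RSA key pair with small assumed primes.
p, q = 3, 11
n = p * q                   # n = 33, the public modulus
phi = (p - 1) * (q - 1)     # phi(n) = 20
e = 3                       # public exponent, coprime to phi(n)
d = pow(e, -1, phi)         # private exponent (Python 3.8+): d = 7, since 3*7 = 21 = 1 mod 20

m = 4                       # plaintext message, must satisfy m < n
c = pow(m, e, n)            # encrypt with the PUBLIC key: 4**3 mod 33 = 31
print(c)                    # -> 31
print(pow(c, d, n))         # decrypt with the PRIVATE key: 31**7 mod 33 = 4
```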
