VOL 2 Python Design Pattern March 1 2025


Book_Into.md 2025-03-01

📖 This book is focused 80% on code examples and 20% on theory, and this very large work, spanning 1300+ pages, will make you a pro Python engineer if you sincerely complete it. Almost all of the code examples are ones you will face in daily life as a Python engineer, or are taken from actual interview situations, which I have meticulously collected from my many years in the industry as a software engineer.

For each and every code example, I analyze the code's output, with very detailed "under-the-hood" explanations of how and why that output is produced.

👉 If you really understand and master these few core fundamentals of a language, you will gain the complete mental framework to understand and handle most real-life coding situations.

👉 Hence, I wrote this book to give you a solid grip on the core principles of Python that a real-life Python engineer needs on a daily basis.

👉 These are also the topics interviewers love to ask about to check your understanding of core Python. If you can talk about these topics for half an hour, in most cases you will win the job.

🚀 Enjoy the learning, and I wish you the best. 🐍


You can connect with me on the following platforms:

💡 My Daily Newsletter on actionable AI (no-noise, 7 emails for 7 days)


🐦 TWITTER (@rohanpaul_ai) (61K+ Followers)
🟠 My Machine Learning YouTube Channel - @RohanPaul-AI (13K+ Subscribers)
LINKEDIN (34K+ Followers)

Kaggle (I am a Kaggle Master)

GITHUB (1K+ Followers)

- Bridge Design Pattern in Python
- Mediator Design Pattern in Python
- Prototype Design Pattern in Python
- The Abstract Factory Pattern in Python
- The Builder Design Pattern in Python
- Chain of Responsibility Design Pattern
- Command Design Pattern in Python
- The Facade Design Pattern in Python
- Flyweight Design Pattern in Python
- Observer Design Pattern in Python
- Adapter Design Pattern
- The Factory Method in Python (based on a single function written to handle our object creation task)
- Proxy Design Pattern in Python
- The Singleton Design Pattern in Python
- State Design Pattern in Python
- Strategy Pattern in Python
- Template Design Pattern
- Repository Pattern in Python

🐍🚀 Bridge Design Pattern in Python 🐍🚀

While the adapter pattern is used to make unrelated classes work together, the bridge pattern is
designed upfront to decouple an implementation from its abstraction.

Using the bridge pattern is a good idea when you want to share an implementation among
multiple objects. Basically, instead of implementing several specialized classes, defining all that is
required within each class, you can define the following special components:

• An abstraction that applies to all the classes
• A separate interface for the different objects involved

📌 In the Bridge Design Pattern the main goal is to separate the high-level logic from the low-level
operations.

📌 Use Cases for the Bridge Design Pattern:

1. When you want to avoid a permanent binding between an abstraction and its implementation. This is especially useful if the implementation must be selected or switched at runtime.

2. When both the abstractions and their implementations should be extensible by subclassing.
With the Bridge pattern, you can combine different abstractions and implementations and
extend them independently.

3. When you want to hide the implementation of an abstraction completely from clients. This is
beneficial when you need to share an implementation among multiple objects.

📌 Components of the Bridge Pattern:


1. Abstraction: This defines the abstraction's interface and maintains a reference to an object of type Implementor.

2. RefinedAbstraction: This is a subclass of Abstraction and represents the refined abstractions.

3. Implementor: This is the interface for the operations. Concrete classes implement this interface.

4. ConcreteImplementor: This is a subclass of Implementor and implements the concrete operations.

Let's take a Conceptual Example


This example illustrates the structure of the Bridge design pattern. It focuses on answering these
questions:

What classes does it consist of?

What roles do these classes play?

In what way are the elements of the pattern related?

from __future__ import annotations
from abc import ABC, abstractmethod


class Abstraction:
    def __init__(self, implementation: Implementation) -> None:
        self.implementation = implementation

    def operation(self) -> str:
        return (f"Abstraction: Base operation with:\n"
                f"{self.implementation.operation_implementation()}")


class ExtendedAbstraction(Abstraction):
    def operation(self) -> str:
        return (f"ExtendedAbstraction: Extended operation with:\n"
                f"{self.implementation.operation_implementation()}")


class Implementation(ABC):
    @abstractmethod
    def operation_implementation(self) -> str:
        pass


class ConcreteImplementationA(Implementation):
    def operation_implementation(self) -> str:
        return "ConcreteImplementationA: Here's the result on the platform A."


class ConcreteImplementationB(Implementation):
    def operation_implementation(self) -> str:
        return "ConcreteImplementationB: Here's the result on the platform B."


def client_code(abstraction: Abstraction) -> None:
    # ...
    print(abstraction.operation(), end="")
    # ...


if __name__ == "__main__":
    implementation = ConcreteImplementationA()
    abstraction = Abstraction(implementation)
    client_code(abstraction)

    print("\n")

    implementation = ConcreteImplementationB()
    abstraction = ExtendedAbstraction(implementation)
    client_code(abstraction)

Alright, now let's go step by step through the above conceptual example of the Bridge Design Pattern.

📌 Abstraction and Implementation: The core idea behind the Bridge pattern is to separate the abstraction from its implementation so that both can evolve independently. In the provided code:

- Abstraction and ExtendedAbstraction represent the abstraction side.
- Implementation (and its concrete classes ConcreteImplementationA and ConcreteImplementationB) represents the implementation side.

📌 Abstraction Class: The Abstraction class defines an interface that maintains a reference to
an Implementation object. This reference allows the Abstraction to delegate the real work to
the Implementation object. The operation method in Abstraction is a higher-level method
that uses the primitive operation provided by the Implementation .

📌 ExtendedAbstraction Class: The ExtendedAbstraction class is a refined version of the Abstraction. It extends the base Abstraction and can override its methods or introduce new ones. In this example, it overrides the operation method to provide a different message but still relies on the Implementation for the core functionality.

📌 Implementation Interface: The Implementation class is an abstract class (or interface) that
defines the method operation_implementation . This method is the primitive operation that all
concrete implementations must provide. The Abstraction relies on this method to perform its
tasks.

📌 Concrete Implementations: ConcreteImplementationA and ConcreteImplementationB are concrete classes that implement the Implementation interface. They provide the actual logic for the operation_implementation method. In this example, they return messages indicating they are operating on different platforms (Platform A and Platform B).

📌 Client Code and Composition: The client_code function demonstrates how the abstraction
works with the implementation. It takes an Abstraction object and calls its operation method.
The main point here is that the client code is decoupled from the specific implementation. It works
with any combination of Abstraction and Implementation .

In the if __name__ == "__main__": block, two scenarios are demonstrated:

1. A basic Abstraction is paired with ConcreteImplementationA.
2. An ExtendedAbstraction is paired with ConcreteImplementationB.

In both cases, the client code remains unchanged, demonstrating the flexibility and decoupling
achieved by the Bridge pattern.
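This decoupling also enables use case 1 from earlier: because the abstraction only holds a reference to its implementation, the implementation can be switched at runtime. A minimal runnable sketch of that idea, using condensed stand-ins for the classes above (ImplA and ImplB are illustrative names, not from the original code):

```python
from abc import ABC, abstractmethod

class Implementation(ABC):
    @abstractmethod
    def operation_implementation(self) -> str: ...

class ImplA(Implementation):
    def operation_implementation(self) -> str:
        return "A"

class ImplB(Implementation):
    def operation_implementation(self) -> str:
        return "B"

class Abstraction:
    def __init__(self, implementation: Implementation) -> None:
        self.implementation = implementation

    def operation(self) -> str:
        return f"Result: {self.implementation.operation_implementation()}"

abstraction = Abstraction(ImplA())
print(abstraction.operation())  # Result: A

# Swap the implementation at runtime -- no new Abstraction subclass needed.
abstraction.implementation = ImplB()
print(abstraction.operation())  # Result: B
```

Nothing in the abstraction hierarchy had to change; only the composed implementation object did.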

📌 Under-the-hood Theory: The Bridge pattern's primary goal is to decouple an abstraction from
its implementation, allowing both to vary independently. This is achieved by establishing a bridge
between the two hierarchies. In traditional OOP, we might have used inheritance to bind an
abstraction to its implementation. However, this tight coupling can lead to problems in large
systems, making them less flexible and harder to modify. The Bridge pattern addresses this by
using composition instead of inheritance. The abstraction maintains a reference to the
implementation, and any refined abstractions can work with any implementation, allowing for a
Cartesian product of combinations.

📌 Conclusion: The provided code effectively demonstrates the Bridge Design Pattern by
decoupling the abstraction ( Abstraction and ExtendedAbstraction ) from its implementation
( ConcreteImplementationA and ConcreteImplementationB ). This decoupling is achieved
through composition, where the abstraction maintains a reference to the implementation. The
client code, as a result, can work with any combination of abstraction and implementation,
showcasing the flexibility of the pattern.

Let's dive into the Bridge Design Pattern in Python.


📌 First, let's consider a scenario where we have a set of shapes and we want to render them in
different ways (e.g., using different rendering engines). Without the Bridge Pattern, we might end
up with a combinatorial explosion of classes.

Code without the Bridge Design Pattern:

class CircleRenderer:
    def render(self, radius):
        return f"Drawing a circle of radius {radius}"

class SquareRenderer:
    def render(self, side):
        return f"Drawing a square with side {side}"

class Circle:
    def __init__(self, radius):
        self.radius = radius
        self.renderer = CircleRenderer()

    def draw(self):
        return self.renderer.render(self.radius)

class Square:
    def __init__(self, side):
        self.side = side
        self.renderer = SquareRenderer()

    def draw(self):
        return self.renderer.render(self.side)

📌 Issues with the above code:

1. If we want to add another rendering method (e.g., a 3D renderer), we would need to create new classes for each shape-renderer combination.

2. The shape classes are tightly coupled with the renderer classes. This violates the Single Responsibility Principle and makes the code harder to maintain and extend.

📌 Now, let's refactor the code using the Bridge Design Pattern:

Code with the Bridge Design Pattern:

# Implementor
class Renderer:
    def render_circle(self, radius):
        pass

    def render_square(self, side):
        pass

# Concrete Implementor
class VectorRenderer(Renderer):
    def render_circle(self, radius):
        return f"Drawing a vector circle of radius {radius}"

    def render_square(self, side):
        return f"Drawing a vector square with side {side}"

# Concrete Implementor
class RasterRenderer(Renderer):
    def render_circle(self, radius):
        return f"Drawing a raster circle of radius {radius}"

    def render_square(self, side):
        return f"Drawing a raster square with side {side}"

# Abstraction
class Shape:
    def __init__(self, renderer):
        self.renderer = renderer

# Refined Abstraction
class Circle(Shape):
    def __init__(self, renderer, radius):
        super().__init__(renderer)
        self.radius = radius

    def draw(self):
        return self.renderer.render_circle(self.radius)

# Refined Abstraction
class Square(Shape):
    def __init__(self, renderer, side):
        super().__init__(renderer)
        self.side = side

    def draw(self):
        return self.renderer.render_square(self.side)

📌 Explanation:

1. We have an Implementor (Renderer) that provides the interface for the rendering operations.

2. We have two Concrete Implementors (VectorRenderer and RasterRenderer) that provide concrete implementations for the rendering operations.

3. We have an Abstraction (Shape) that maintains a reference to an object of type Implementor.

4. We have two Refined Abstractions (Circle and Square) that extend the Shape class and use the Renderer to perform their drawing operations.

📌 Benefits:
1. Shapes and renderers are decoupled. We can easily add a new shape or a new renderer
without modifying the existing code.

2. The code is more maintainable and flexible. We can switch rendering methods on the fly.

3. We avoid the combinatorial explosion of classes.

📌 The Bridge Pattern allows us to separate the abstraction from its implementation, giving us the
flexibility to vary them independently. This is achieved by establishing a bridge between the two.
In our example, the bridge is the relationship between the Shape and the Renderer .
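To tie the refactored classes together, here is a condensed, runnable sketch of the client code (only Circle and the two renderers are reproduced; Square is omitted for brevity). It shows the same shape paired with either renderer at construction time:

```python
# Condensed stand-ins for the refactored classes above.
class VectorRenderer:
    def render_circle(self, radius):
        return f"Drawing a vector circle of radius {radius}"

class RasterRenderer:
    def render_circle(self, radius):
        return f"Drawing a raster circle of radius {radius}"

class Shape:
    def __init__(self, renderer):
        self.renderer = renderer

class Circle(Shape):
    def __init__(self, renderer, radius):
        super().__init__(renderer)
        self.radius = radius

    def draw(self):
        # The shape delegates rendering to whatever renderer it was given.
        return self.renderer.render_circle(self.radius)

print(Circle(VectorRenderer(), 5).draw())  # Drawing a vector circle of radius 5
print(Circle(RasterRenderer(), 5).draw())  # Drawing a raster circle of radius 5
```

The same Circle class works with both renderers; switching the rendering method is just a matter of passing a different object.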

Let's delve deep into how the refactored code using the Bridge Design Pattern addresses the issues present in the original code.
📌 Issue 1: Combinatorial Explosion of Classes
In the original code, for every new shape or rendering method we introduce, we would need to
create a new class. This means if we had 5 shapes and 3 rendering methods, we would end up
with 15 different classes (5 shapes x 3 renderers). This is not scalable and would quickly become
unmanageable as the number of shapes or renderers grows.

📌 Solution with Bridge Pattern:


The Bridge Pattern separates the abstraction (shapes) from its implementation (renderers). This
means we can independently add new shapes or renderers without having to create a new class
for every combination.

In the refactored code, if we want to add a new shape, we simply create a new class for that
shape, inheriting from the Shape abstraction. If we want to add a new rendering method, we
create a new renderer class inheriting from the Renderer interface. This way, the number of
classes grows linearly with the number of shapes or renderers, not exponentially.
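The scaling difference is simple arithmetic; a back-of-the-envelope sketch (the counts 5 and 3 are just the hypothetical numbers from above):

```python
# Hypothetical counts: 5 shapes, 3 renderers.
shapes, renderers = 5, 3

# One class per (shape, renderer) combination -- the approach without the Bridge.
one_class_per_combination = shapes * renderers  # 15

# One class per variant in each dimension -- the Bridge approach.
one_class_per_variant = shapes + renderers      # 8

print(one_class_per_combination, one_class_per_variant)  # 15 8
```

Adding a fourth renderer raises the first count by 5 (one new class per shape) but the second count by only 1.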

📌 Issue 2: Tight Coupling between Shapes and Renderers


In the original code, each shape was tightly coupled with a specific renderer. This means if we
wanted to change the rendering method for a shape, we would need to modify the shape's class
or create a new one. This violates the Single Responsibility Principle, where a class should have
only one reason to change.

📌 Solution with Bridge Pattern:
The Bridge Pattern promotes loose coupling. In the refactored code, shapes and renderers are
decoupled. The Shape class (and its subclasses) doesn't directly call a specific rendering method.
Instead, it delegates the rendering to the Renderer interface. This allows us to easily switch
rendering methods on the fly without modifying the shape classes.

For instance, if we want a Circle to be rendered using a RasterRenderer instead of a VectorRenderer, we simply pass an instance of the RasterRenderer to the Circle during its initialization. The Circle doesn't need to know the specifics of how it's being rendered; it just knows it can call the render_circle method on its renderer.

📌 Overall Benefits of the Bridge Pattern in this Scenario:


1. Scalability: We can easily introduce new shapes or renderers without a combinatorial
explosion of classes.

2. Flexibility: We can switch rendering methods for a shape without modifying its class.

3. Maintainability: With a clear separation of concerns, the code is easier to maintain. If there's
a change in how a shape is defined or how a rendering method works, it's isolated to that
specific class and doesn't affect others.

4. Reusability: Renderers can be reused across different shapes, and shapes can be rendered
using any renderer, promoting code reusability.

In essence, the Bridge Pattern has transformed a rigid structure into a flexible one, where shapes
and renderers can evolve independently without affecting each other.

📌 Example - 1 Real-life Use Case: Let's consider a scenario where we have different types of
devices (like a TV or a Radio) and different types of remote controls (like a Basic Remote or an
Advanced Remote). The Bridge pattern can be used to decouple the device type from the remote
type.

# Implementor: Device
class Device:
    def turn_on(self):
        pass

    def turn_off(self):
        pass

# ConcreteImplementor: TV
class TV(Device):
    def turn_on(self):
        print("Turning on the TV")

    def turn_off(self):
        print("Turning off the TV")

# ConcreteImplementor: Radio
class Radio(Device):
    def turn_on(self):
        print("Turning on the Radio")

    def turn_off(self):
        print("Turning off the Radio")

# Abstraction: RemoteControl
class RemoteControl:
    def __init__(self, device):
        self.device = device

    def operate(self):
        pass

# RefinedAbstraction: BasicRemote
class BasicRemote(RemoteControl):
    def operate(self):
        print("Using Basic Remote:")
        self.device.turn_on()
        self.device.turn_off()

# RefinedAbstraction: AdvancedRemote
class AdvancedRemote(RemoteControl):
    def operate(self):
        print("Using Advanced Remote:")
        self.device.turn_on()
        print("Setting volume to maximum using Advanced Remote")
        self.device.turn_off()

# Client Code
tv = TV()
basic_remote = BasicRemote(tv)
basic_remote.operate()

radio = Radio()
advanced_remote = AdvancedRemote(radio)
advanced_remote.operate()

And the output is:

Using Basic Remote:
Turning on the TV
Turning off the TV
Using Advanced Remote:
Turning on the Radio
Setting volume to maximum using Advanced Remote
Turning off the Radio

📌 Description of the Example Code:

1. We have a Device interface (Implementor) with methods turn_on and turn_off.
2. TV and Radio are concrete devices (ConcreteImplementors) that implement the Device interface.
3. RemoteControl is an abstraction that has a reference to a device.
4. BasicRemote and AdvancedRemote are refined abstractions that extend the RemoteControl. They provide different ways to operate devices.
5. In the client code, we create a TV and a Radio device. We then use a Basic Remote to operate the TV and an Advanced Remote to operate the Radio.

📌 The beauty of this design is that if we introduce a new device or a new type of remote, we
don't need to change the existing classes. We can simply extend the appropriate abstraction or
implementor. This ensures that our code remains open for extension but closed for modification,
adhering to the Open-Closed Principle.
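As a sketch of that extension point, here is a condensed version of the example in which a hypothetical new device (Speaker, not part of the original code) is added and driven by the existing remote, which is left completely untouched:

```python
# Condensed stand-ins for the classes above.
class Device:
    def turn_on(self):
        pass

    def turn_off(self):
        pass

class BasicRemote:
    def __init__(self, device):
        self.device = device

    def operate(self):
        self.device.turn_on()
        self.device.turn_off()

# New device: Speaker is an illustrative addition, not from the original
# example. No existing class is modified (Open-Closed Principle).
class Speaker(Device):
    def turn_on(self):
        print("Turning on the Speaker")

    def turn_off(self):
        print("Turning off the Speaker")

BasicRemote(Speaker()).operate()
```

The remote works with the new device immediately, because it only depends on the Device interface, not on any concrete device.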

📌 Under-the-hood: The Bridge Pattern is essentially about preferring composition over
inheritance. Instead of having a monolithic hierarchy of classes where every combination of
attributes is represented as a class, you break down the attributes into separate hierarchies and
combine them using composition. This leads to a more modular, scalable, and maintainable
design.

Let's break down the provided code and see how it adheres to the principles and requirements of the Bridge Design Pattern:
📌 Decoupling Abstraction from Implementation:

- Abstraction: The RemoteControl class serves as the abstraction. It provides a high-level interface (operate) for all remote controls.

# Abstraction: RemoteControl
class RemoteControl:
    def __init__(self, device):
        self.device = device

    def operate(self):
        pass

- Implementation: The Device class is the implementation. It provides a basic interface for all devices with methods like turn_on and turn_off.

# Implementor: Device
class Device:
    def turn_on(self):
        pass

    def turn_off(self):
        pass

In the code, the RemoteControl doesn't directly implement the device operations but instead has
a reference to a Device . This decouples the remote control's operations from the device's
operations.

📌 Variation in Abstractions and Implementations:

- Refined Abstractions: We have BasicRemote and AdvancedRemote as refined abstractions. They extend the base abstraction (RemoteControl) and provide specialized ways to operate devices.
- Concrete Implementations: We have TV and Radio as concrete implementations. They provide the actual implementation for the device operations.

The code allows both the remote controls and the devices to vary independently. For instance, you
can introduce a new type of remote or a new device without altering the existing classes.

📌 Sharing of Implementation: The RemoteControl abstraction maintains a reference to the Device (implementation). This means that any refined abstraction (like BasicRemote or AdvancedRemote) can work with any device without having the device logic embedded within them. This promotes sharing of the implementation.

📌 Separate Interface for Different Objects:

- The Device class provides a separate interface for devices with methods like turn_on and turn_off.
- The RemoteControl class provides a separate interface for remote controls with the operate method.

This ensures that the responsibilities and operations of remote controls and devices are clearly
separated and can evolve independently.

📌 Extensibility: The design is extensible in both the abstraction and implementation dimensions:

- If you want to introduce a new type of remote, you can simply create a new class extending RemoteControl.
- If you want to introduce a new device, you can create a new class implementing the Device interface.

In both cases, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.

📌 Composition Over Inheritance: Instead of inheriting device behavior in the remote control
classes, the behavior is composed using a reference to the Device object. This promotes
flexibility and reduces the complexity that can arise from deep inheritance hierarchies.

In summary, the provided code effectively demonstrates the Bridge Design Pattern by decoupling
abstractions ( RemoteControl and its variants) from their implementations ( Device and its
variants), allowing them to vary independently, promoting composition over inheritance, and
ensuring extensibility in both dimensions.

Is there no need to provide @abstractmethod to the Device class?
📌 Note that, in many design pattern implementations, especially when using abstract base
classes, the @abstractmethod decorator is used to ensure that derived classes provide concrete
implementations of the abstract methods. This is a feature provided by the abc module in
Python.

In the context of the Bridge Design Pattern, if you want to enforce that every subclass of Device
must provide an implementation for turn_on and turn_off , then you should use the
@abstractmethod decorator.

Here's how you can modify the Device class to use @abstractmethod :

from abc import ABC, abstractmethod

class Device(ABC):

    @abstractmethod
    def turn_on(self):
        pass

    @abstractmethod
    def turn_off(self):
        pass

By doing this, if someone tries to create an instance of a subclass of Device that doesn't
implement turn_on or turn_off , Python will raise a TypeError .
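A minimal runnable sketch of that behavior (BrokenDevice is an illustrative name, not from the original example):

```python
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def turn_on(self): ...

    @abstractmethod
    def turn_off(self): ...

# This subclass implements only one of the two abstract methods.
class BrokenDevice(Device):
    def turn_on(self):
        print("on")

try:
    BrokenDevice()  # instantiation fails: turn_off is still abstract
except TypeError as e:
    print(f"TypeError: {e}")
```

Note that the error is raised at instantiation time, not at class-definition time, so the check only triggers when someone actually tries to create the incomplete object.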

📌 However, it's worth noting that the use of @abstractmethod is not strictly necessary for the
Bridge Design Pattern itself. It's more about enforcing a contract for subclasses in Python.
Whether or not to use it depends on the design goals. If you want to ensure that every device has
these methods implemented, then it's a good idea to use it. If you're okay with having default
implementations or just want to provide a template without enforcing its complete
implementation, then you might skip it.

How the Bridge Pattern prevents what's called the cartesian product complexity explosion
Alright, let's delve deep into the concept of how the Bridge Pattern prevents the cartesian product
complexity explosion.

📌 Understanding Cartesian Product Complexity Explosion: When designing object-oriented systems, one might be tempted to use inheritance to represent every combination of
characteristics that an entity can have. However, as the number of characteristics grows, the
number of combinations (and thus classes) grows exponentially. This is what we refer to as the
cartesian product complexity explosion. It's a situation where for every new variation of a
characteristic, you end up creating a new subclass for every existing class, leading to an explosion
in the number of classes.

📌 The Bridge Pattern's Role: The Bridge Pattern addresses this problem by separating the
different dimensions of variation, allowing them to evolve independently. Instead of having a class
for every combination, you have a class for every variation in each dimension, and you combine
them using composition.

📌 Example - 2 Real-life Use-case Code: Consider a UI system where you have different types of
controls (like Button, Checkbox) and different themes (like Dark, Light). Without the Bridge Pattern,
you might end up with classes like DarkButton, LightButton, DarkCheckbox, LightCheckbox, and so
on.

Let's see how the Bridge Pattern can prevent this complexity explosion:

from abc import ABC, abstractmethod

# Implementor: Theme
class Theme(ABC):
    @abstractmethod
    def apply(self, control_name: str) -> str:
        pass

class DarkTheme(Theme):
    def apply(self, control_name: str) -> str:
        return f"{control_name} with Dark Theme"

class LightTheme(Theme):
    def apply(self, control_name: str) -> str:
        return f"{control_name} with Light Theme"

# Abstraction: UI Control
class UIControl(ABC):
    def __init__(self, theme: Theme):
        self.theme = theme

    @abstractmethod
    def render(self) -> str:
        pass

class Button(UIControl):
    def render(self) -> str:
        return self.theme.apply("Button")

class Checkbox(UIControl):
    def render(self) -> str:
        return self.theme.apply("Checkbox")

# Client Code
dark_theme = DarkTheme()
button = Button(dark_theme)
print(button.render())  # Button with Dark Theme

light_theme = LightTheme()
checkbox = Checkbox(light_theme)
print(checkbox.render())  # Checkbox with Light Theme

📌 Explanation of the Code:

1. Theme is our Implementor. It represents the different themes a UI control can have.
2. DarkTheme and LightTheme are concrete implementations of the Theme. They provide the actual styling details for the controls.
3. UIControl is our Abstraction. It represents different types of UI controls.
4. Button and Checkbox are refined abstractions. They represent specific types of controls.
5. Instead of having a class for every combination (like DarkButton, LightButton, etc.), we have separate hierarchies for controls and themes. A control maintains a reference to a theme, and when it's rendered, it uses the theme's styling.

📌 Under-the-hood Theory: The Bridge Pattern's power lies in its ability to decouple two
orthogonal dimensions, allowing them to evolve independently. In our example, the UI controls
and themes are orthogonal; any control can have any theme. By separating them into different
hierarchies and combining them using composition, we avoid the need for a class for every
combination. This is how the Bridge Pattern prevents the cartesian product complexity explosion.
If we were to add a new theme or a new control, we wouldn't need to modify existing classes or
create a multitude of new ones.
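As a sketch of that last point, here is a condensed version of the example with a hypothetical new theme (HighContrastTheme, an illustrative addition, not part of the original code) added without touching any control class:

```python
from abc import ABC, abstractmethod

# Condensed stand-ins for the classes above.
class Theme(ABC):
    @abstractmethod
    def apply(self, control_name: str) -> str: ...

class Button:
    def __init__(self, theme: Theme):
        self.theme = theme

    def render(self) -> str:
        return self.theme.apply("Button")

# New theme: one new class, zero changes to the control hierarchy.
class HighContrastTheme(Theme):
    def apply(self, control_name: str) -> str:
        return f"{control_name} with High Contrast Theme"

print(Button(HighContrastTheme()).render())  # Button with High Contrast Theme
```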

Let's see what I mean by "UI controls and themes are orthogonal"
Let's break down the statement and delve into the concept of orthogonality and how it relates to
the Bridge Pattern.

📌 Orthogonal Dimensions: In mathematics and computer science, when two things are
described as orthogonal, it means they are independent or unrelated. In the context of design
patterns and software architecture, when we say two dimensions are orthogonal, we mean that
changes in one dimension don't affect or depend on changes in the other dimension.

📌 UI Controls and Themes as Orthogonal Dimensions: Taking the example of UI controls (like Button, Checkbox) and themes (like Dark, Light):

- A UI control's primary purpose is to provide a specific functionality: a button might initiate an action when clicked, and a checkbox might represent a binary choice.
- A theme, on the other hand, deals with the appearance or styling of controls: a dark theme might have a black background with white text, while a light theme might have a white background with black text.

The functionality of a button (its behavior when clicked) is independent of whether it's styled with
a dark or light theme. Similarly, the specifics of a dark theme's styling are independent of whether
it's applied to a button or a checkbox. This independence is what we mean when we say the UI
controls and themes are orthogonal.

📌 Bridge Pattern and Orthogonality: The Bridge Pattern shines in situations where you have
orthogonal dimensions. Instead of intertwining these dimensions (which would lead to the
cartesian product complexity explosion), the Bridge Pattern keeps them separate, allowing you to
combine any variation of one dimension with any variation of the other.

In our example:

- You can apply any theme (Dark, Light, etc.) to any UI control (Button, Checkbox, etc.) without having a class for every combination.
- If you introduce a new theme, you don't have to modify existing UI control classes. Conversely, if you introduce a new UI control, you don't have to modify existing theme classes.

📌 Benefits of Recognizing and Utilizing Orthogonality:

1. Flexibility: By keeping orthogonal dimensions separate, you can easily make changes in one dimension without affecting the other.
2. Reduced Complexity: You avoid the cartesian product complexity explosion, leading to fewer classes and a more maintainable system.
3. Reusability: Components designed with orthogonality in mind are more reusable. For instance, a theme designed independently of specific controls can be reused across multiple controls.

In conclusion, understanding and leveraging the concept of orthogonality is crucial in software design. The Bridge Pattern provides a structured way to handle orthogonal dimensions, ensuring that systems remain flexible, maintainable, and scalable.

📌 Conclusion:
The Bridge Pattern is a powerful tool in the object-oriented design toolkit. It helps manage
complexity by preventing the cartesian product complexity explosion, which can quickly make a
system unmanageable. By understanding and applying this pattern, you can design systems that
are more modular, easier to understand, and simpler to maintain.

Example - 3 - Another example of how the Bridge Pattern prevents the cartesian product complexity explosion
Let's explore the world of computer peripherals, specifically keyboards and their connection types.

Scenario: Imagine you're designing software for keyboards. Keyboards can come in different
layouts (like QWERTY, AZERTY, DVORAK) and can connect to computers using different connection
types (like USB, Bluetooth, or Wireless RF).

Without the Bridge Pattern, you might end up with classes like QWERTYUSBKeyboard ,
AZERTYBluetoothKeyboard , DVORAKWirelessRFKeyboard , and so on. If you have 3 keyboard
layouts and 3 connection types, you'd end up with 3 x 3 = 9 classes.

Let's see how the Bridge Pattern can help:

from abc import ABC, abstractmethod

# Implementor: ConnectionType
class ConnectionType(ABC):
    @abstractmethod
    def connect(self) -> str:
        pass

class USBConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using USB."

class BluetoothConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using Bluetooth."

class WirelessRFConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using Wireless RF."

# Abstraction: KeyboardLayout
class KeyboardLayout(ABC):
    def __init__(self, connection: ConnectionType):
        self.connection = connection

    @abstractmethod
    def type(self) -> str:
        pass

class QWERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on QWERTY layout. {self.connection.connect()}"

class AZERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on AZERTY layout. {self.connection.connect()}"

class DVORAKKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on DVORAK layout. {self.connection.connect()}"

# Client Code
usb = USBConnection()
keyboard1 = QWERTYKeyboard(usb)
print(keyboard1.type())  # Typing on QWERTY layout. Connected using USB.

bluetooth = BluetoothConnection()
keyboard2 = AZERTYKeyboard(bluetooth)
print(keyboard2.type())  # Typing on AZERTY layout. Connected using Bluetooth.

📌 Explanation of the Code:

1. ConnectionType is our Implementor. It represents the different connection types a keyboard can have.
2. USBConnection, BluetoothConnection, and WirelessRFConnection are concrete implementations of ConnectionType. They provide the actual details of how the keyboard connects.
3. KeyboardLayout is our Abstraction. It represents different types of keyboard layouts.
4. QWERTYKeyboard, AZERTYKeyboard, and DVORAKKeyboard are refined abstractions. They represent specific types of keyboard layouts.
5. Instead of having a class for every combination (like QWERTYUSBKeyboard , AZERTYBluetoothKeyboard , etc.), we have separate hierarchies for keyboard layouts and connection types. A keyboard layout maintains a reference to a connection type, and when it's used, it leverages the connection type's details.

📌 Under-the-hood Theory: The Bridge Pattern's essence is to separate orthogonal dimensions
(in this case, keyboard layouts and connection types) to prevent the Cartesian product complexity
explosion. By keeping these dimensions separate and combining them through composition, any
keyboard layout can work with any connection type without needing a class for every
combination. This design is scalable: introducing a new keyboard layout or a new connection type
doesn't require creating a multitude of new classes.
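To make that scalability claim concrete, here is a minimal sketch. Note that PS2Connection is a hypothetical addition invented for illustration, not part of the example above: adding a fourth connection type costs exactly one new class, and every existing layout works with it unchanged.

```python
from abc import ABC, abstractmethod

# Minimal re-statement of the two hierarchies from the example above.
class ConnectionType(ABC):
    @abstractmethod
    def connect(self) -> str: ...

class USBConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using USB."

# Hypothetical new implementor: one class, written once.
class PS2Connection(ConnectionType):
    def connect(self) -> str:
        return "Connected using PS/2."

class KeyboardLayout(ABC):
    def __init__(self, connection: ConnectionType):
        self.connection = connection

    @abstractmethod
    def type(self) -> str: ...

class QWERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on QWERTY layout. {self.connection.connect()}"

# Existing layouts work with the new connection type, untouched.
print(QWERTYKeyboard(PS2Connection()).type())
```

With N layouts and M connection types, the Bridge keeps the class count at N + M instead of N × M.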

📌 Conclusion: The provided code effectively demonstrates the Bridge Design Pattern's power in
preventing the Cartesian product complexity explosion. By recognizing and separating orthogonal
dimensions (keyboard layouts and connection types), we've designed a system that's flexible,
scalable, and maintainable.

Example 4 - Real-life use case of the Bridge Pattern in Python: the realm of operating systems and device drivers
Imagine we're designing a simple Operating System (OS) interface that communicates with
different types of devices, like printers and scanners. The OS provides a generic interface for all
devices, but the actual implementation (i.e., the driver) for each device type is provided by the
device vendors.

Here's how we can model this scenario using the Bridge Design Pattern:

from abc import ABC, abstractmethod

# Implementor: DeviceDriver
class DeviceDriver(ABC):

@abstractmethod
def connect(self):
pass

@abstractmethod
def disconnect(self):
pass

# ConcreteImplementor: PrinterDriver
class PrinterDriver(DeviceDriver):

def connect(self):
print("Connecting to the printer...")

def disconnect(self):
print("Disconnecting from the printer...")

# ConcreteImplementor: ScannerDriver
class ScannerDriver(DeviceDriver):

    def connect(self):
        print("Connecting to the scanner...")

def disconnect(self):
print("Disconnecting from the scanner...")

# Abstraction: OperatingSystem
class OperatingSystem:

    def __init__(self, driver: DeviceDriver):
        self.driver = driver

def detect_device(self):
print("Detecting device...")
self.driver.connect()

def eject_device(self):
print("Ejecting device...")
self.driver.disconnect()

# RefinedAbstraction: WindowsOS
class WindowsOS(OperatingSystem):

def detect_device(self):
print("Windows OS:")
super().detect_device()

# RefinedAbstraction: MacOS
class MacOS(OperatingSystem):

def detect_device(self):
print("Mac OS:")
super().detect_device()

# Client Code
printer = PrinterDriver()
windows = WindowsOS(printer)
windows.detect_device()
windows.eject_device()

scanner = ScannerDriver()
mac = MacOS(scanner)
mac.detect_device()
mac.eject_device()

📌 Description of the Example Code:

1. DeviceDriver is the abstract interface (Implementor) that all device drivers must adhere to. It has methods like connect and disconnect .
2. PrinterDriver and ScannerDriver are concrete implementations (ConcreteImplementors) of the DeviceDriver interface. They provide the actual logic to connect and disconnect from the respective devices.
3. OperatingSystem is the abstraction that communicates with the device drivers. It provides methods like detect_device and eject_device .
4. WindowsOS and MacOS are refined abstractions that extend the base OperatingSystem abstraction. They provide specialized ways to detect and eject devices based on the OS type.
5. In the client code, we create instances of the PrinterDriver and ScannerDriver. We then use WindowsOS to communicate with the printer and MacOS to communicate with the scanner.
📌 This design allows the OS to communicate with any device without knowing the specifics of the
device. If a new device type is introduced, the OS doesn't need to change. Only a new driver
(conforming to the DeviceDriver interface) needs to be added. This decoupling is the essence of
the Bridge Design Pattern.
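As a sketch of that extension path, a new device slots in without touching OperatingSystem. Note that WebcamDriver is a hypothetical vendor driver invented for illustration, and the methods return strings here (instead of printing) so the flow is easy to check:

```python
from abc import ABC, abstractmethod

# Same implementor interface as in the example above, trimmed to one method.
class DeviceDriver(ABC):
    @abstractmethod
    def connect(self) -> str: ...

# Hypothetical new vendor driver -- the only code that has to be written.
class WebcamDriver(DeviceDriver):
    def connect(self) -> str:
        return "Connecting to the webcam..."

# The abstraction is reused verbatim; it never learns about webcams.
class OperatingSystem:
    def __init__(self, driver: DeviceDriver):
        self.driver = driver

    def detect_device(self) -> str:
        return f"Detecting device... {self.driver.connect()}"

os_with_webcam = OperatingSystem(WebcamDriver())
print(os_with_webcam.detect_device())
```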

For Example 4 - Let's dissect the provided code to understand how it aligns with the principles and requirements of the Bridge Design Pattern:
📌 Decoupling Abstraction from Implementation:
Abstraction: The OperatingSystem class serves as the abstraction. It provides a high-level
interface ( detect_device and eject_device ) for all operating systems to communicate
with devices.

# Abstraction: OperatingSystem
class OperatingSystem:

    def __init__(self, driver: DeviceDriver):
        self.driver = driver

def detect_device(self):
print("Detecting device...")
self.driver.connect()

def eject_device(self):
print("Ejecting device...")
self.driver.disconnect()

Implementation: The DeviceDriver class is the implementation. It provides a basic
interface for all device drivers with methods like connect and disconnect .

# Implementor: DeviceDriver
class DeviceDriver(ABC):

@abstractmethod
def connect(self):
pass

@abstractmethod
def disconnect(self):
pass

In the code, the OperatingSystem doesn't directly implement the device operations but instead
has a reference to a DeviceDriver . This decouples the OS's operations from the device driver's
operations.

📌 Variation in Abstractions and Implementations:

Refined Abstractions: We have WindowsOS and MacOS as refined abstractions. They extend
the base abstraction ( OperatingSystem ) and provide specialized ways to communicate with
devices based on the OS type.
Concrete Implementations: We have PrinterDriver and ScannerDriver as concrete
implementations. They provide the actual logic to connect and disconnect from their
respective devices.

# ConcreteImplementor: PrinterDriver
class PrinterDriver(DeviceDriver):

def connect(self):
print("Connecting to the printer...")

def disconnect(self):
print("Disconnecting from the printer...")

# ConcreteImplementor: ScannerDriver
class ScannerDriver(DeviceDriver):

def connect(self):
print("Connecting to the scanner...")

def disconnect(self):
print("Disconnecting from the scanner...")

The code allows both the operating systems and the device drivers to vary independently. For
instance, you can introduce a new type of OS or a new device driver without altering the existing
classes.

📌 Sharing of Implementation: The OperatingSystem abstraction maintains a reference to the
DeviceDriver (implementation). This means that any refined abstraction (like WindowsOS or
MacOS ) can work with any device driver without having the device driver logic embedded within
them. This promotes sharing of the implementation.

# RefinedAbstraction: WindowsOS
class WindowsOS(OperatingSystem):

def detect_device(self):
print("Windows OS:")
super().detect_device()

# RefinedAbstraction: MacOS
class MacOS(OperatingSystem):

def detect_device(self):
print("Mac OS:")
super().detect_device()

📌 Separate Interface for Different Objects: - The DeviceDriver class provides a separate
interface for device drivers with methods like connect and disconnect . - The OperatingSystem
class provides a separate interface for operating systems with methods like detect_device and
eject_device .

This ensures that the responsibilities and operations of operating systems and device drivers are
clearly separated and can evolve independently.
📌 Extensibility: The design is extensible in both the abstraction and implementation
dimensions: - If you want to introduce a new type of OS, you can simply create a new class
extending OperatingSystem . - If you want to introduce a new device driver, you can create a new
class implementing the DeviceDriver interface.

In both cases, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.

📌 Composition Over Inheritance: Instead of inheriting device driver behavior in the OS classes,
the behavior is composed using a reference to the DeviceDriver object. This promotes flexibility
and reduces the complexity that can arise from deep inheritance hierarchies.
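Because the driver is held by composition rather than baked in by inheritance, it can even be swapped on a live OS object. Here is a minimal sketch reusing simplified, string-returning versions of the classes above (the return values are an assumption made for easy checking):

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    @abstractmethod
    def connect(self) -> str: ...

class PrinterDriver(DeviceDriver):
    def connect(self) -> str:
        return "Connecting to the printer..."

class ScannerDriver(DeviceDriver):
    def connect(self) -> str:
        return "Connecting to the scanner..."

class OperatingSystem:
    def __init__(self, driver: DeviceDriver):
        self.driver = driver

    def detect_device(self) -> str:
        return f"Detecting device... {self.driver.connect()}"

os_ = OperatingSystem(PrinterDriver())
first = os_.detect_device()

os_.driver = ScannerDriver()   # swap the implementation at runtime
second = os_.detect_device()

print(first)
print(second)
```

With inheritance (e.g. a PrinterWindowsOS class), this runtime swap would be impossible: the combination would be frozen at class-definition time.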

📌 Real-world Analogy: The analogy of an operating system communicating with device drivers is
a fitting real-world example. In actual OS designs, the OS doesn't need to know the specifics of
every device. Instead, it communicates through a standardized interface (like our DeviceDriver ),
and the specifics are handled by the device drivers. This allows new devices to be added without
changing the OS, and new OS versions can be released without changing the drivers.

In summary, the provided code effectively demonstrates the Bridge Design Pattern by decoupling
abstractions ( OperatingSystem and its variants) from their implementations ( DeviceDriver and
its variants), allowing them to vary independently, promoting composition over inheritance, and
ensuring extensibility in both dimensions.

Note on the @abstractmethod decorator - It indicates that a method is abstract and must be overridden by any non-abstract derived class - Let's see how
📌 Decorator: In Python, a decorator is a design pattern that allows you to add new functionality
to an existing object without modifying its structure. Decorators are very powerful and useful tools
in Python since they allow programmers to modify the behavior of functions or classes. In our
context, abstractmethod is a decorator provided by the abc module.

📌 abstractmethod: This specific decorator, when applied to a method within a class, designates
that method as being abstract. An abstract method is a method that is declared but does not have
an implementation within the class it's declared in.

📌 Must be overridden: If a class has an abstract method, it means that any subclass (or derived
class) that is intended to be instantiated (i.e., you want to create objects of that subclass) must
provide an implementation for this abstract method. If it doesn't, Python will raise a TypeError
when you try to create an instance of that subclass.

📌 Non-abstract derived class: A derived class (or subclass) that provides implementations for
all the abstract methods of its base class is termed as non-abstract. If a derived class does not
provide implementations for all the abstract methods, it remains abstract, and you can't create
instances of it.
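Here is a small self-contained demonstration of that behavior (BrokenDriver and GoodDriver are hypothetical names invented for illustration):

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    @abstractmethod
    def connect(self): ...

# This subclass does NOT override connect(), so it remains abstract.
class BrokenDriver(DeviceDriver):
    pass

# This subclass overrides every abstract method, so it is non-abstract.
class GoodDriver(DeviceDriver):
    def connect(self):
        return "connected"

try:
    BrokenDriver()              # instantiation is refused
except TypeError as exc:
    print(f"TypeError: {exc}")

print(GoodDriver().connect())   # fine: all abstract methods implemented
```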

Example 5 - Real-life use case of the Bridge Pattern in Python: the world of multimedia players and different file formats

Let's explore an example involving the world of multimedia players and different file formats.

Imagine we're designing a multimedia system where we have different types of players (like a
Video Player or an Audio Player) and different file formats (like MP3, MP4, or WAV). The Bridge
pattern can be used to decouple the player type from the file format.

Here's how we can model this scenario using the Bridge Design Pattern:

from abc import ABC, abstractmethod

# Implementor: MediaFile
class MediaFile(ABC):

@abstractmethod
def play(self):
pass

# ConcreteImplementor: MP3File
class MP3File(MediaFile):

def play(self):
print("Playing MP3 file...")

# ConcreteImplementor: MP4File
class MP4File(MediaFile):

def play(self):
print("Playing MP4 video...")

# ConcreteImplementor: WAVFile
class WAVFile(MediaFile):

def play(self):
print("Playing WAV audio...")

# Abstraction: MediaPlayer
class MediaPlayer:

    def __init__(self, media_file: MediaFile):
        self.media_file = media_file

def play_media(self):
pass

# RefinedAbstraction: VideoPlayer
class VideoPlayer(MediaPlayer):

def play_media(self):
print("Using Video Player:")
self.media_file.play()

# RefinedAbstraction: AudioPlayer
class AudioPlayer(MediaPlayer):

def play_media(self):
print("Using Audio Player:")
self.media_file.play()

# Client Code
mp3 = MP3File()
audio_player = AudioPlayer(mp3)
audio_player.play_media()

mp4 = MP4File()
video_player = VideoPlayer(mp4)
video_player.play_media()

wav = WAVFile()
audio_player2 = AudioPlayer(wav)
audio_player2.play_media()

📌 Description of the Example Code:

1. MediaFile is the abstract interface (Implementor) that all media files must adhere to. It has a method play to play the media.
2. MP3File, MP4File, and WAVFile are concrete implementations (ConcreteImplementors) of the MediaFile interface. They provide the actual logic to play their respective file types.
3. MediaPlayer is the abstraction that communicates with the media files. It provides a method play_media to play the media using a specific player.
4. VideoPlayer and AudioPlayer are refined abstractions that extend the base MediaPlayer abstraction. They provide specialized ways to play media based on the player type.
5. In the client code, we create instances of different media files. We then use different players to play these files.

📌 This design allows the media player to play any file without knowing the specifics of the file
format. If a new file format is introduced, the player doesn't need to change. Only a new file
format class (conforming to the MediaFile interface) needs to be added. This decoupling is the
essence of the Bridge Design Pattern.

📌 Complexity: This design can be further expanded by adding features like pause , stop , or
rewind to the players and file formats. We can also introduce more refined abstractions like
StreamingPlayer or LocalPlayer and more file formats like AVI or FLAC . The Bridge pattern
will ensure that the design remains modular and maintainable as it grows.
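As a hedged sketch of that expansion (FLACFile and the pause methods are hypothetical additions, not part of the example above), a new capability is declared once on the Implementor interface and simply forwarded by the player:

```python
from abc import ABC, abstractmethod

# Extended implementor interface: play() plus a hypothetical pause().
class MediaFile(ABC):
    @abstractmethod
    def play(self) -> str: ...

    @abstractmethod
    def pause(self) -> str: ...

class FLACFile(MediaFile):          # hypothetical new format
    def play(self) -> str:
        return "Playing FLAC audio..."

    def pause(self) -> str:
        return "FLAC paused."

class AudioPlayer:
    def __init__(self, media_file: MediaFile):
        self.media_file = media_file

    def play_media(self) -> str:
        return f"Using Audio Player: {self.media_file.play()}"

    def pause_media(self) -> str:
        # The player only forwards; the format supplies the specifics.
        return f"Using Audio Player: {self.media_file.pause()}"

player = AudioPlayer(FLACFile())
print(player.play_media())
print(player.pause_media())
```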

For Example 5 - Let's break down the multimedia system example to understand its alignment with the principles and requirements of the Bridge Design Pattern:
📌 Decoupling Abstraction from Implementation: - Abstraction: The MediaPlayer class
serves as the abstraction. It provides a high-level interface ( play_media ) for all media players to
play different media files.

# Implementor: MediaFile
class MediaFile(ABC):

@abstractmethod
def play(self):
pass

Implementation: The MediaFile class is the implementation. It provides a basic interface
for all media file formats with a method like play .

In the code, the MediaPlayer doesn't directly implement the media playing operations but
instead has a reference to a MediaFile . This decouples the media player's operations from the
media file's operations.

📌 Variation in Abstractions and Implementations:

Refined Abstractions: We have VideoPlayer and AudioPlayer as refined abstractions. They extend the base abstraction ( MediaPlayer ) and provide specialized ways to play media based on the player type.

# RefinedAbstraction: VideoPlayer
class VideoPlayer(MediaPlayer):

def play_media(self):
print("Using Video Player:")
self.media_file.play()

# RefinedAbstraction: AudioPlayer
class AudioPlayer(MediaPlayer):

def play_media(self):
print("Using Audio Player:")
self.media_file.play()

Concrete Implementations: We have MP3File , MP4File , and WAVFile as concrete
implementations. They provide the actual logic to play their respective file formats.

# ConcreteImplementor: MP3File
class MP3File(MediaFile):

def play(self):
print("Playing MP3 file...")

# ConcreteImplementor: MP4File
class MP4File(MediaFile):

def play(self):
print("Playing MP4 video...")

# ConcreteImplementor: WAVFile
class WAVFile(MediaFile):

    def play(self):
        print("Playing WAV audio...")

The code allows both the media players and the media file formats to vary independently. For
instance, you can introduce a new type of player or a new file format without altering the existing
classes.

📌 Sharing of Implementation: The MediaPlayer abstraction maintains a reference to the
MediaFile (implementation). This means that any refined abstraction (like VideoPlayer or
AudioPlayer ) can work with any media file format without having the file format logic embedded
within them. This promotes sharing of the implementation.

📌 Separate Interface for Different Objects: - The MediaFile class provides a separate
interface for media file formats with a method like play .

The MediaPlayer class provides a separate interface for media players with a method like
play_media .

# Implementor: MediaFile
class MediaFile(ABC):

@abstractmethod
def play(self):
pass

# Abstraction: MediaPlayer
class MediaPlayer:

    def __init__(self, media_file: MediaFile):
        self.media_file = media_file

def play_media(self):
pass

This ensures that the responsibilities and operations of media players and media file formats are
clearly separated and can evolve independently.

📌 Extensibility: The design is extensible in both the abstraction and implementation
dimensions: - If you want to introduce a new type of player, you can simply create a new class
extending MediaPlayer . - If you want to introduce a new media file format, you can create a new
class implementing the MediaFile interface.

In both cases, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.

📌 Composition Over Inheritance: Instead of inheriting media file behavior in the media player
classes, the behavior is composed using a reference to the MediaFile object. This promotes
flexibility and reduces the complexity that can arise from deep inheritance hierarchies.

📌 Real-world Analogy: The analogy of media players and file formats is a fitting real-world
example. In actual multimedia systems, a player doesn't need to know the specifics of every file
format. Instead, it communicates through a standardized interface (like our MediaFile ), and the
specifics are handled by the file format implementations. This allows new file formats to be added
without changing the player, and new player versions can be released without changing the file
formats.

In summary, the provided code effectively demonstrates the Bridge Design Pattern by decoupling
abstractions ( MediaPlayer and its variants) from their implementations ( MediaFile and its
variants), allowing them to vary independently, promoting composition over inheritance, and
ensuring extensibility in both dimensions.

Example 6 - Real-life use case of the Bridge Pattern in Python: the world of graphics and rendering engines
Imagine we're designing a graphics system where we have different shapes (like Circle, Rectangle)
and different rendering engines (like OpenGL, DirectX). The Bridge pattern can be used to
decouple the shape type from the rendering engine.

Here's how we can model this scenario using the Bridge Design Pattern:

from abc import ABC, abstractmethod

# Implementor: Renderer
class Renderer(ABC):

@abstractmethod
def render(self, shape_name: str):
pass

# ConcreteImplementor: OpenGLRenderer
class OpenGLRenderer(Renderer):

    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using OpenGL...")

# ConcreteImplementor: DirectXRenderer
class DirectXRenderer(Renderer):

    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using DirectX...")

# Abstraction: Shape
class Shape(ABC):

    def __init__(self, renderer: Renderer):
        self.renderer = renderer

@abstractmethod
def draw(self):
pass

# RefinedAbstraction: Circle
class Circle(Shape):

    def draw(self):
        self.renderer.render("Circle")

# RefinedAbstraction: Rectangle
class Rectangle(Shape):

def draw(self):
self.renderer.render("Rectangle")

# Client Code
opengl = OpenGLRenderer()
circle = Circle(opengl)
circle.draw()

directx = DirectXRenderer()
rectangle = Rectangle(directx)
rectangle.draw()

circle2 = Circle(directx)
circle2.draw()

📌 Description of the Example Code:

1. Renderer is the abstract interface (Implementor) that all rendering engines must adhere to. It has a method render to render a given shape.
2. OpenGLRenderer and DirectXRenderer are concrete implementations (ConcreteImplementors) of the Renderer interface. They provide the actual logic to render shapes using their respective graphics engines.
3. Shape is the abstraction that communicates with the rendering engines. It provides a method draw to draw the shape using a specific renderer.
4. Circle and Rectangle are refined abstractions that extend the base Shape abstraction. They provide specialized ways to draw themselves.
5. In the client code, we create instances of different rendering engines. We then use different shapes to draw themselves using these engines.

📌 This design allows the shape to be drawn using any rendering engine without knowing the
specifics of the rendering method. If a new rendering engine or a new shape is introduced, the
existing classes don't need to change. This decoupling is the essence of the Bridge Design Pattern.

📌 Complexity: This design can be further expanded by adding features like shading, coloring, or
transformations to the shapes and rendering engines. We can also introduce more refined
abstractions like Triangle or Polygon and more rendering engines like Vulkan . The Bridge
pattern will ensure that the design remains modular and maintainable as it grows.
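One practical payoff of this decoupling is testability: because any object honoring the Renderer interface will do, a shape can be exercised without a real graphics engine. Below is a hedged sketch where RecordingRenderer is a hypothetical test double invented for illustration:

```python
from abc import ABC, abstractmethod

class Renderer(ABC):
    @abstractmethod
    def render(self, shape_name: str): ...

# Test double: records render calls instead of talking to a real engine.
class RecordingRenderer(Renderer):
    def __init__(self):
        self.calls = []

    def render(self, shape_name: str):
        self.calls.append(shape_name)

class Shape(ABC):
    def __init__(self, renderer: Renderer):
        self.renderer = renderer

    @abstractmethod
    def draw(self): ...

class Circle(Shape):
    def draw(self):
        self.renderer.render("Circle")

stub = RecordingRenderer()
Circle(stub).draw()
Circle(stub).draw()
print(stub.calls)   # the shape requested two "Circle" renders
```

The same trick lets a CI pipeline verify drawing logic on machines with no GPU at all.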

For Example 6 above, let's dissect the graphics system example to understand its alignment with the principles and requirements of the Bridge Design Pattern:
📌 Decoupling Abstraction from Implementation: - Abstraction: The Shape class is our
abstraction. It provides a high-level interface ( draw ) for all shapes to be rendered.

# Abstraction: Shape
class Shape(ABC):

    def __init__(self, renderer: Renderer):
        self.renderer = renderer

@abstractmethod
def draw(self):
pass

Implementation: The Renderer class represents the implementation. It provides a basic
interface for all rendering engines with methods like render .

# Implementor: Renderer
class Renderer(ABC):

@abstractmethod
def render(self, shape_name: str):
pass

In the code, the Shape doesn't directly handle the specifics of rendering. Instead, it delegates this
responsibility to the Renderer . This ensures that the shape's drawing operations are decoupled
from the specifics of the rendering operations.

📌 Variation in Abstractions and Implementations:

Refined Abstractions: We have Circle and Rectangle as refined abstractions. They extend the base abstraction ( Shape ) and provide specialized ways to draw themselves.

# RefinedAbstraction: Circle
class Circle(Shape):

def draw(self):
self.renderer.render("Circle")

# RefinedAbstraction: Rectangle
class Rectangle(Shape):

def draw(self):
self.renderer.render("Rectangle")

Concrete Implementations: We have OpenGLRenderer and DirectXRenderer as concrete
implementations. They provide the actual logic to render shapes using their respective
graphics engines.
# ConcreteImplementor: OpenGLRenderer
class OpenGLRenderer(Renderer):

    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using OpenGL...")

# ConcreteImplementor: DirectXRenderer
class DirectXRenderer(Renderer):

    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using DirectX...")

The design allows both the shapes and the rendering engines to evolve independently. For
instance, if we want to introduce a new type of shape or a new rendering engine, we can do so
without altering the existing classes.

📌 Sharing of Implementation: The Shape abstraction maintains a reference to the Renderer
(implementation). This means that any refined abstraction (like Circle or Rectangle ) can work
with any rendering engine without having the rendering logic embedded within them. This
promotes the sharing of the implementation.

# Implementor: Renderer
class Renderer(ABC):

@abstractmethod
def render(self, shape_name: str):
pass

# RefinedAbstraction: Rectangle
class Rectangle(Shape):

def draw(self):
self.renderer.render("Rectangle")

📌 Separate Interface for Different Objects: - The Renderer class provides a distinct interface
for rendering engines with methods like render . - The Shape class offers a separate interface for
shapes with methods like draw .

This ensures that the responsibilities and operations of shapes and rendering engines are clearly
separated, allowing each to evolve independently.

📌 Extensibility: The design is extensible in both the abstraction and implementation
dimensions: - If you want to introduce a new type of shape, you can simply create a new class
extending Shape . - If you want to introduce a new rendering engine, you can create a new class
implementing the Renderer interface.

In both scenarios, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.

📌 Composition Over Inheritance: Rather than inheriting rendering behavior in the shape
classes, the behavior is composed using a reference to the Renderer object. This promotes
flexibility and reduces the complexity that can arise from deep inheritance hierarchies.

📌 Real-world Analogy: In real-world graphics systems, a shape doesn't need to know the
specifics of every rendering engine. Instead, it communicates through a standardized interface
(like our Renderer ), and the specifics are handled by the rendering engine implementations. This
allows new rendering engines to be added without changing the shape, and new shape types can
be introduced without changing the rendering engines.

In conclusion, the provided code effectively embodies the Bridge Design Pattern by decoupling
abstractions ( Shape and its variants) from their implementations ( Renderer and its variants),
allowing them to vary independently, promoting composition over inheritance, and ensuring
extensibility in both dimensions.

🐍🚀 Mediator Design pattern in Python 🐍🚀

It's a behavioral design pattern that lets you reduce chaotic dependencies between objects. The
pattern restricts direct communications between the objects and forces them to collaborate only
via a mediator object.

📌 At its core, the Mediator Design Pattern is about promoting loose coupling between objects.
The mediator knows about each colleague object and facilitates the interaction between them.

📌 Why Use the Mediator Pattern?: - It simplifies object communication. Objects no longer need
to know the details of communication with other objects. - It centralizes external communications.
If you need to change the way objects talk to each other, you only have to update the mediator. - It
promotes a single responsibility principle. Objects focus on their own logic, while the mediator
takes care of communication logic.

📌 Real-life Analogy: Think of an air traffic control tower at an airport. Planes don't communicate
directly with each other to decide who lands first or which runway to use. Instead, they
communicate with the control tower, which ensures that planes land and take off without any
incidents.

📌 Use Cases: - Chat applications where the server (mediator) handles messages and broadcasts
them to clients. - GUI where buttons and actions are coordinated through a central controller. -
Workflow systems where tasks are orchestrated through a central process.
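As a miniature of the GUI use case (Dialog, Checkbox, and Button are invented names for illustration, not a real toolkit's API): the dialog is the mediator, so the checkbox enables the button without ever referencing it directly.

```python
class Dialog:
    """Mediator: the only object that knows about both widgets."""
    def __init__(self):
        self.checkbox = Checkbox(self)
        self.button = Button(self)

    def notify(self, sender: str, event: str):
        # All coordination logic lives here, in one place.
        if sender == "checkbox" and event == "toggled":
            self.button.enabled = self.checkbox.checked

class Checkbox:
    def __init__(self, mediator):
        self.mediator = mediator
        self.checked = False

    def toggle(self):
        self.checked = not self.checked
        self.mediator.notify("checkbox", "toggled")  # tell the mediator, not the button

class Button:
    def __init__(self, mediator):
        self.mediator = mediator
        self.enabled = False

dialog = Dialog()
dialog.checkbox.toggle()
print(dialog.button.enabled)   # the mediator enabled the button
```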

Let's dive into the Mediator Design Pattern in Python.

📌 First, let's look at a code example that does not implement the Mediator Design Pattern, leading to some negative issues:

class Alarm:
def start(self):
print("Alarm is set!")
coffee_machine = CoffeeMachine()
coffee_machine.start()

class CoffeeMachine:
def start(self):
print("Coffee is brewing!")
curtains = Curtains()
curtains.open()

class Curtains:
def open(self):
print("Curtains are opened!")

When you run:

alarm = Alarm()
alarm.start()

You'll get:

Alarm is set!
Coffee is brewing!
Curtains are opened!

📌 Issues with this code:

Tight Coupling: Each class is directly dependent on another class. This makes the system
rigid, less reusable, and harder to maintain.

Violation of Single Responsibility Principle: Each class is doing more than one thing. For
instance, the Alarm class is also responsible for starting the coffee machine.

📌 Now, let's refactor the code using the Mediator Design Pattern:

class Mediator:
    def __init__(self):
        self.alarm = Alarm(self)
        self.coffee_machine = CoffeeMachine(self)
        self.curtains = Curtains(self)

    # Coordination logic lives in the mediator, not in the colleagues.
    def trigger_alarm(self):
        self.alarm.start()

    def brew_coffee(self):
        self.coffee_machine.start()

    def open_curtains(self):
        self.curtains.open()

class Alarm:
    def __init__(self, mediator):
        self.mediator = mediator

    def start(self):
        print("Alarm is set!")
        self.mediator.brew_coffee()

class CoffeeMachine:
    def __init__(self, mediator):
        self.mediator = mediator

    def start(self):
        print("Coffee is brewing!")
        self.mediator.open_curtains()

class Curtains:
    def __init__(self, mediator):
        self.mediator = mediator

    def open(self):
        print("Curtains are opened!")

When you run:

mediator = Mediator()
mediator.trigger_alarm()

You'll get the same output:

Alarm is set!
Coffee is brewing!
Curtains are opened!

📌 Benefits of this refactored code: - Loose Coupling: The classes are now decoupled from each
other. They only communicate through the mediator, making the system more flexible and
maintainable. - Single Responsibility Principle: Each class now has a single responsibility. The
Alarm class, for instance, only deals with the alarm and not the coffee machine.

📌 Under-the-hood:

The Mediator Design Pattern promotes the use of a single object to handle communication
between different classes. This central object is known as the mediator. By doing so, it decouples
the classes, leading to a system where components are easier to understand, maintain, and
extend. The mediator becomes the only component that needs to know about the internals of
other classes, ensuring that changes in one class don't affect others. This pattern is particularly
useful in scenarios where multiple classes are interdependent, leading to a complex web of
relationships. By centralizing external communications, the Mediator pattern prevents this web of
relationships, making the system more modular.

Let's delve into the details of how the refactored code using
the Mediator Design Pattern addresses the issues present in
the original code.
📌 Issue 1: Tight Coupling
In the original code, each class was directly dependent on another class. For instance: - The Alarm
class directly instantiated and called a method on the CoffeeMachine class. - The CoffeeMachine
class directly instantiated and called a method on the Curtains class.

This tight coupling means that if you wanted to change the behavior of one class, it could
potentially affect the others. For example, if you wanted the CoffeeMachine to not interact with
the Curtains , you'd have to modify the CoffeeMachine class directly, which is not ideal.

📌 Solution with Mediator Pattern:

In the refactored code, the Mediator class acts as the central authority that manages the
interactions between the different classes. The individual classes ( Alarm , CoffeeMachine , and
Curtains ) no longer directly communicate with each other. Instead, they communicate through
the mediator.

For instance: - When the Alarm is started, it informs the mediator to brew coffee. It doesn't need
to know about the CoffeeMachine class directly. - The CoffeeMachine , when started, informs the
mediator to open the curtains. Again, it doesn't directly know about the Curtains class.

This decouples the classes from each other, making the system more flexible. If you now wanted
to change the behavior of the CoffeeMachine , you'd only need to modify the mediator's behavior,
leaving the CoffeeMachine class untouched.
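The structure described above can be sketched as follows. Note that the exact class and method names here (MorningMediator, notify, the event strings) are assumptions made for illustration; they follow the morning-routine example discussed in this section, not necessarily the original listing verbatim:

```python
class Alarm:
    def __init__(self, mediator):
        self.mediator = mediator

    def start(self):
        print("Alarm ringing.")
        # Report the event; the mediator decides what happens next.
        self.mediator.notify(self, "alarm_started")

class CoffeeMachine:
    def __init__(self, mediator):
        self.mediator = mediator

    def brew(self):
        print("Brewing coffee.")
        self.mediator.notify(self, "coffee_brewed")

class Curtains:
    def __init__(self, mediator):
        self.mediator = mediator

    def open(self):
        print("Curtains opened.")

class MorningMediator:
    """The only class that knows the order of the morning routine."""
    def __init__(self):
        self.alarm = Alarm(self)
        self.coffee_machine = CoffeeMachine(self)
        self.curtains = Curtains(self)

    def notify(self, sender, event):
        if sender is self.alarm and event == "alarm_started":
            self.coffee_machine.brew()
        elif sender is self.coffee_machine and event == "coffee_brewed":
            self.curtains.open()

mediator = MorningMediator()
mediator.alarm.start()
```

Running this prints "Alarm ringing.", "Brewing coffee.", and "Curtains opened." in sequence, with no component ever referencing another component directly.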

📌 Issue 2: Violation of Single Responsibility Principle
In the original code, each class was doing more than its primary responsibility:
- The Alarm class was not only setting the alarm but also starting the coffee machine.
- The CoffeeMachine class was brewing coffee and also opening the curtains.

This mixing of responsibilities makes the code harder to maintain and understand.

📌 Solution with Mediator Pattern:
With the mediator in place, each class now sticks to its primary responsibility:
- The Alarm class only deals with setting the alarm.
- The CoffeeMachine class only deals with brewing coffee.
- The Curtains class only deals with opening the curtains.

The responsibility of coordinating these actions is now with the mediator. This ensures that each
class adheres to the Single Responsibility Principle, making the system more modular and easier
to maintain.

📌 Conclusion:
The Mediator Design Pattern effectively addresses the issues of tight coupling and violation of the
Single Responsibility Principle by centralizing the communication between classes. This not only
makes the system more maintainable and flexible but also ensures that each class remains
focused on its primary responsibility.

📌 Another Real-life Use-case Code:


Let's consider a chat room application where users send messages to a chat room, and the chat
room broadcasts the message to all users.

class ChatRoomMediator:
    def __init__(self):
        self.users = []

    def add_user(self, user):
        self.users.append(user)

    def send_message(self, message, sender):
        # Broadcast to every user except the sender
        for user in self.users:
            if user != sender:
                user.receive(message)

class User:
    def __init__(self, name, chat_mediator):
        self.name = name
        self.chat_mediator = chat_mediator
        chat_mediator.add_user(self)

    def send(self, message):
        print(f"{self.name} sends: {message}")
        self.chat_mediator.send_message(message, self)

    def receive(self, message):
        print(f"{self.name} received: {message}")

# Usage
mediator = ChatRoomMediator()
john = User("John", mediator)
alice = User("Alice", mediator)

john.send("Hello, Alice!")
alice.send("Hey, John!")

📌 Code Explanation:
- ChatRoomMediator is the mediator class. It keeps track of all users in the chat room and handles message broadcasting.
- The User class represents individual users. Each user has a reference to the mediator to send messages.
- When a user sends a message, it's passed to the mediator. The mediator then broadcasts the message to all other users.
- In the usage example, John sends a message to Alice through the mediator, and Alice responds back. The mediator ensures that the message is delivered without John and Alice communicating directly.

📌 Under-the-hood: When you look at the Mediator pattern, it might seem like an overhead. Why
introduce an additional layer when objects can communicate directly? But the beauty lies in the
separation of concerns. The mediator encapsulates the communication logic, making the system
more modular. If you need to change the way communication happens, you only touch the
mediator, not every individual object. This is in line with the Open/Closed Principle: software
entities should be open for extension but closed for modification.

📌 Final Thoughts: The Mediator pattern is powerful when you have a set of objects that need to
communicate in complex ways, but you want to avoid tight coupling. By centralizing external
communications, you make the system more maintainable and adaptable to change.

Let's see another example WITH and then WITHOUT the "Mediator Design Pattern in Python"
📌 Let's tackle this with a scenario often seen in software development: a GUI application that
allows multiple widgets to interact. The example I'm about to present will involve a Button , a
TextBox , and a Label :

Scenario:
A GUI has a TextBox where users type a message, a Button they click to submit the message,
and a Label that displays the most recent message. Without the Mediator, each widget would
need to know about the others, leading to tight coupling.

1. Without the Mediator Pattern:

class Button:
    def __init__(self, text_box, label):
        self.text_box = text_box
        self.label = label

    def click(self):
        self.label.display(self.text_box.text)

class TextBox:
    def __init__(self):
        self.text = ""

    def set_text(self, text):
        self.text = text

class Label:
    def __init__(self):
        self.displayed_text = ""

    def display(self, text):
        self.displayed_text = text
        print(f"Label: {self.displayed_text}")

# Usage:
textbox = TextBox()
label = Label()
button = Button(textbox, label)
textbox.set_text("Hello from the GUI!")
button.click()  # Outputs: "Label: Hello from the GUI!"

📌 Issues:
- Button is tightly coupled with both TextBox and Label.
- If we introduce another widget, say a LogBox to log all messages, then Button will need to change.
- Widgets are not focused on their single responsibility; they are also aware of other widgets' behaviors.

2. Implementing the Mediator Pattern:

from abc import ABC, abstractmethod

# Define the mediator interface
class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event):
        pass

# Concrete Mediator
class GuiMediator(Mediator):
    def __init__(self, text_box, button, label):
        self.text_box = text_box
        self.button = button
        self.label = label

        self.text_box.mediator = self
        self.button.mediator = self
        self.label.mediator = self

    def notify(self, sender, event):
        if sender == self.button and event == "click":
            self.label.display(self.text_box.text)

class Button:
    def __init__(self):
        self.mediator = None

    def click(self):
        self.mediator.notify(self, "click")

class TextBox:
    def __init__(self):
        self.text = ""
        self.mediator = None

    def set_text(self, text):
        self.text = text

class Label:
    def __init__(self):
        self.displayed_text = ""
        self.mediator = None

    def display(self, text):
        self.displayed_text = text
        print(f"Label: {self.displayed_text}")

# Usage:
textbox = TextBox()
button = Button()
label = Label()
mediator = GuiMediator(textbox, button, label)
textbox.set_text("Hello using Mediator!")
button.click()  # Outputs: "Label: Hello using Mediator!"

📌 Benefits: The Mediator Pattern works as a centralized communication hub for various objects, ensuring that these objects do not communicate with each other directly. Let's analyze how this refactored code solves the initial problems:

1. Decoupling of Components:

In the original code, the Button was directly coupled with both the TextBox and
Label . This means the Button had to be aware of the implementations of both these
classes, and any changes to them would potentially affect the Button .

With the Mediator pattern in place, the Button , TextBox , and Label no longer
communicate with each other directly. Instead, they are aware of the mediator, and they
notify the mediator when something of interest happens. This drastically reduces the
dependencies between the individual components.

In our refactored code, the Button only notifies the mediator about its click event. It
doesn't need to know what happens next or which components need to be informed.

2. Ease of Extensibility:

Consider we want to introduce a new widget, say a LogBox , to log every message
submitted. In the non-mediator design, this would necessitate changes in the Button
class (or wherever the central logic of handling a button click resides).

With the mediator in place, introducing such a widget would only involve changes within
the mediator, making the system more maintainable. Components remain unchanged,
preserving the Open/Closed Principle (i.e., a module should be open for extension but
closed for modification).
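To make this concrete, here is a hedged sketch of adding a hypothetical LogBox widget by touching only the mediator. The widget classes below are condensed repeats of the refactored GUI example so the snippet runs on its own; LogBox and GuiMediatorWithLog are names invented for this illustration:

```python
from abc import ABC, abstractmethod

class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event):
        pass

class Button:
    def __init__(self):
        self.mediator = None

    def click(self):
        self.mediator.notify(self, "click")

class TextBox:
    def __init__(self):
        self.text = ""
        self.mediator = None

    def set_text(self, text):
        self.text = text

class Label:
    def __init__(self):
        self.displayed_text = ""
        self.mediator = None

    def display(self, text):
        self.displayed_text = text
        print(f"Label: {self.displayed_text}")

class LogBox:
    """Hypothetical new widget: records every submitted message."""
    def __init__(self):
        self.messages = []
        self.mediator = None

    def log(self, text):
        self.messages.append(text)

class GuiMediatorWithLog(Mediator):
    """Only the mediator knows about the new widget;
    Button, TextBox and Label stay untouched."""
    def __init__(self, text_box, button, label, log_box):
        self.text_box = text_box
        self.button = button
        self.label = label
        self.log_box = log_box
        for widget in (text_box, button, label, log_box):
            widget.mediator = self

    def notify(self, sender, event):
        if sender is self.button and event == "click":
            self.label.display(self.text_box.text)
            self.log_box.log(self.text_box.text)

# Usage:
textbox, button, label, logbox = TextBox(), Button(), Label(), LogBox()
mediator = GuiMediatorWithLog(textbox, button, label, logbox)
textbox.set_text("Hello with logging!")
button.click()
print(logbox.messages)  # ['Hello with logging!']
```

The click still flows Button -> mediator -> Label, and the logging behavior was added without modifying a single widget class.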

3. Single Responsibility Principle (SRP):

In the original design, the Button had multiple responsibilities. It had its own behavior
(being clicked) and also controlled how other objects (like Label ) responded.

The refactored code respects SRP more closely. Each component is responsible for its
own behavior. The logic of how components interact or how events are handled is
moved to the mediator. This separation ensures that if there's a bug in the interaction
logic, you'd look into the mediator, and if there's an issue with how a button works,
you'd inspect the Button class.

4. Centralized External Communication:

With the mediator in place, all external communications (inter-component


communications) are centralized. This has two key benefits:
36
1. If you need to change the way objects talk to each other or introduce new
communication protocols or logic, you only have to modify the mediator. The
individual components remain unaffected.

2. Debugging becomes easier. Since all interactions pass through the mediator,
logging or monitoring inter-component communications can be done at a single
place.
5. Flexibility in Event Handling:

In the initial design, the response to an event (like the button click) was hardcoded. In
the refactored design, the mediator can be programmed to handle various events in
flexible ways. For instance, it might decide to not update the Label if the TextBox is
empty, without requiring any changes to either the Button or the Label .

📌 In conclusion, the Mediator pattern in the refactored code promotes a more maintainable,
scalable, and decoupled design. It offers a clear separation of concerns and encapsulates the
interaction logic between components within the mediator, making it easier to modify, extend,
and manage, especially in large-scale software systems.

Let's see yet another example WITH and then WITHOUT the
"Mediator Design Pattern in Python"
📌 Let's consider an e-commerce application where different components, like the Cart , User ,
and Inventory , interact with each other:

Scenario:
When a User adds an item to the Cart , the Inventory should be updated to reflect that the
item count has decreased. Similarly, when the User removes an item from the Cart , the
Inventory should update to increase the item count. Without the Mediator, the User would
need to know about both the Cart and the Inventory .

1. Without the Mediator Pattern:

class Cart:
    def __init__(self, inventory):
        self.items = {}
        self.inventory = inventory

    def add_item(self, item_name, count):
        self.items[item_name] = self.items.get(item_name, 0) + count
        self.inventory.decrement_item(item_name, count)

    def remove_item(self, item_name, count):
        if item_name in self.items and self.items[item_name] >= count:
            self.items[item_name] -= count
            if self.items[item_name] == 0:
                del self.items[item_name]
            self.inventory.increment_item(item_name, count)

class Inventory:
    def __init__(self):
        self.items_count = {}

    def set_item_count(self, item_name, count):
        self.items_count[item_name] = count

    def decrement_item(self, item_name, count):
        if item_name in self.items_count:
            self.items_count[item_name] -= count

    def increment_item(self, item_name, count):
        if item_name in self.items_count:
            self.items_count[item_name] += count

class User:
    def __init__(self, cart):
        self.cart = cart

    def purchase_item(self, item_name, count):
        self.cart.add_item(item_name, count)

    def return_item(self, item_name, count):
        self.cart.remove_item(item_name, count)

# Usage:
inventory = Inventory()
inventory.set_item_count("book", 10)
cart = Cart(inventory)
user = User(cart)
user.purchase_item("book", 2)
print(inventory.items_count)  # Outputs: {"book": 8}
user.return_item("book", 1)
print(inventory.items_count)  # Outputs: {"book": 9}

📌 Issues:
- User and Cart are both tightly coupled to Inventory. Any changes in how Inventory operates might necessitate changes in User and Cart.
- Extensibility is a concern. Suppose we want to introduce features like promotional offers or logging. Integrating these would require modifications in multiple places.
- There's a lack of a clear separation of concerns. If we want to change how items are added to the cart or how inventory updates, we'd have to dig into both Cart and Inventory.

2. Implementing the Mediator Pattern:

from abc import ABC, abstractmethod

# Define the mediator interface
class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event, *args):
        pass

# Concrete Mediator
class ECommerceMediator(Mediator):
    def __init__(self, cart, user, inventory):
        self.cart = cart
        self.user = user
        self.inventory = inventory

        self.cart.mediator = self
        self.user.mediator = self
        self.inventory.mediator = self

    def notify(self, sender, event, *args):
        if sender == self.user:
            if event == "purchase":
                self.cart.add_item(*args)
                self.inventory.decrement_item(*args)
            elif event == "return":
                self.cart.remove_item(*args)
                self.inventory.increment_item(*args)

# Remaining classes are similar but with the mediator logic:
# ... (For brevity, just showing key modifications) ...

class Cart:
    def __init__(self):
        self.items = {}
        self.mediator = None
    # Rest remains the same but without direct calls to Inventory

class User:
    def __init__(self):
        self.mediator = None

    def purchase_item(self, item_name, count):
        self.mediator.notify(self, "purchase", item_name, count)
    # ... and similarly for return_item ...

# Usage remains similar.

📌 By introducing the mediator:
- We've eliminated direct dependencies between User, Cart, and Inventory.
- Changes to the behavior of any component would be localized, improving maintainability.
- Introducing new features or changing business rules is centralized in the mediator, leading to cleaner, more manageable code.
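Since the snippet above elides several method bodies for brevity, here is one way to fill them in as a complete, runnable sketch. The names follow the snippet; the restored bodies are assumptions consistent with the non-mediator version:

```python
from abc import ABC, abstractmethod

class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event, *args):
        pass

class Cart:
    def __init__(self):
        self.items = {}
        self.mediator = None

    def add_item(self, item_name, count):
        self.items[item_name] = self.items.get(item_name, 0) + count

    def remove_item(self, item_name, count):
        if self.items.get(item_name, 0) >= count:
            self.items[item_name] -= count
            if self.items[item_name] == 0:
                del self.items[item_name]

class Inventory:
    def __init__(self):
        self.items_count = {}
        self.mediator = None

    def set_item_count(self, item_name, count):
        self.items_count[item_name] = count

    def decrement_item(self, item_name, count):
        if item_name in self.items_count:
            self.items_count[item_name] -= count

    def increment_item(self, item_name, count):
        if item_name in self.items_count:
            self.items_count[item_name] += count

class User:
    def __init__(self):
        self.mediator = None

    def purchase_item(self, item_name, count):
        self.mediator.notify(self, "purchase", item_name, count)

    def return_item(self, item_name, count):
        self.mediator.notify(self, "return", item_name, count)

class ECommerceMediator(Mediator):
    def __init__(self, cart, user, inventory):
        self.cart, self.user, self.inventory = cart, user, inventory
        cart.mediator = user.mediator = inventory.mediator = self

    def notify(self, sender, event, *args):
        # All purchase/return orchestration lives here
        if sender is self.user:
            if event == "purchase":
                self.cart.add_item(*args)
                self.inventory.decrement_item(*args)
            elif event == "return":
                self.cart.remove_item(*args)
                self.inventory.increment_item(*args)

# Usage
inventory = Inventory()
inventory.set_item_count("book", 10)
cart, user = Cart(), User()
mediator = ECommerceMediator(cart, user, inventory)
user.purchase_item("book", 2)
print(inventory.items_count)  # {'book': 8}
user.return_item("book", 1)
print(inventory.items_count)  # {'book': 9}
```

Notice that Cart no longer touches Inventory at all; the mediator pairs the cart update with the matching inventory update.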

Let's actually analyze in detail the benefits of the refactored code after introducing the mediator
📌 Tight Coupling: Original Issue: The User and Cart classes were tightly coupled with the
Inventory . This means that any change in the Inventory class's method or properties might
necessitate changes in both the User and Cart .

Solution with Mediator: With the introduction of the ECommerceMediator , the individual
components ( User , Cart , Inventory ) don't communicate directly with each other. Instead, they
communicate via the mediator. This reduces the tight coupling between them. If, for instance, the
mechanism for updating the inventory changes, the mediator will be the only component needing adjustment, thus isolating the change and preventing ripple effects through other parts of the code.

📌 Extensibility Concern: Original Issue: The direct dependencies between classes made it hard
to introduce new features. For example, integrating a new feature like promotional offers would
require modifications in multiple classes.

Solution with Mediator: The mediator allows for a more centralized approach to handle
communications. If a new feature needs to be added, like handling promotional offers when a
user purchases an item, the logic can be added primarily within the mediator, without heavily
modifying the existing classes.

📌 Separation of Concerns: Original Issue: The initial design did not clearly separate concerns.
The responsibility of updating the inventory was mixed into both the Cart and Inventory .

Solution with Mediator: By introducing the mediator, responsibilities became clearer. The Cart
is now mainly concerned with managing items within itself, the Inventory deals with stock
counts, and the mediator ensures that actions in one component lead to appropriate reactions in
others. The mediator takes on the responsibility of orchestrating the interactions, thus providing a
clear separation of concerns.

📌 Centralization of Logic: Original Issue: The communication logic was spread out. When a
user wanted to purchase an item, the logic to add the item to the cart and update the inventory
was in multiple places.

Solution with Mediator: All communication logic is centralized in the mediator. When a user
wishes to purchase an item, they notify the mediator, which then instructs both the Cart and
Inventory on what actions to take. This not only makes the flow of logic clearer but also
simplifies potential future changes.

📌 Flexibility: Original Issue: If we needed to change how components interacted, we'd likely
need to modify multiple classes.

Solution with Mediator: With the mediator in place, changes to interactions primarily occur
within the mediator class. For instance, if we wanted to introduce a logging mechanism every time
the inventory changed, we'd implement it within the mediator, without needing to touch the
actual Inventory class.

In summary, the mediator pattern provides a way to reduce dependencies between classes,
ensuring that each class adheres to the single responsibility principle, thus making the system
more maintainable and flexible. It offers a clean and centralized way to manage interactions,
making the system easier to understand and extend.

Let's see yet another example WITH and then WITHOUT the
"Mediator Design Pattern in Python"
📌 Scenario: Consider a Home Automation System where various devices (components) like
lights, thermostats, and security systems need to communicate with each other. Without a
mediator, every device would have direct references to other devices, leading to a tangled web of
dependencies.

Without the Mediator Pattern:

class Light:
    def __init__(self):
        self.is_on = False

    def turn_on(self):
        self.is_on = True
        print("Light turned on.")

    def turn_off(self):
        self.is_on = False
        print("Light turned off.")

class Thermostat:
    def __init__(self):
        self.temperature = 20  # default temperature in Celsius

    def increase_temperature(self, value):
        self.temperature += value
        print(f"Temperature set to {self.temperature}°C")
        if self.temperature > 25:
            light.turn_off()  # Direct reference to a light instance

    def decrease_temperature(self, value):
        self.temperature -= value
        print(f"Temperature set to {self.temperature}°C")

class SecuritySystem:
    def __init__(self):
        self.is_armed = False

    def arm(self):
        self.is_armed = True
        light.turn_off()  # Direct reference to a light instance
        thermostat.increase_temperature(5)  # Direct reference to a thermostat instance
        print("Security system armed.")

    def disarm(self):
        self.is_armed = False
        light.turn_on()  # Direct reference to a light instance
        thermostat.decrease_temperature(5)  # Direct reference to a thermostat instance
        print("Security system disarmed.")

light = Light()
thermostat = Thermostat()
security_system = SecuritySystem()

Refactoring using the Mediator Pattern:

class HomeMediator:
    def __init__(self):
        self.light = Light(self)
        self.thermostat = Thermostat(self)
        self.security_system = SecuritySystem(self)

    def turn_on_light(self):
        self.light.turn_on()

    def turn_off_light(self):
        self.light.turn_off()

    def increase_temperature(self, value):
        self.thermostat.increase_temperature(value)
        if self.thermostat.temperature > 25:
            self.turn_off_light()

    def decrease_temperature(self, value):
        self.thermostat.decrease_temperature(value)

    def arm_security(self):
        self.security_system.arm()

    def disarm_security(self):
        self.security_system.disarm()

class Light:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_on = False

    def turn_on(self):
        self.is_on = True
        print("Light turned on.")

    def turn_off(self):
        self.is_on = False
        print("Light turned off.")

class Thermostat:
    def __init__(self, mediator):
        self.mediator = mediator
        self.temperature = 20

    def increase_temperature(self, value):
        self.temperature += value
        print(f"Temperature set to {self.temperature}°C")

    def decrease_temperature(self, value):
        self.temperature -= value
        print(f"Temperature set to {self.temperature}°C")

class SecuritySystem:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_armed = False

    def arm(self):
        self.is_armed = True
        self.mediator.turn_off_light()
        self.mediator.increase_temperature(5)
        print("Security system armed.")

    def disarm(self):
        self.is_armed = False
        self.mediator.turn_on_light()
        self.mediator.decrease_temperature(5)
        print("Security system disarmed.")

home_mediator = HomeMediator()

📌 Note: By using the Mediator pattern, the classes Light , Thermostat , and SecuritySystem
are decoupled from each other. Each device communicates through the HomeMediator , which
centralizes the interactions.
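A short usage run makes the flow visible. The class bodies below are condensed repeats of the refactored code above (prints omitted for brevity) so the snippet stands on its own:

```python
class Light:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_on = False

    def turn_on(self):
        self.is_on = True

    def turn_off(self):
        self.is_on = False

class Thermostat:
    def __init__(self, mediator):
        self.mediator = mediator
        self.temperature = 20

    def increase_temperature(self, value):
        self.temperature += value

    def decrease_temperature(self, value):
        self.temperature -= value

class SecuritySystem:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_armed = False

    def arm(self):
        self.is_armed = True
        self.mediator.turn_off_light()
        self.mediator.increase_temperature(5)

    def disarm(self):
        self.is_armed = False
        self.mediator.turn_on_light()
        self.mediator.decrease_temperature(5)

class HomeMediator:
    def __init__(self):
        self.light = Light(self)
        self.thermostat = Thermostat(self)
        self.security_system = SecuritySystem(self)

    def turn_on_light(self):
        self.light.turn_on()

    def turn_off_light(self):
        self.light.turn_off()

    def increase_temperature(self, value):
        self.thermostat.increase_temperature(value)
        if self.thermostat.temperature > 25:
            self.turn_off_light()

    def decrease_temperature(self, value):
        self.thermostat.decrease_temperature(value)

home = HomeMediator()
home.security_system.arm()     # light off, thermostat raised to 25°C
print(home.light.is_on, home.thermostat.temperature)   # False 25
home.security_system.disarm()  # light back on, thermostat back to 20°C
print(home.light.is_on, home.thermostat.temperature)   # True 20
```

Arming the security system changes the light and thermostat without SecuritySystem ever holding a reference to either of them.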

Let's actually analyze in detail the benefits of the refactored code after introducing the mediator
📌 Decoupling of Objects: In the initial code, each component (like Light , Thermostat , and
SecuritySystem ) had direct references to other components. This tightly couples these
components together. For instance, Thermostat had a direct dependency on Light . In the
refactored version, the mediator (i.e., HomeMediator ) acts as an intermediary. As a result, each
component only communicates with the mediator, and not with each other directly. This ensures
that the objects are decoupled, promoting a modular design where changes in one component
don't necessitate changes in others.

📌 Centralized Control: With the introduction of the mediator, all interactions among objects are
centralized within HomeMediator . If there's a need to modify how devices communicate or change
the logic of one device based on the state of another, it only needs to be updated within the
mediator, rather than scattered across multiple classes.

📌 Simplifying Maintenance: As the system evolves and grows, perhaps with the addition of
more devices and functionalities, it's significantly easier to manage and maintain this centralized
approach. For example, introducing a new device means adding it to the HomeMediator without
needing to modify existing devices. In contrast, in the non-mediator approach, adding or removing
a component could necessitate changes in several other components due to direct references.

📌 Enhanced Reusability: With the mediator pattern, each class like Light , Thermostat , or
SecuritySystem can function independently without the need for direct references to other
classes. This means that these classes are now more reusable. They can be used in another
system or context without dragging along other unrelated components.

📌 Single Responsibility Principle: Each class now focuses purely on its own functionalities
without concerning itself with the broader system's orchestration logic. This adherence to the
Single Responsibility Principle ensures that each class has one reason to change, making the
system more robust and easier to understand.

📌 Scalability: The mediator approach is scalable. As more components are added to the home
automation system, the design remains consistent. Components still only need to be aware of the
mediator. Without the mediator, the complexity would grow exponentially, as each new
component could potentially require updates in every other component it interacts with.

📌 Easier Testing: With the decoupling achieved using the Mediator pattern, it becomes easier to
write unit tests for each individual component without having to mock or set up many other
components. Each component can be tested in isolation with only the mediator being mocked.
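As a hedged sketch of what such an isolated test might look like, using the standard library's unittest.mock. The Button definition is repeated from the GUI example above so the test stands alone:

```python
from unittest.mock import Mock

# A minimal Button, repeated from the refactored GUI example,
# so this snippet is self-contained.
class Button:
    def __init__(self):
        self.mediator = None

    def click(self):
        self.mediator.notify(self, "click")

def test_button_notifies_mediator_on_click():
    button = Button()
    button.mediator = Mock()  # no real TextBox or Label needed
    button.click()
    # The only contract to verify: clicking notifies the mediator once.
    button.mediator.notify.assert_called_once_with(button, "click")

test_button_notifies_mediator_on_click()
print("Button test passed.")
```

Because the Button depends only on the mediator interface, a single Mock stands in for the entire rest of the system.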

In summary, by employing the Mediator Design Pattern, the home automation system's
architecture becomes cleaner, more maintainable, scalable, and easier to test. It effectively
resolves the issues associated with the tight coupling of components seen in the initial approach.

🐍🚀 Prototype Design Pattern in Python. 🐍🚀

📌 The Prototype Design Pattern is a creational design pattern that allows an object to create a
copy of itself. This pattern is particularly useful when the creation of an object is more costly than
copying an existing object.

The prototype pattern is useful when you need to create objects based on an existing object using
the cloning technique. As you may have guessed, the idea is to use a copy of that object's
complete structure to produce the new object. We will see that this is almost natural in Python
because the standard library's copy module helps greatly in using this technique. Note that a plain assignment only creates a new reference to the same object, and a shallow copy creates a new object that still shares the members of the original. But if you need to fully duplicate the object, which is the case with a prototype, you make a deep copy.
📌 In Python, we have two ways to copy objects:
Shallow Copy: It creates a new object, but does not create copies of the objects that the
original object references. Instead, it just copies the references.

Deep Copy: It creates a new object and also recursively copies all the objects referenced by
the original object.
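The difference between the two is easy to demonstrate with a small example:

```python
import copy

original = {"name": "Skylark", "options": ["Ex", "Sunroof"]}

shallow = copy.copy(original)   # new dict, but the inner list is shared
deep = copy.deepcopy(original)  # new dict AND a new inner list

# Mutating the original's inner list affects the shallow copy only
original["options"].append("Turbo")

print(shallow["options"])  # ['Ex', 'Sunroof', 'Turbo'] -- shares the list
print(deep["options"])     # ['Ex', 'Sunroof'] -- fully independent
```

The shallow copy's "options" list is the very same object as the original's, so it sees the mutation; the deep copy does not.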

📌 When you perform a deep copy, Python recursively copies all objects referenced by the original object. This means if your object contains references to lists, dictionaries, or other objects, a new separate instance of those will be created. This ensures that the cloned object is entirely independent of the original.

Implementation of Prototype Design Pattern:


Let's start with a simple example to demonstrate the Prototype pattern.

import copy

class Prototype:
    def __init__(self):
        self._objects = {}

    def register_object(self, name, obj):
        """Register an object."""
        self._objects[name] = obj

    def unregister_object(self, name):
        """Unregister an object."""
        del self._objects[name]

    def clone(self, name, **attrs):
        """Clone a registered object and update its attributes."""
        obj = copy.deepcopy(self._objects.get(name))
        obj.__dict__.update(attrs)
        return obj

class Car:
    def __init__(self):
        self.name = "Skylark"
        self.color = "Red"
        self.options = "Ex"

    def __str__(self):
        return f"{self.name} | {self.color} | {self.options}"

c = Car()
prototype = Prototype()
prototype.register_object("skylark", c)

c1 = prototype.clone("skylark")
print(c1)

c2 = prototype.clone("skylark", color="Blue")
print(c2)

Running the above gives the following output:

Skylark | Red | Ex
Skylark | Blue | Ex

📌 In the above code, we have a Prototype class that can register, unregister, and clone objects.
The Car class is a simple class with some attributes. We then create an instance of the Car class,
register it with the prototype, and then clone it with optional attribute modifications.

📌 The __dict__.update(attrs) method is used to update the attributes of the cloned object.
The __dict__ attribute of an object is a dictionary containing the object's writable attributes.
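As a quick standalone illustration of __dict__ and update (a toy example, separate from the Prototype code above):

```python
class Car:
    def __init__(self):
        self.name = "Skylark"
        self.color = "Red"

c = Car()
print(c.__dict__)  # {'name': 'Skylark', 'color': 'Red'}

# update() overwrites existing attributes and can even add new ones
c.__dict__.update({"color": "Blue", "options": "Ex"})
print(c.color, c.options)  # Blue Ex
```

This is exactly what the clone method relies on to apply the **attrs overrides to a freshly deep-copied object.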

📌 The copy.deepcopy method is used to create a deep copy of the registered object. This
ensures that the cloned object is independent of the original object.


Why and How the Prototype design pattern is helping here?
The Prototype class serves as a registry for objects that can be cloned. It provides methods to
register, unregister, and clone these objects. The primary goal is to abstract away the cloning
process, making it easier to manage and more systematic.

📌 How the Prototype Class Works in the above example code:


1. Object Registry:

The Prototype class maintains a private dictionary _objects that acts as a registry of
objects that can be cloned.

The register_object method allows you to register an object with a unique name,
making it available for cloning later.

The unregister_object method allows you to remove an object from the registry.

2. Cloning Mechanism:

The clone method is the heart of the Prototype class. It takes the name of the registered object you want to clone and any additional attributes you want to modify in the cloned object.

It uses copy.deepcopy to create a deep copy of the registered object. This ensures that the cloned object is entirely independent of the original.

After cloning, it updates the attributes of the cloned object using the
obj.__dict__.update(attrs) method.

📌 Why the Prototype Class is Important:


1. Abstraction:

The Prototype class abstracts the process of object cloning. Instead of manually using
copy.deepcopy every time you want to clone an object, you can use the clone method,
which also provides additional functionality like attribute updates.

2. Flexibility:

The design allows you to easily register and unregister objects, making it flexible to
manage which objects are available for cloning at any given time.

3. Consistency:

By centralizing the cloning process in the Prototype class, you ensure consistent
behavior. Every cloned object is guaranteed to be a deep copy, ensuring no unintended
side effects from shared references.

4. Efficiency:

In scenarios where object instantiation is expensive (e.g., involving database operations, complex computations, or network calls), cloning an existing object can be more efficient. The Prototype pattern, through the Prototype class, provides a structured way to leverage this efficiency.

📌 Why Should You Care?:


1. Ease of Use:

The Prototype class simplifies the process of cloning objects. You don't need to
remember the intricacies of deep copying or attribute updating -- the class handles it for
you.

2. Scalability:

As your application grows, you might have multiple objects that need to be cloned with
slight variations. The Prototype class provides a scalable solution to manage and clone
these objects systematically.

3. Design Principles:

Using the Prototype pattern adheres to the principle of "programming to an interface, not an implementation." You're not tied to a specific way of creating objects. Instead, you have the flexibility to decide whether to instantiate a new object or clone an existing one based on your application's needs.

In essence, the Prototype class provides a structured and efficient way to implement the
Prototype design pattern in Python. It abstracts the cloning process, ensuring consistency and
flexibility, making it easier for developers to leverage the benefits of object cloning in their
applications.

Let's see one example WITH and then WITHOUT Prototype
Design Pattern in Python.
📌 Without Prototype Design Pattern:
Consider a scenario where we have a class ComplexObject that takes a significant amount of time
to instantiate. This could be because it fetches data from a database, computes some values, or
any other costly operation.

class ComplexObject:
    def __init__(self, data):
        # Simulating a costly operation
        import time
        time.sleep(2)
        self.data = data

    def display(self):
        print(self.data)

# Creating two objects
obj1 = ComplexObject("Data for Object 1")
obj2 = ComplexObject("Data for Object 2")

obj1.display()
obj2.display()

📌 Issues with the above code:


1. Every time we create a new instance of ComplexObject , we have to wait for the costly
operation to complete.

2. If we want to create a new object that's similar to an existing object but with some minor
changes, we still have to go through the costly instantiation process.

📌 With Prototype Design Pattern:


To implement the Prototype Design Pattern, we'll use Python's copy module, which provides the
deepcopy method for creating deep copies of objects.

import copy

class ComplexObject:
    def __init__(self, data):
        # Simulating a costly operation
        import time
        time.sleep(2)
        self.data = data

    def display(self):
        print(self.data)

    def clone(self, data=None):
        # Create a deep copy of the current object
        new_obj = copy.deepcopy(self)
        # If there's new data provided, update the object's data
        if data:
            new_obj.data = data
        return new_obj

# Creating an object
obj1 = ComplexObject("Data for Object 1")
obj1.display()

# Using the prototype pattern to create a new object based on obj1
obj2 = obj1.clone("Data for Object 2")
obj2.display()

📌 Benefits of using the Prototype Design Pattern:


1. We only go through the costly instantiation process once. For subsequent objects that are
similar, we just clone the existing object.

2. It's easier to produce variations of an object since we can clone and then modify.

3. Reduces the need for subclasses.

In the refactored code, the clone method in the ComplexObject class creates a deep copy of the
current object and allows for modifications if needed. This way, we can quickly produce new
objects based on existing ones without going through the costly instantiation process again.

Let's delve into the details of how the refactored code with the Prototype Design Pattern addresses the issues in the original code.
📌 Issue 1: Every time we create a new instance of ComplexObject , we have to wait for the costly
operation to complete.

Solution with Prototype Design Pattern: In the refactored code, the clone method is
introduced in the ComplexObject class. This method uses the deepcopy function from the copy
module to create a new instance of the object. The crucial point here is that when we use the
clone method, we bypass the __init__ method of the ComplexObject class. This means that
the costly operation (in our example, the time.sleep(2) ) is not executed when creating a new
object using the clone method.

So, if we have an existing instance of ComplexObject and we want to create another similar
object, we can use the clone method instead of instantiating a new object from scratch. This way,
we avoid the costly operation and save time.
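To make that point concrete, here is a minimal, self-contained sketch. The `Costly` class and the 1-second sleep are illustrative stand-ins for the book's `ComplexObject`; timing both paths shows that `deepcopy` never re-runs `__init__`:

```python
import copy
import time

class Costly:
    def __init__(self):
        time.sleep(1)  # stand-in for the expensive setup work
        self.data = "payload"

start = time.time()
original = Costly()               # pays the full setup cost
init_seconds = time.time() - start

start = time.time()
clone = copy.deepcopy(original)   # __init__ is NOT called again
clone_seconds = time.time() - start

print(init_seconds > 0.9)   # True  -> instantiation waited on the setup
print(clone_seconds < 0.5)  # True  -> cloning skipped it entirely
print(clone.data)           # payload
```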

📌 Issue 2: If we want to create a new object that's similar to an existing object but with some
minor changes, we still have to go through the costly instantiation process.

Solution with Prototype Design Pattern: The clone method not only creates a new instance of
the object but also allows for modifications to the cloned object. In our refactored code, the
clone method accepts an optional data parameter. If provided, this new data replaces the data
attribute of the cloned object.

This flexibility means that we can quickly produce variations of an existing object. For instance, if
we have an object with the data "Data for Object 1" and we want another object with the data
"Data for Object 2", we don't need to create a new object from scratch. Instead, we can clone the
existing object and modify its data attribute, all while bypassing the costly instantiation process.

📌 Summary: The Prototype Design Pattern, as implemented in the refactored code, provides an
efficient way to create new objects based on existing ones without repeatedly undergoing costly
initialization processes. This not only enhances performance but also offers a more flexible
approach to object creation, especially when objects have minor variations.

Example-1 - Real-life Use-case:


Imagine you're building a game where players can customize their characters. Each character has
a set of attributes like health, strength, and abilities. Players can create a character, customize it,
and then save that customization as a template. Later, they might want to create a new character
based on a previously saved template but with some modifications.

In this scenario, the Prototype pattern can be used to clone the character template and then apply
the modifications.

import copy

class Prototype:
    """Registry that stores prototype objects and hands out modified deep copies."""
    def __init__(self):
        self._objects = {}

    def register_object(self, name, obj):
        self._objects[name] = obj

    def clone(self, name, **attrs):
        obj = copy.deepcopy(self._objects.get(name))
        obj.__dict__.update(attrs)
        return obj

class GameCharacter:
    def __init__(self, health=100, strength=10, abilities=None):
        self.health = health
        self.strength = strength
        # Copy the list so characters never share the same abilities object
        self.abilities = list(abilities or [])

    def __str__(self):
        return (f"Health: {self.health}, Strength: {self.strength}, "
                f"Abilities: {', '.join(self.abilities)}")

# Create a character template
warrior_template = GameCharacter(health=150, strength=20,
                                 abilities=["slash", "dash"])

# Register the template
prototype = Prototype()
prototype.register_object("warrior", warrior_template)

# Clone the template and modify
archer = prototype.clone("warrior", health=100, strength=15,
                         abilities=["shoot", "dash"])
print(archer)

📌 In this example, we first create a GameCharacter class that represents a character in the
game. We then create a warrior template with specific attributes. Using the Prototype pattern, we
can clone this template to create an archer character with some modifications.

📌 The deep copy ensures that the abilities list of the cloned object is independent of the
original object. This means modifying the abilities of the archer won't affect the warrior template.

Under-the-hood:

📌 When you perform a deep copy, Python recursively copies all objects referenced by the original object. This means if your object contains references to lists, dictionaries, or other objects, a new, separate instance of each of those will be created. This ensures that the cloned object is entirely independent of the original.
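A quick sketch makes the difference visible, contrasting `copy.copy` (shallow) with `copy.deepcopy`:

```python
import copy

original = {"abilities": ["slash", "dash"]}

shallow = copy.copy(original)    # the inner list is shared
deep = copy.deepcopy(original)   # the inner list is a fresh object

original["abilities"].append("parry")

print(shallow["abilities"])  # ['slash', 'dash', 'parry'] -- shared with original
print(deep["abilities"])     # ['slash', 'dash'] -- fully independent
```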
📌 The __dict__.update(attrs) method is used to update the attributes of the cloned object.
The __dict__ attribute of an object is a dictionary containing the object's writable attributes.
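Here is a small sketch of that mechanism in isolation (the `Profile` class is a made-up example, not part of the pattern code above):

```python
import copy

class Profile:
    def __init__(self, name, role):
        self.name = name
        self.role = role

original = Profile("Alice", "admin")
clone = copy.deepcopy(original)

# __dict__ is a plain dict of the instance's writable attributes,
# so update() overwrites only the keys we pass in
clone.__dict__.update({"name": "Bob"})

print(original.name)  # Alice -- untouched
print(clone.name)     # Bob   -- overridden
print(clone.role)     # admin -- copied, not overridden
```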

📌 Using the Prototype pattern can be more efficient than creating new instances from scratch,
especially when the instantiation process is resource-intensive or involves database operations.

In conclusion, the Prototype Design Pattern provides a mechanism to clone objects, ensuring that
the new object is independent of the original. This pattern is particularly useful in scenarios where
object creation is costly, and it's more efficient to copy an existing instance.

Example-2 - Earlier I stated one of the Prototype pattern's benefits: "📌 Using the Prototype pattern can be more efficient than creating new instances from scratch, especially when the instantiation process is resource-intensive or involves database operations."
Let's see a full example of that.

Let's delve into a scenario where the instantiation process is resource-intensive, and see how the
Prototype pattern can offer a more efficient solution.

Scenario: Database Operations


Imagine you have a system where you need to fetch user profiles from a database. Each user
profile contains a significant amount of data, including user details, preferences, transaction
history, etc. Fetching this data from the database every time you need a user profile can be
resource-intensive and slow.

However, in many cases, you might need to create a new user profile based on an existing one,
with only a few modifications. Instead of fetching all the data again from the database, you can
clone the existing profile and make the necessary changes.

Code:
Let's see how this can be implemented:

import copy
import time

class Database:
    def fetch_user_profile(self, user_id):
        # Simulate a time-consuming database operation
        time.sleep(2)
        return {
            "user_id": user_id,
            "name": "John Doe",
            "preferences": ["reading", "traveling"],
            "transaction_history": ["purchase1", "purchase2"]
        }

class UserProfile:
    def __init__(self, user_id):
        db = Database()
        user_data = db.fetch_user_profile(user_id)
        self.user_id = user_data["user_id"]
        self.name = user_data["name"]
        self.preferences = user_data["preferences"]
        self.transaction_history = user_data["transaction_history"]

    def __str__(self):
        return (f"UserID: {self.user_id}, Name: {self.name}, "
                f"Preferences: {self.preferences}, "
                f"Transaction History: {self.transaction_history}")

class Prototype:
    def __init__(self):
        self._objects = {}

    def register_object(self, name, obj):
        self._objects[name] = obj

    def clone(self, name, **attrs):
        obj = copy.deepcopy(self._objects.get(name))
        obj.__dict__.update(attrs)
        return obj

# Fetching the original user profile from the database
original_user = UserProfile(1)
print(original_user)

# Register the original user profile with the prototype
prototype = Prototype()
prototype.register_object("user1", original_user)

# Clone the original user profile and make modifications
new_user = prototype.clone("user1", user_id=2, name="Jane Doe")
print(new_user)

📌 In the above code, the Database class simulates a time-consuming operation to fetch a user
profile. The UserProfile class fetches the user profile from the database upon instantiation.

📌 The first time we create a UserProfile object, it fetches the data from the database, which
takes time. However, when we need a new user profile based on the existing one, we use the
Prototype class to clone the original profile and make the necessary modifications. This avoids
the need to hit the database again, saving time and resources.

📌 The deep copy ensures that the cloned user profile is independent of the original. This means
that changes to the preferences or transaction_history of the new user won't affect the
original user.

In this scenario, the Prototype pattern provides a more efficient way to create new user profiles
based on existing ones without repeatedly incurring the cost of database operations.

But let's take a more detailed, step-by-step look into HOW exactly the Prototype Design pattern helped here.

The Problem:
When you have a system that deals with user profiles, there are often scenarios where you need
to create a new profile that is very similar to an existing one. For instance, you might want to
create a profile for a new user that shares many attributes with an existing user, but with a few
changes.

If you were to approach this without the Prototype pattern, you might end up fetching the entire
profile from the database, making the necessary modifications, and then saving it back. This
approach has a few drawbacks:

1. Database Overhead: Fetching data from a database, especially if it's a remote database, can
be time-consuming and resource-intensive. If you're doing this repeatedly, it can slow down
your application and put unnecessary load on the database.

2. Data Consistency: Every time you fetch data from the database, there's a chance (however
small) that the data might have changed since the last fetch. This can lead to inconsistencies.

The Solution: Using the Prototype Pattern


The Prototype pattern offers a solution to this problem. Instead of fetching the profile from the
database every time, you fetch it once and then clone it for subsequent profiles. This cloned
profile can then be modified as needed.

In the provided code:

1. Initial Fetch: The first time we create a UserProfile object ( original_user ), it fetches the
data from the database. This is done using the Database class's fetch_user_profile
method.

class UserProfile:
    def __init__(self, user_id):
        db = Database()
        user_data = db.fetch_user_profile(user_id)
        # ... attributes are then populated from user_data

original_user = UserProfile(1)

2. Registering the Prototype: Once we have the original_user profile, we register it with the
Prototype class. This means we're telling the Prototype class, "Hey, this is a profile I might
want to clone in the future."

# Register the original user profile with the prototype
prototype = Prototype()
prototype.register_object("user1", original_user)

3. Cloning the Prototype: When we need a new profile based on the original_user , instead
of going back to the database, we ask the Prototype class to give us a clone of the
original_user . This is done using the clone method of the Prototype class. This method
creates a deep copy of the original_user , ensuring it's a completely independent object.

# Clone the original user profile and make modifications
new_user = prototype.clone("user1", user_id=2, name="Jane Doe")

4. Modifying the Clone: After cloning, we can make any necessary modifications to the new
profile. In the code, we change the user_id and name attributes of the cloned profile.

Benefits:
1. Efficiency: Since we're not hitting the database for every new profile, we save on the time
and resources that would be used in database operations.

2. Consistency: Since we're working with a clone of the original profile, we ensure that the base
data is consistent across all cloned profiles.

3. Flexibility: The Prototype pattern allows us to easily create variations of the original profile.
We can make as many clones as we want and modify each one independently.

In essence, by using the Prototype pattern in this scenario, we're optimizing our system by
reducing the number of database calls and ensuring consistent and flexible user profile creation.
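One way to verify that claim is to count the database hits. In this sketch the `Database.calls` counter is an addition for illustration only, and the slow sleep is omitted to keep it brief:

```python
import copy

class Database:
    calls = 0  # illustration-only counter: how many times the fetch ran

    def fetch_user_profile(self, user_id):
        Database.calls += 1
        return {"user_id": user_id, "name": "John Doe"}

class UserProfile:
    def __init__(self, user_id):
        data = Database().fetch_user_profile(user_id)
        self.user_id = data["user_id"]
        self.name = data["name"]

original = UserProfile(1)         # first (and only) database call
clone = copy.deepcopy(original)   # no database call happens here
clone.user_id, clone.name = 2, "Jane Doe"

print(Database.calls)   # 1
print(original.name)    # John Doe
print(clone.name)       # Jane Doe
```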

Example-3 - DataFrame and Series Cloning with the Prototype design pattern: performing operations that don't modify the original DataFrame or Series but return a new, modified object.

Scenario:
Imagine a scenario where we have a factory that produces DataFrames for different departments
in a company. Each department's DataFrame has a standard structure, but the data varies. Instead
of creating a new DataFrame from scratch for each department, we can use a prototype of a
standard DataFrame and clone it for each department, filling in the specific data.

Code Example:

import pandas as pd
import copy

class DataFrameFactory:
    def __init__(self, prototype_df):
        self.prototype_df = prototype_df

    def create_dataframe(self, data):
        cloned_df = copy.deepcopy(self.prototype_df)
        # Expand the cloned frame to the number of rows in the new data,
        # since the prototype only carries a single placeholder row
        rows = max(len(values) for values in data.values())
        cloned_df = cloned_df.reindex(range(rows))
        for column in cloned_df.columns:
            if column in data:
                cloned_df[column] = data[column]
        return cloned_df

# Define a prototype DataFrame with standard structure
prototype_df = pd.DataFrame({
    'Name': [None],
    'Role': [None],
    'Salary': [None]
})

factory = DataFrameFactory(prototype_df)

# Data for different departments
hr_data = {
    'Name': ['Alice', 'Bob'],
    'Role': ['HR Manager', 'HR Executive'],
    'Salary': [70000, 50000]
}

finance_data = {
    'Name': ['Charlie', 'David'],
    'Role': ['Finance Manager', 'Accountant'],
    'Salary': [75000, 52000]
}

hr_df = factory.create_dataframe(hr_data)
finance_df = factory.create_dataframe(finance_data)

print("HR DataFrame:")
print(hr_df)

print("\nFinance DataFrame:")
print(finance_df)

Explanation:
📌 We start by defining a DataFrameFactory class. This class will be responsible for producing
DataFrames based on a prototype.

📌 The DataFrameFactory class has a method create_dataframe that takes data as input. It
clones the prototype DataFrame and fills it with the provided data.

📌 We define a prototype DataFrame prototype_df with a standard structure that includes


columns 'Name', 'Role', and 'Salary', but with placeholder data.

📌 Using the factory, we create DataFrames for the HR and Finance departments by providing
their specific data.

📌 The Prototype design pattern is evident here in the way we use a prototype DataFrame and
clone it for different departments, rather than creating a new DataFrame from scratch each time.

This approach ensures that all department DataFrames have a consistent structure, as defined by
the prototype, while allowing for flexibility in the data they contain. It also provides efficiency
benefits, especially if the prototype DataFrame had more complex structures or default data that
we wanted to preserve across clones.

Example-4 - Extension Modules creation with pandas using the Prototype Design pattern: "pandas has various extension arrays and types. When creating custom extension arrays or types, one might need to ensure that they can be easily cloned or copied without re-initializing the entire data structure. The Prototype pattern can be handy here."

Scenario:
Suppose you're working with financial data in pandas, and you want to create a custom extension
array to handle monetary values in multiple currencies. This custom array should be able to store
values like "10 USD", "15 EUR", etc., and provide functionality to convert between currencies.

To make this efficient, especially when creating new monetary arrays based on existing ones,
you'd want to use the Prototype design pattern to clone and modify arrays without re-initializing
the entire data structure.

Code Implementation:

import copy
import pandas as pd
from pandas.api.extensions import ExtensionArray

class MonetaryArray(ExtensionArray):
    def __init__(self, data):
        self.data = list(data)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        return self.data[index]

    def __setitem__(self, key, value):
        self.data[key] = value

    def __repr__(self):
        return f"MonetaryArray({self.data})"

    # Implementing the Prototype pattern.
    # Note: modifications is a plain dict of {index: new_value} pairs --
    # **kwargs cannot be used here because keyword names must be strings,
    # while our indices are integers.
    def clone(self, modifications=None):
        cloned_array = copy.deepcopy(self)
        for index, value in (modifications or {}).items():
            cloned_array[index] = value
        return cloned_array

# Sample MonetaryArray
arr = MonetaryArray(["10 USD", "15 EUR", "20 GBP"])
print("Original Array:", arr)

# Clone and modify the MonetaryArray
modified_arr = arr.clone({1: "25 EUR", 2: "30 GBP"})
print("Modified Array:", modified_arr)

Explanation:
📌 We start by defining a MonetaryArray class that extends the ExtensionArray from pandas.
This class will represent our custom extension array for monetary values.

📌 The MonetaryArray class has basic methods to handle the data, such as __len__ ,
__getitem__ , and __setitem__ .

📌 The clone method in the MonetaryArray class is where the Prototype design pattern is
implemented. This method creates a deep copy of the current array and then applies any
modifications provided as arguments. This allows us to create new monetary arrays based on
existing ones without re-initializing the entire data structure.

📌 In the sample usage, we create an original MonetaryArray with some values. We then clone
this array and modify some of its values using the clone method.

By using the Prototype design pattern in this scenario, we can efficiently create and modify
custom extension arrays in pandas, ensuring flexibility and performance.

1. Why did the MonetaryArray class have to create all the basic methods to handle the data, such as __len__, __getitem__, and __setitem__?
When creating a custom extension array in pandas, you're essentially defining a new type of array-
like structure. The pandas framework expects this structure to behave like a typical Python
sequence (like a list or a tuple). To ensure this behavior, you need to implement certain "magic" or
"dunder" methods:

📌 __len__ : This method returns the length of the array. It's used whenever the built-in len()
function is called on an instance of the MonetaryArray .

📌 __getitem__ : This method allows for indexing into the array. For instance, if you want to
retrieve the value at the second position of the array ( arr[1] ), this method is called.

📌 __setitem__ : This method allows for setting the value at a specific index in the array. For
example, if you want to update the value at the second position ( arr[1] = "25 EUR" ), this
method is invoked.

By defining these methods, the MonetaryArray can be used seamlessly within the pandas
ecosystem, and it behaves as expected when used in typical Python scenarios.

2. How is the Prototype design pattern being used here, and
how does it help?
Let's break down the usage of the Prototype design pattern in the given example:

📌 Step 1: Define the Prototype


In our case, the MonetaryArray class itself acts as the prototype. Any instance of this class can be
cloned to create a new, separate instance.

arr = MonetaryArray(["10 USD", "15 EUR", "20 GBP"])

Here, arr is an instance of our prototype.

📌 Step 2: Cloning the Prototype


The clone method in the MonetaryArray class is responsible for creating a clone of the current
instance:

def clone(self, modifications=None):
    cloned_array = copy.deepcopy(self)
    for index, value in (modifications or {}).items():
        cloned_array[index] = value
    return cloned_array

Here's what happens:

We use copy.deepcopy(self) to create a deep copy of the current instance. This ensures
that the cloned array is entirely independent of the original.

In Python, self is a convention for referring to an instance of a class from within the class
itself. When you create an instance of a class, self in any method of that class refers to the
current instance. It's automatically passed as the first argument to instance methods.

When you call the clone method on an instance of MonetaryArray , the self inside that
method refers to the instance on which the method was called.

For example, if you have:

arr = MonetaryArray(["10 USD", "15 EUR", "20 GBP"])

And then you call:

modified_arr = arr.clone({1: "25 EUR", 2: "30 GBP"})

Inside the clone method, self refers to arr .

Now, the line cloned_array = copy.deepcopy(self) is using Python's copy module to create a
deep copy of self (which, in this context, is arr ).

We then apply any modifications provided to the cloned array. This is done using the
__setitem__ method we defined earlier.

Finally, the modified clone is returned.

📌 Step 3: Using the Cloned Instance


When we call the clone method:

modified_arr = arr.clone({1: "25 EUR", 2: "30 GBP"})

We get a new MonetaryArray instance ( modified_arr ) that is based on the original ( arr ) but
with the specified modifications.

Benefits of Using the Prototype Design Pattern Here:


1. Efficiency: Instead of creating a new MonetaryArray from scratch and populating it with
data (which might involve complex logic or computations), we simply clone an existing one
and make the necessary modifications. This can be more efficient, especially if the
initialization process is resource-intensive.

2. Consistency: By cloning from a prototype, we ensure that all instances of MonetaryArray


have a consistent structure and behavior. This can be crucial in scenarios where consistency
across instances is vital.

3. Flexibility: The Prototype pattern allows us to easily create variations of the original object.
We can make as many clones as we want and modify each one independently, providing a lot
of flexibility in how we use and manage these objects.

Earlier I said: "We apply any modifications provided to the cloned array using the __setitem__ method we defined earlier."
But we could not see where in the code exactly the __setitem__ method was used?

In the clone method of the MonetaryArray class, we have the following lines:

for index, value in modifications.items():
    cloned_array[index] = value

Here, we're iterating over the modifications dictionary, which contains indices as keys and new
values as values. For each key-value pair, we're updating the cloned_array with the new value at
the specified index.

When we use the line cloned_array[index] = value , it's essentially a shorthand for calling the
__setitem__ method on the cloned_array object. In other words, the above line is equivalent
to:

cloned_array.__setitem__(index, value)

So, even though we don't explicitly call the __setitem__ method in the clone method, it's
implicitly invoked when we use the assignment operation on the cloned_array with an index.

In Python, special methods like __setitem__ allow us to define custom behaviors for built-in
operations. In this case, the __setitem__ method we defined for the MonetaryArray class allows
us to customize how values are set at specific indices in our custom array.

The concept of using the __setitem__ method applies broadly in Python, but with some nuances. Let's delve deeper.
📌 Simply put, In Python dictionaries, the __setitem__ method is used to set a key-value pair.
When you do something like my_dict[key] = value , under the hood, Python is calling
my_dict.__setitem__(key, value) .

In Python, many built-in operations or syntactic sugar are backed by special methods (often
referred to as "magic" or "dunder" methods because they have double underscores at the
beginning and end). When you use these built-in operations on objects, the corresponding special
methods are implicitly called.

For the assignment operation using indexing (i.e., obj[index] = value ), the special method that
gets invoked is __setitem__ .

Here's a breakdown of how this works for various objects:

1. Lists: When you do something like my_list[2] = 'value' , you're implicitly calling the
__setitem__ method of the list object.

2. Dictionaries: For dictionaries, my_dict[key] = 'value' is also an implicit call to the
__setitem__ method of the dictionary object.

3. Custom Objects: If you define a custom class and implement the __setitem__ method,
then instances of that class will also use this method when the assignment operation with
indexing is used.

4. Objects without __setitem__ : If an object doesn't have a __setitem__ method
implemented and you try to use the assignment operation with indexing, you'll get an error.
For instance, tuples in Python are immutable, so they don't have a __setitem__ method.
Trying to assign a value using indexing on a tuple will raise a TypeError .

Here's a simple example to illustrate this:

class Example:
    def __setitem__(self, index, value):
        print(f"Setting value at index {index} to {value}")

obj = Example()
obj[1] = "Hello"

When you run the above code, it will print: Setting value at index 1 to Hello .

This demonstrates that the __setitem__ method was called when we used the assignment
operation with indexing on our custom object.

In summary, the behavior of obj[index] = value being a shorthand for obj.__setitem__(index, value) is consistent across Python objects that support indexed assignment. However, not all objects support this operation, and whether or not they do is determined by the presence of the __setitem__ method.
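The negative case, a tuple with no __setitem__, is just as easy to demonstrate:

```python
t = (1, 2, 3)
raised = False
try:
    t[0] = 99        # tuples define no __setitem__
except TypeError:
    raised = True

print(raised)  # True -- indexed assignment on a tuple raises TypeError
print(t)       # (1, 2, 3) -- the tuple is unchanged
```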

What exactly is the pandas ExtensionArray that I used in the
above example?
Within pandas , the ExtensionArray is a class that provides a way to store custom data types not
natively supported by pandas . It's a part of the pandas extension system.

Here's a brief overview of ExtensionArray :

1. Custom Data Types: Before the introduction of the extension system, if you wanted to use a
custom data type with pandas , you had to use Python objects, which are slow and memory-
inefficient. With the ExtensionArray , you can create custom data types that are efficient and
can be used just like native pandas data types.

2. Interface: The ExtensionArray is essentially an interface that you need to implement. It
defines a set of methods that your custom array must implement to be used within pandas .

3. Use Cases: Some of the use cases for ExtensionArray include:

Storing arrays of custom objects efficiently.

Creating arrays with custom NA (missing value) representations.

Implementing custom operations for data types not natively supported by pandas .

4. Examples: pandas itself uses the ExtensionArray interface for some of its own data types.
For instance, the Categorical data type in pandas is backed by an ExtensionArray .

5. Integration with DataFrames and Series: Once you've defined an ExtensionArray for
your custom data type, you can use it within pandas DataFrames and Series just like any
other data type. This means you can perform operations, indexing, slicing, etc., on your
custom data type seamlessly.

6. Performance: One of the main benefits of using ExtensionArray is performance. By
defining custom data types with ExtensionArray , you can achieve performance that's
comparable to native pandas data types.

In summary, the ExtensionArray in pandas provides a way to extend the capabilities of pandas
to support custom data types efficiently. If you're interested in creating a custom data type for use
in pandas , the ExtensionArray is the place to start.

Obviously, you can use copy.deepcopy without creating a Prototype. But having the Prototype
allows you to work with all the copies from the same code, so you can add exceptions, log
messages, or whatever you want when copying without altering all the classes.

Applicability
Use the Prototype pattern when you have a lot of objects to copy.

Use the Prototype pattern when you want to be able to copy objects at runtime while being
able to modify their attributes.

Use the Prototype pattern when you don't want the copy method to be dependent on the
implementation of the classes.

Advantages
Easy implementation: unlike all the creational patterns, it's easy to implement a Prototype
and it doesn't require a lot of classes.

More code flexibility: because you can alter the values of the objects you want to copy, so you
don't have to create tons of subclasses.

Disadvantage
The main disadvantage is that it can add unnecessary complexity if you don't work with a lot
of objects. So, for small projects, it's better not to use this pattern.

🐍🚀 The Abstract Factory pattern in Python 🚀🐍

The Abstract Factory pattern is a creational design pattern that provides an interface for creating
families of related or dependent objects without specifying their concrete classes. It's particularly
useful when a system needs to maintain flexibility and scalability. Let's delve into its main
principles:

📌 Interface for Creating Families of Objects: The primary role of an Abstract Factory is to
declare an interface for creating products. Each "product" is a member of a "family," and these
families are meant to be used together.

📌 Concrete Factories Implement the Interface: Concrete factories implement this interface to
produce objects that conform to a family. These factories take care of the instantiation of the
family of objects.

📌 Products Share a Common Interface: Within each family, the products share a common
interface. This ensures that the family of objects created by the factory are interchangeable and
can work together seamlessly.
📌 Client Code is Isolated from Concrete Products: The client code interacts solely with the
abstract factory and the abstract products, thereby isolating itself from the concrete classes. This
adheres to the Dependency Inversion Principle, which states that high-level modules should not
depend on low-level modules; both should depend on abstractions.

📌 Ease of Extensibility: To add a new family of products, you typically need to create a new
concrete factory and implement the abstract factory interface. This makes the system highly
extensible.

📌 Consistency Among Products: Since a factory is responsible for creating an entire family of
related products, it's easier to ensure that these products will function correctly together.

📌 Separation of Concerns: The pattern separates the code for complex object creation from the
code that actually uses these objects. This makes the codebase easier to manage and test.

📌 Single Responsibility Principle: Each concrete factory is responsible for creating objects of a
single family but can create as many objects from that family as needed. This aligns with the Single
Responsibility Principle, which states that a class should have only one reason to change.

📌 Open/Closed Principle: The system is open for extension but closed for modification. You can
introduce new types of products or families by adding new concrete factory classes, without
altering existing code. This adheres to the Open/Closed Principle, which suggests that software
entities should be open for extension but closed for modification.

In summary, the Abstract Factory pattern is a robust architectural pattern that helps manage
object creation complexity, promotes consistency among objects, and facilitates a high level of
flexibility and extensibility. It does so by decoupling the client code that needs some objects from
the classes that actually produce those objects.

Let's see an example WITHOUT and then WITH the Abstract
Factory pattern in Python.

1. Code without the Abstract Factory pattern

Consider a GUI library that provides buttons and checkboxes. If we want to support multiple
themes (e.g., Windows and MacOS), without the Abstract Factory pattern, we might do something
like this:

class WindowsButton:
    def render(self):
        return "Rendering a Windows style button"

class MacOSButton:
    def render(self):
        return "Rendering a MacOS style button"

class WindowsCheckbox:
    def render(self):
        return "Rendering a Windows style checkbox"

class MacOSCheckbox:
    def render(self):
        return "Rendering a MacOS style checkbox"

def create_ui(theme):
    if theme == "Windows":
        button = WindowsButton()
        checkbox = WindowsCheckbox()
    elif theme == "MacOS":
        button = MacOSButton()
        checkbox = MacOSCheckbox()
    else:
        raise ValueError("Unknown theme")

    print(button.render())
    print(checkbox.render())

create_ui("Windows")
create_ui("MacOS")

📌 Issues with the above approach:


📌 The create_ui function is directly responsible for creating objects of buttons and checkboxes.
This violates the Single Responsibility Principle.

📌 If we want to add support for another theme, we have to modify the create_ui function,
which violates the Open/Closed Principle.

📌 The system is not scalable. For every new widget or theme, we have to modify existing code.

2. Refactored Code using the Abstract Factory pattern

To solve the above issues, we'll introduce the Abstract Factory pattern:

from abc import ABC, abstractmethod

# Abstract Factory and its concrete implementations

class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass

    @abstractmethod
    def create_checkbox(self):
        pass

class WindowsFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()

    def create_checkbox(self):
        return WindowsCheckbox()

class MacOSFactory(GUIFactory):
    def create_button(self):
        return MacOSButton()

    def create_checkbox(self):
        return MacOSCheckbox()

# Abstract products and their concrete implementations remain the same

# ... [WindowsButton, MacOSButton, WindowsCheckbox, MacOSCheckbox classes here]

def create_ui(factory: GUIFactory):
    button = factory.create_button()
    checkbox = factory.create_checkbox()

    print(button.render())
    print(checkbox.render())

# Client code
windows_factory = WindowsFactory()
create_ui(windows_factory)

macos_factory = MacOSFactory()
create_ui(macos_factory)

📌 Advantages of the refactored approach:


📌 The creation of objects is abstracted away from the main logic, adhering to the Single
Responsibility Principle.

📌 New themes can be added without modifying the existing code, adhering to the Open/Closed
Principle.

📌 The system is now more scalable. For every new widget or theme, we just need to add a new
factory without modifying the existing factories or products.

In conclusion, the Abstract Factory pattern provides a way to encapsulate a group of individual
factories that have a common theme without specifying their concrete classes. This promotes
code organization, scalability, and adherence to SOLID principles.

Let's delve deeper into how the refactored code with the Abstract Factory pattern addresses the issues of the original code.

📌 Issue 1: The create_ui function in the original code was directly responsible for creating
objects of buttons and checkboxes, violating the Single Responsibility Principle.

Solution with Abstract Factory: In the refactored code, the responsibility of creating objects is
shifted from the create_ui function to the factories. The create_ui function now only needs to
know about the abstract factory ( GUIFactory ) and doesn't concern itself with the concrete
implementations. This means the function has a single responsibility: to use the factory to create
and render UI components.

📌 Issue 2: In the original code, if we wanted to add support for another theme, we had to modify
the create_ui function, violating the Open/Closed Principle.

Solution with Abstract Factory: With the Abstract Factory pattern, adding support for a new
theme (e.g., "Linux") would involve creating a new factory (e.g., LinuxFactory ) that implements
the GUIFactory interface. The create_ui function remains unchanged. This means the existing
code is closed for modification but open for extension, adhering to the Open/Closed Principle.
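As a concrete sketch of this point, here is what adding a hypothetical Linux theme to the refactored example could look like. The `LinuxFactory`, `LinuxButton`, and `LinuxCheckbox` names are illustrative, and the abstract factory and `create_ui` are repeated so the snippet runs on its own:

```python
from abc import ABC, abstractmethod

class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass

    @abstractmethod
    def create_checkbox(self):
        pass

# Hypothetical new products for the Linux theme
class LinuxButton:
    def render(self):
        return "Rendering a Linux style button"

class LinuxCheckbox:
    def render(self):
        return "Rendering a Linux style checkbox"

# The only new creation code: one extra concrete factory
class LinuxFactory(GUIFactory):
    def create_button(self):
        return LinuxButton()

    def create_checkbox(self):
        return LinuxCheckbox()

# create_ui is unchanged from the refactored example
def create_ui(factory: GUIFactory):
    button = factory.create_button()
    checkbox = factory.create_checkbox()
    print(button.render())
    print(checkbox.render())

create_ui(LinuxFactory())
```

Note that nothing in `create_ui` or the existing factories had to change; the new family plugs into the existing `GUIFactory` contract.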

📌 Issue 3: The original system was not scalable. For every new widget or theme, we had to
modify existing code.

Solution with Abstract Factory: The Abstract Factory pattern promotes scalability in multiple
ways:

1. Adding a new theme: As mentioned above, to support a new theme, we simply introduce a
new factory without touching existing factories or the main UI creation logic.

2. Adding a new UI component: If we want to add a new UI component (e.g., a slider), we would:

Add a method in the GUIFactory abstract class (e.g., create_slider ).

Implement this method in all concrete factories (e.g., WindowsFactory , MacOSFactory ).

Create concrete implementations for the new component for each theme (e.g.,
WindowsSlider , MacOSSlider ).

Even in this case, the main UI creation logic ( create_ui function) remains untouched. It
would only change if we want to utilize the new component in the UI.
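The three steps above can be sketched as follows; the `Slider` classes and `create_slider` method are hypothetical names, and the factories are trimmed down to the new method only, to keep the sketch short:

```python
from abc import ABC, abstractmethod

# Step 3: concrete implementations of the new component, one per theme
class Slider(ABC):
    @abstractmethod
    def render(self):
        pass

class WindowsSlider(Slider):
    def render(self):
        return "Rendering a Windows style slider"

class MacOSSlider(Slider):
    def render(self):
        return "Rendering a MacOS style slider"

# Step 1: the abstract factory grows one method...
class GUIFactory(ABC):
    @abstractmethod
    def create_slider(self):
        pass

# Step 2: ...and every concrete factory implements it
class WindowsFactory(GUIFactory):
    def create_slider(self):
        return WindowsSlider()

class MacOSFactory(GUIFactory):
    def create_slider(self):
        return MacOSSlider()

print(WindowsFactory().create_slider().render())
```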

In essence, the Abstract Factory pattern decouples the creation of objects from the main logic,
ensuring that each part of the code adheres to the Single Responsibility Principle. This decoupling
also ensures that the system remains scalable and extensible, allowing for the easy addition of
new themes or components without major code changes.

Example 1 of Abstract Factory


Web and intranet are two different applications. Both use SQL and NoSQL databases: the web application uses SQL and MongoDB, while the intranet application uses Oracle and OrientDB. Both have different implementations.

# Abstract Factory Design Principle

from abc import ABC, abstractmethod

class db_factory(ABC):
    @abstractmethod
    def create_no_sql_db(self):
        pass

    @abstractmethod
    def create_sql_db(self):
        pass

class web_factory(db_factory):
    def create_no_sql_db(self):
        return mongodb()

    def create_sql_db(self):
        return SQL()

class intranet_factory(db_factory):
    def create_no_sql_db(self):
        return orientdb()

    def create_sql_db(self):
        return Oracle()

class sql_database(ABC):
    @abstractmethod
    def save(self):
        pass

    @abstractmethod
    def select(self):
        pass

class SQL(sql_database):
    def save(self):
        print("SQL save called.")

    def select(self):
        print("SQL select called.")

class Oracle(sql_database):
    def save(self):
        print("Oracle save called.")

    def select(self):
        print("Oracle select called")

class no_sql_database(ABC):
    @abstractmethod
    def insert(self):
        pass

    @abstractmethod
    def get_object(self):
        pass

class mongodb(no_sql_database):
    def insert(self):
        print("mongodb insert called.")

    def get_object(self):
        print("mongodb get_object called.")

class orientdb(no_sql_database):
    def insert(self):
        print("orientdb insert called.")

    def get_object(self):
        print("orientdb get_object called.")

class client:
    def get_database(self):
        abs_factory = web_factory()
        sql_factory = abs_factory.create_sql_db()
        sql_factory.save()
        sql_factory.select()

        # -------------------------------------------
        abs_factory = web_factory()
        sql_factory = abs_factory.create_no_sql_db()
        sql_factory.insert()
        sql_factory.get_object()

        # -------------------------------------------
        abs_factory = intranet_factory()
        ora_factory = abs_factory.create_sql_db()
        ora_factory.save()
        ora_factory.select()

        # -------------------------------------------
        abs_factory = intranet_factory()
        ora_factory = abs_factory.create_no_sql_db()
        ora_factory.insert()
        ora_factory.get_object()

client = client()
client.get_database()

SQL save called.
SQL select called.
mongodb insert called.
mongodb get_object called.
Oracle save called.
Oracle select called
orientdb insert called.
orientdb get_object called.

Above is an implementation of the Abstract Factory design pattern. This pattern allows you to
produce families of related or dependent objects without specifying their concrete classes. Let's
break down the code.

📌 The db_factory class is an abstract class that serves as the Abstract Factory. It declares two
abstract methods: create_no_sql_db and create_sql_db . These methods are intended to
create objects that conform to the no_sql_database and sql_database interfaces, respectively.

📌 The web_factory and intranet_factory classes inherit from db_factory . These are
Concrete Factories. They implement the abstract methods and return instances of concrete
classes ( mongodb , SQL , Oracle , orientdb ) that implement the no_sql_database or
sql_database interfaces.

📌 The sql_database and no_sql_database classes are abstract classes that define the
interface for SQL and NoSQL databases. They declare methods like save , select , insert , and
get_object that must be implemented by any concrete classes.

📌 The SQL , Oracle , mongodb , and orientdb classes are Concrete Products. They implement
the sql_database and no_sql_database interfaces and provide the actual implementation for
the methods declared in those interfaces.

📌 The client class demonstrates how to use these factories and products. It creates instances
of web_factory and intranet_factory , uses them to create database objects, and then calls
methods on those objects.

The Abstract Factory pattern is particularly useful when the system needs to be independent of
how its objects are created, composed, and represented, and the system is configured with
multiple families of objects.

For Example 1 above - how it adheres to the principles and requirements of the Abstract Factory design pattern in several ways:
📌 Interface for Creating Families of Objects: The db_factory class serves as the Abstract
Factory, declaring an interface ( create_no_sql_db and create_sql_db ) for creating families of
database objects. These families are SQL and NoSQL databases.

📌 Concrete Factories Implement the Interface: The web_factory and intranet_factory classes are Concrete Factories that implement the db_factory interface. They produce objects that belong to the SQL and NoSQL families, specifically tailored for web and intranet applications.

📌 Products Share a Common Interface: The sql_database and no_sql_database abstract classes define common interfaces for all SQL and NoSQL databases, respectively. Concrete classes like SQL , Oracle , mongodb , and orientdb implement these interfaces, ensuring that the products are interchangeable within their respective families.

📌 Client Code Isolated from Concrete Products: The client class interacts only with the
abstract factory ( db_factory ) and the product interfaces ( sql_database and no_sql_database ).
It doesn't instantiate concrete classes directly, adhering to the Dependency Inversion Principle.

📌 Ease of Extensibility: Adding a new type of database or a new application context (like
mobile_factory for mobile apps) would involve creating new concrete factories and products
that implement the existing interfaces. This makes the system highly extensible without modifying
existing code.
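A minimal sketch of such an extension, assuming a hypothetical mobile_factory that pairs SQLite with CouchDB (both product choices are illustrative, not from the example), with a trimmed-down db_factory included so the snippet is self-contained:

```python
from abc import ABC, abstractmethod

# Trimmed stand-in for the abstract factory from Example 1
class db_factory(ABC):
    @abstractmethod
    def create_no_sql_db(self):
        pass

    @abstractmethod
    def create_sql_db(self):
        pass

# Hypothetical products for a mobile context
class sqlite_db:
    def save(self):
        print("SQLite save called.")

class couchdb:
    def insert(self):
        print("CouchDB insert called.")

# The new family: one extra concrete factory, existing code untouched
class mobile_factory(db_factory):
    def create_sql_db(self):
        return sqlite_db()

    def create_no_sql_db(self):
        return couchdb()

factory = mobile_factory()
factory.create_sql_db().save()
factory.create_no_sql_db().insert()
```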

📌 Consistency Among Products: The web_factory and intranet_factory ensure that the
SQL and NoSQL databases they create are consistent and can work seamlessly within their
respective application contexts (web or intranet).

📌 Separation of Concerns: The object creation logic is separated from the business logic. The
client class doesn't need to know how to instantiate database objects; it delegates this
responsibility to the abstract factory, making the code easier to manage and test.

📌 Single Responsibility Principle: Each concrete factory ( web_factory , intranet_factory ) is responsible for creating objects of a single family (either web or intranet databases), adhering to the Single Responsibility Principle.

📌 Open/Closed Principle: The design is open for extension but closed for modification. New
types of databases or application contexts can be added without altering the existing factories or
product interfaces.
By adhering to these principles, the code maintains a high level of modularity, flexibility, and
extensibility, which are key benefits of the Abstract Factory pattern.

What's the architectural difference between the regular Factory pattern in Python and the Abstract Factory pattern in Python?
The Factory pattern and the Abstract Factory pattern are both creational design patterns, but they
address different problems and have distinct architectural differences. Let's explore these
differences:

📌 Purpose:
- Factory Pattern: It deals with the problem of creating objects without specifying the exact class of object that will be created. It defines an interface for creating an instance of a class, with its subclasses deciding which class to instantiate.
- Abstract Factory Pattern: It addresses the problem of creating families of related or dependent objects without specifying their concrete classes. It provides an interface for creating families of related or dependent objects.

📌 Number of Abstract Classes:
- Factory Pattern: Typically involves a single creator class (Factory) and a single product class or interface.
- Abstract Factory Pattern: Involves multiple Factory classes and multiple Product classes or interfaces. The pattern defines an interface for creating several related or dependent objects.

📌 Level of Abstraction:
- Factory Pattern: It's about creating objects. The main focus is on using a method to produce instances of one class, without specifying the exact class.
- Abstract Factory Pattern: It's about creating families of related objects. The main focus is on providing a way to produce families of related objects without having to specify concrete classes.

📌 Implementation:
- Factory Pattern: Often involves a method (static or instance) that, based on input or configuration, creates and returns instances of one of several possible classes.
- Abstract Factory Pattern: Involves multiple Factory methods, each responsible for creating a different kind of object. The client interacts with the abstract factory to get the objects, ensuring that it gets a family of related objects.

📌 Extensibility:
- Factory Pattern: To add a new type of product, you might need to modify the factory method logic or extend the factory class.
- Abstract Factory Pattern: To introduce a new family of products, you can add a new concrete factory without modifying existing code, adhering to the Open/Closed Principle.

📌 Use Cases:
- Factory Pattern: Best suited when there's a need to manage and maintain objects of one particular type, and the exact type might be decided at runtime.
- Abstract Factory Pattern: Useful when the system needs to be independent of how its objects are created, composed, and represented, and the system is configured with multiple families of objects.

In essence, while both patterns deal with object creation, the Factory pattern focuses on a single
product, whereas the Abstract Factory pattern emphasizes a family of products. The Abstract
Factory pattern can be seen as a higher-level abstraction of the Factory pattern.
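The contrast can be condensed into one snippet; the class and function names below are illustrative stand-ins, not taken from the examples above:

```python
from abc import ABC, abstractmethod

class WindowsButton:
    def render(self):
        return "windows button"

class MacOSButton:
    def render(self):
        return "macos button"

class WindowsCheckbox:
    def render(self):
        return "windows checkbox"

# Factory pattern: one creation point for ONE kind of product;
# the concrete class is decided at runtime from the input.
def button_factory(theme):
    return {"windows": WindowsButton, "macos": MacOSButton}[theme]()

# Abstract Factory pattern: an interface for a whole FAMILY of
# related products, with one concrete factory per family.
class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass

    @abstractmethod
    def create_checkbox(self):
        pass

class WindowsFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()

    def create_checkbox(self):
        return WindowsCheckbox()

print(button_factory("windows").render())
print(WindowsFactory().create_checkbox().render())
```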

Example-2 Real-life Use-Case Code
Let's consider a real-life scenario where you have different types of payment gateways like Stripe
and PayPal, and each gateway has different types of payments like one-time and subscription.

from abc import ABC, abstractmethod

class PaymentFactory(ABC):
    @abstractmethod
    def create_one_time_payment(self):
        pass

    @abstractmethod
    def create_subscription_payment(self):
        pass

class StripeFactory(PaymentFactory):
    def create_one_time_payment(self):
        return StripeOneTime()

    def create_subscription_payment(self):
        return StripeSubscription()

class PayPalFactory(PaymentFactory):
    def create_one_time_payment(self):
        return PayPalOneTime()

    def create_subscription_payment(self):
        return PayPalSubscription()

class Payment(ABC):
    @abstractmethod
    def process(self):
        pass

class StripeOneTime(Payment):
    def process(self):
        print("Processing one-time payment through Stripe.")

class StripeSubscription(Payment):
    def process(self):
        print("Processing subscription through Stripe.")

class PayPalOneTime(Payment):
    def process(self):
        print("Processing one-time payment through PayPal.")

class PayPalSubscription(Payment):
    def process(self):
        print("Processing subscription through PayPal.")

class Client:
    def make_payment(self, factory_type):
        factory = factory_type()
        one_time = factory.create_one_time_payment()
        subscription = factory.create_subscription_payment()

        one_time.process()
        subscription.process()

client = Client()
client.make_payment(StripeFactory)
client.make_payment(PayPalFactory)

📌 In this example, PaymentFactory is the Abstract Factory with methods create_one_time_payment and create_subscription_payment .

📌 StripeFactory and PayPalFactory are Concrete Factories. They implement the abstract
methods and return instances of concrete classes ( StripeOneTime , StripeSubscription ,
PayPalOneTime , PayPalSubscription ) that implement the Payment interface.

📌 The Payment interface declares a process method, which is implemented by all concrete
payment types.

📌 The Client class demonstrates how to use these factories. It takes a factory type as an
argument, creates a factory object, and then uses it to create payment objects.

The Abstract Factory pattern allows you to switch easily between different families of related
objects (Stripe and PayPal in this case) by changing just the factory type. This makes the system
more modular, easier to extend, and easier to maintain.
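One common way to "switch easily between families" is to pick the factory class from a configuration value at runtime. The FACTORIES registry and factory_for helper below are illustrative additions, with the factories reduced to stubs so the sketch is self-contained:

```python
# Minimal stand-ins for the factories from the example above
class StripeFactory:
    def create_one_time_payment(self):
        return "Stripe one-time payment"

class PayPalFactory:
    def create_one_time_payment(self):
        return "PayPal one-time payment"

# Hypothetical registry: config string -> factory class
FACTORIES = {
    "stripe": StripeFactory,
    "paypal": PayPalFactory,
}

def factory_for(gateway_name):
    # Look up and instantiate the factory for the configured gateway.
    try:
        return FACTORIES[gateway_name]()
    except KeyError:
        raise ValueError(f"Unknown payment gateway: {gateway_name}")

print(factory_for("stripe").create_one_time_payment())
```

Swapping the whole payment family then becomes a one-line configuration change rather than a code change.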

Example 3 - Real-life use case of the Abstract Factory pattern in Python

Let's consider a scenario involving GUI (Graphical User Interface) elements. Imagine you're building a cross-platform application that needs to run on both Windows and MacOS. Each OS has its own look and feel for GUI elements like buttons, checkboxes, and windows.

Using the Abstract Factory pattern, you can ensure that your application uses the correct GUI
elements for the OS it's running on without hardcoding specific classes.

Here's a Python representation of this scenario:

from abc import ABC, abstractmethod

# Abstract Factory
class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass

    @abstractmethod
    def create_checkbox(self):
        pass

    @abstractmethod
    def create_window(self):
        pass

# Concrete Factory 1: Windows GUI elements
class WindowsGUIFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()

    def create_checkbox(self):
        return WindowsCheckbox()

    def create_window(self):
        return WindowsWindow()

# Concrete Factory 2: MacOS GUI elements
class MacOSGUIFactory(GUIFactory):
    def create_button(self):
        return MacOSButton()

    def create_checkbox(self):
        return MacOSCheckbox()

    def create_window(self):
        return MacOSWindow()

# Abstract Product A: Button
class Button(ABC):
    @abstractmethod
    def paint(self):
        pass

# Concrete Product A1
class WindowsButton(Button):
    def paint(self):
        print("Rendering a button in Windows style.")

# Concrete Product A2
class MacOSButton(Button):
    def paint(self):
        print("Rendering a button in MacOS style.")

# Abstract Product B: Checkbox
class Checkbox(ABC):
    @abstractmethod
    def paint(self):
        pass

# Concrete Product B1
class WindowsCheckbox(Checkbox):
    def paint(self):
        print("Rendering a checkbox in Windows style.")

# Concrete Product B2
class MacOSCheckbox(Checkbox):
    def paint(self):
        print("Rendering a checkbox in MacOS style.")

# Abstract Product C: Window
class Window(ABC):
    @abstractmethod
    def paint(self):
        pass

# Concrete Product C1
class WindowsWindow(Window):
    def paint(self):
        print("Rendering a window in Windows style.")

# Concrete Product C2
class MacOSWindow(Window):
    def paint(self):
        print("Rendering a window in MacOS style.")

# Client code
class Application:
    def __init__(self, factory: GUIFactory):
        self.button = factory.create_button()
        self.checkbox = factory.create_checkbox()
        self.window = factory.create_window()

    def paint(self):
        self.button.paint()
        self.checkbox.paint()
        self.window.paint()

# Depending on the OS, you'd instantiate the appropriate factory
factory = WindowsGUIFactory()  # or MacOSGUIFactory()
app = Application(factory)
app.paint()

📌 In this example, GUIFactory is the Abstract Factory that declares methods for creating a
family of GUI elements (buttons, checkboxes, windows).

📌 WindowsGUIFactory and MacOSGUIFactory are Concrete Factories that implement the Abstract Factory interface and produce GUI elements tailored for Windows and MacOS, respectively.

📌 Button , Checkbox , and Window are abstract products, and their concrete implementations
( WindowsButton , MacOSButton , etc.) define how these elements should look and behave on each
OS.

📌 The Application class, which represents the client in this scenario, uses the factory to create
and interact with GUI elements. Depending on which factory is provided (Windows or MacOS), the
application will render the appropriate GUI elements.

This design ensures that the application remains decoupled from the specific GUI elements of an
OS, making it easier to add support for new OSs in the future.

Let's break down how the provided GUI elements example adheres to the principles and
requirements of the Abstract Factory design pattern:

📌 Interface for Creating Families of Objects: - The GUIFactory class serves as the Abstract
Factory. It declares an interface ( create_button , create_checkbox , and create_window ) for
creating a family of GUI elements. These families are the GUI components tailored for different
operating systems.

# Abstract Factory
class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass

    @abstractmethod
    def create_checkbox(self):
        pass

    @abstractmethod
    def create_window(self):
        pass

📌 Concrete Factories Implement the Interface: - The WindowsGUIFactory and MacOSGUIFactory classes are Concrete Factories. They implement the GUIFactory interface, producing objects that belong to the GUI family tailored for Windows and MacOS, respectively.

# Concrete Factory 1: Windows GUI elements
class WindowsGUIFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()

    def create_checkbox(self):
        return WindowsCheckbox()

    def create_window(self):
        return WindowsWindow()

# Concrete Factory 2: MacOS GUI elements
class MacOSGUIFactory(GUIFactory):
    def create_button(self):
        return MacOSButton()

    def create_checkbox(self):
        return MacOSCheckbox()

    def create_window(self):
        return MacOSWindow()

📌 Products Share a Common Interface: - The Button , Checkbox , and Window abstract classes
define common interfaces for all GUI elements of their type. Concrete classes like WindowsButton ,
MacOSButton , WindowsCheckbox , MacOSCheckbox , etc., implement these interfaces. This ensures
that the GUI elements are interchangeable within their respective families, and the application can
use them without knowing their concrete implementations.

# Concrete Product A1
class WindowsButton(Button):
    def paint(self):
        print("Rendering a button in Windows style.")

# Concrete Product A2
class MacOSButton(Button):
    def paint(self):
        print("Rendering a button in MacOS style.")

📌 Client Code Isolated from Concrete Products: - The Application class (acting as the client)
interacts only with the abstract factory ( GUIFactory ) and the product interfaces ( Button ,
Checkbox , Window ). It doesn't instantiate concrete classes directly, ensuring a decoupling from
the specific GUI elements of an OS.

📌 Ease of Extensibility: - To introduce a new OS or a new type of GUI element, you'd create new
concrete factories and products that implement the existing interfaces. This design ensures that
the system remains extensible without modifying existing code. For instance, adding support for a
Linux GUI would involve creating a LinuxGUIFactory and associated concrete products like
LinuxButton .

📌 Consistency Among Products: - The WindowsGUIFactory and MacOSGUIFactory ensure that the GUI elements they create are consistent and can work seamlessly within their respective OS contexts. This ensures that the look and feel of the application remain consistent across its components.

📌 Separation of Concerns: - The object creation logic is separated from the business logic. The
Application class doesn't need to know how to instantiate GUI elements; it delegates this
responsibility to the abstract factory. This separation makes the codebase easier to manage, test,
and extend.

📌 Single Responsibility Principle: - Each concrete factory ( WindowsGUIFactory , MacOSGUIFactory ) is responsible for creating objects of a single family (either Windows or MacOS GUI elements). This ensures that each class has a single reason to change, adhering to the Single Responsibility Principle.

📌 Open/Closed Principle: - The design is open for extension but closed for modification. New
types of GUI elements or new OS support can be added without altering the existing factories or
product interfaces. This ensures that the system remains adaptable to future requirements
without necessitating changes to established code.

In summary, the provided GUI elements example demonstrates a well-structured use of the
Abstract Factory pattern. It ensures that the application remains decoupled from specific GUI
implementations, promotes consistency across GUI components, and maintains a high level of
modularity and extensibility.

🐍🚀 The builder design pattern in Python 🐍🚀

It is useful for managing objects that consist of multiple parts that need to be implemented
sequentially. By decoupling the construction of an object and its representation, the builder
pattern allows us to reuse a construction multiple times.

Imagine that we want to create an object that is composed of multiple parts and the composition needs to be done step by step. The object is not complete unless all its parts are fully created. That's where the builder design pattern can help us. The builder pattern separates the construction of a complex object from its representation. By keeping the construction separate from the representation, the same construction can be used to create several different representations.

📌 When might we require this design pattern? Envision a scenario where object generation
involves a series of steps and consists of nested components with various data types. In such
contexts, the builder design pattern proves invaluable, allowing us to navigate this intricate task
efficiently.

📌 Many design patterns are aptly named from a linguistic perspective, and the builder design
pattern stands as a testament to this. The term "build" is noteworthy. The emphasis is on the
"building" aspect rather than merely "creating." The primary focus of this pattern revolves around
the object's creation process.

📌 Visualizing the builder design pattern can be likened to an assembly line. In this analogy, the
focus is on the assembly rather than the components. The assembly orchestrates the culmination
of the end product, irrespective of the specific parts utilized. Depending on design configurations,
varying outcomes can be produced from the same line, highlighting the importance of effective
abstraction.

📌 We're spared the task of recalling all property names, structures, data types, and routes during
object instantiation. The Builder pattern abstracts this procedure. As a result, we can sidestep the
intricate specifics of any part of a multifaceted object.

📌 The Builder pattern may bear a resemblance to factory patterns, yet they diverge. The Builder
oversees the object's creation journey. In contrast, Factory or Abstract Factory design patterns
assume the role of object generation. These patterns might, for instance, employ the Builder
Pattern to steer the creation process.

Use Cases and Explanations:
📌 Decoupling Construction from Representation: In many real-world scenarios, the process of
constructing an object is distinct from the object's representation. For instance, consider the
process of building a house. The steps to build the house (laying the foundation, erecting walls,
installing the roof) are the same, but the final representation (design, color, interior) can vary. The
builder pattern allows us to encapsulate these construction steps and use them to create various
representations.

📌 Managing Complex Initializations: Sometimes, objects require multiple steps for initialization. Directly initializing such objects can lead to cumbersome and error-prone code. The builder pattern provides a clear and concise way to initialize such objects step by step.

📌 Fluent Interface: The builder pattern often provides a fluent interface, where methods return
the builder object itself, allowing for method chaining. This makes the client code more readable
and intuitive.

📌 Immutable Objects: Once the object is built, it can be made immutable, ensuring that its state
cannot be changed. This is particularly useful in multi-threaded environments where immutability
can prevent potential synchronization issues.
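One possible sketch of this idea uses a frozen dataclass as the immutable product, with the builder holding the mutable working state until build() is called. The Report and ReportBuilder names are invented for illustration:

```python
from dataclasses import dataclass

# An immutable product: attributes cannot be reassigned after build().
@dataclass(frozen=True)
class Report:
    title: str
    rows: tuple

class ReportBuilder:
    def __init__(self):
        # Mutable working state lives only inside the builder.
        self._title = "Untitled"
        self._rows = []

    def title(self, text):
        self._title = text
        return self

    def add_row(self, row):
        self._rows.append(row)
        return self

    def build(self):
        # Freeze the working state into an immutable object.
        return Report(self._title, tuple(self._rows))

report = ReportBuilder().title("Q1 Sales").add_row("North: 10").build()
print(report.title)
```

Any attempt to assign to `report.title` afterwards raises a FrozenInstanceError, which is exactly the thread-safety property described above.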

Real-life Use-case Code:

Imagine you're building a system for a car manufacturing company. Cars have multiple parts and configurations, and the process to assemble them is sequential. Let's use the builder pattern to construct a car.

class Car:
    def __init__(self):
        self._parts = []

    def add(self, part):
        self._parts.append(part)

    def list_parts(self):
        return ", ".join(self._parts)

class CarBuilder:
    def __init__(self):
        self._car = Car()

    def add_engine(self):
        self._car.add("Engine")
        return self

    def add_wheels(self):
        self._car.add("Wheels")
        return self

    def add_doors(self):
        self._car.add("Doors")
        return self

    def build(self):
        return self._car

# Client code
builder = CarBuilder()
car = builder.add_engine().add_wheels().add_doors().build()
print(car.list_parts())
# Engine, Wheels, Doors

Description:
📌 In the above code, the Car class represents the product we want to build. It has a method
add to add parts and a method list_parts to list all the added parts.

📌 The CarBuilder class is our builder. It provides methods to add different parts to the car
( add_engine , add_wheels , add_doors ). Each of these methods returns the builder object itself,
allowing for method chaining.

📌 In the client code, we create an instance of the CarBuilder , sequentially add parts using the
fluent interface, and finally call the build method to get the constructed Car object.

📌 The advantage here is that the construction process of the car is abstracted away from its
representation. We can easily change the way a car is built without affecting the car's
representation or the client code.

In essence, the builder pattern provides a clear separation of concerns, making the code modular
and maintainable. It's particularly useful when an object needs to be created with many optional
components or configurations.
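To make the reuse point concrete, here is one possible sketch in which a single construction sequence drives two different builders; the TruckBuilder subclass and the assemble helper are hypothetical additions to the car example:

```python
class Car:
    def __init__(self):
        self._parts = []

    def add(self, part):
        self._parts.append(part)

    def list_parts(self):
        return ", ".join(self._parts)

class CarBuilder:
    def __init__(self):
        self._car = Car()

    def add_engine(self):
        self._car.add("Engine")
        return self

    def add_wheels(self):
        self._car.add("Wheels")
        return self

    def add_doors(self):
        self._car.add("Doors")
        return self

    def build(self):
        return self._car

class TruckBuilder(CarBuilder):
    # Same step, different part: the construction sequence is reused.
    def add_engine(self):
        self._car.add("Diesel Engine")
        return self

def assemble(builder):
    # One construction recipe, usable with any compatible builder.
    return builder.add_engine().add_wheels().add_doors().build()

print(assemble(CarBuilder()).list_parts())
print(assemble(TruckBuilder()).list_parts())
```

The assemble function plays the role that a "director" often plays in builder implementations: it owns the construction order, while each builder owns the representation.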

Let's see an example WITH and then WITHOUT the Builder design pattern in Python

📌 Without Builder Design Pattern
Consider a scenario where we want to create a Computer object. A computer has several
components like CPU, RAM, storage, graphics card, etc. Let's see how one might create such an
object without using the builder pattern:

class Computer:
    def __init__(self, CPU, RAM, storage, graphics_card, power_supply, motherboard):
        self.CPU = CPU
        self.RAM = RAM
        self.storage = storage
        self.graphics_card = graphics_card
        self.power_supply = power_supply
        self.motherboard = motherboard

    def display(self):
        return (f"Computer with {self.CPU} CPU, {self.RAM} RAM, {self.storage} "
                f"storage, {self.graphics_card} graphics card, {self.power_supply} "
                f"power supply, and {self.motherboard} motherboard.")

# Creating a computer object
computer = Computer("Intel i9", "32GB", "1TB SSD", "NVIDIA RTX 3090", "750W", "ASUS ROG")
print(computer.display())

📌 Issues with the above approach:


1. The constructor of the Computer class is too long and can be error-prone. If we miss the
order of the arguments, we might end up initializing the wrong attributes.

2. If we want to create a computer with only a few components and leave out others, this design
doesn't allow for that flexibility.

3. The construction process is tightly coupled with the representation of the computer.

📌 With Builder Design Pattern

To solve the above issues, we can use the builder design pattern. Here's how we can refactor the code:

class Computer:
def __init__(self):
self.components = {}

def set_CPU(self, CPU):


self.components["CPU"] = CPU

def set_RAM(self, RAM):


self.components["RAM"] = RAM

def set_storage(self, storage):


self.components["storage"] = storage

def set_graphics_card(self, graphics_card):


self.components["graphics_card"] = graphics_card

def set_power_supply(self, power_supply):


self.components["power_supply"] = power_supply

def set_motherboard(self, motherboard):


self.components["motherboard"] = motherboard

def display(self):
return ", ".join([f"{key} with {value}" for key, value in
self.components.items()])

class ComputerBuilder:
    def __init__(self):
        self.computer = Computer()

    def add_CPU(self, CPU):
        self.computer.set_CPU(CPU)
        return self

    def add_RAM(self, RAM):
        self.computer.set_RAM(RAM)
        return self

    def add_storage(self, storage):
        self.computer.set_storage(storage)
        return self

    def add_graphics_card(self, graphics_card):
        self.computer.set_graphics_card(graphics_card)
        return self

    def add_power_supply(self, power_supply):
        self.computer.set_power_supply(power_supply)
        return self

    def add_motherboard(self, motherboard):
        self.computer.set_motherboard(motherboard)
        return self

    def build(self):
        return self.computer

# Using the builder to create a computer object
builder = ComputerBuilder()
computer = (builder.add_CPU("Intel i9")
                   .add_RAM("32GB")
                   .add_storage("1TB SSD")
                   .add_graphics_card("NVIDIA RTX 3090")
                   .build())

print(computer.display())

📌 Advantages of using the Builder Pattern:


1. The construction process is decoupled from the representation of the Computer object.

2. The builder provides a clear and fluent interface to create a Computer object step by step.

3. It's flexible; we can choose which components to add and in which order.

4. The code is more maintainable and less error-prone.

By using the builder pattern, we've made the process of creating a Computer object more
intuitive, flexible, and less prone to errors.

Let's break down how the refactored code with the builder
design pattern addresses the issues of the original code.
📌 Issue 1: The constructor of the Computer class is too long and can be error-prone.

Solution: In the refactored code, the Computer class no longer has a long constructor.
Instead, each component has its own setter method. The ComputerBuilder class provides a
clear interface to add components to the Computer object. This way, there's no need to
remember the order of arguments, reducing the chances of errors.

📌 Issue 2: If we want to create a computer with only a few components and leave out
others, the original design doesn't allow for that flexibility.

Solution: With the builder pattern, we have the flexibility to add only the components we
want. If we decide not to add a certain component, we simply don't call its corresponding
method in the builder. For instance, if we don't want to add a graphics card, we can omit the
.add_graphics_card() method when constructing the computer. This provides a more
flexible approach to object creation.

📌 Issue 3: The construction process is tightly coupled with the representation of the
computer.

Solution: The builder pattern decouples the construction process from the representation. In
the refactored code, the Computer class is only responsible for representing the computer
and storing its components. The construction process is handled by the ComputerBuilder
class. This separation of concerns makes the code more modular and easier to maintain.

📌 Additional Benefits of the Builder Pattern in the Refactored Code:


1. Fluent Interface: The builder pattern provides a fluent interface, allowing for method
chaining. This makes the code more readable and intuitive. For instance, when creating a
computer, we can chain methods like .add_CPU().add_RAM().add_storage() , making the
construction process clear and concise.

2. Scalability: If we want to introduce new components in the future, we can easily do so by


adding new methods to the Computer and ComputerBuilder classes without altering the
existing code. This ensures that our design is scalable and adheres to the open/closed
principle (software entities should be open for extension but closed for modification).

3. Clear Separation of Responsibilities: The builder pattern ensures that each class has a
single responsibility. The Computer class is only responsible for representing a computer,
while the ComputerBuilder class handles the construction process. This separation makes
the code more organized and adheres to the single responsibility principle.
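Point 2 can be made concrete: a new component can be introduced purely by extension, for example by subclassing the builder. The sketch below uses trimmed stand-ins for Computer and ComputerBuilder so it runs on its own; the cooling component is hypothetical:

```python
class Computer:
    # Minimal stand-in for the Computer class above
    def __init__(self):
        self.components = {}

class ComputerBuilder:
    # Minimal stand-in for the ComputerBuilder class above
    def __init__(self):
        self.computer = Computer()

    def add_CPU(self, CPU):
        self.computer.components["CPU"] = CPU
        return self

    def build(self):
        return self.computer

class CooledComputerBuilder(ComputerBuilder):
    # New capability added without modifying any existing class
    def add_cooling(self, cooling):
        self.computer.components["cooling"] = cooling
        return self

computer = CooledComputerBuilder().add_CPU("Intel i9").add_cooling("liquid").build()
print(computer.components)  # {'CPU': 'Intel i9', 'cooling': 'liquid'}
```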

In summary, by implementing the builder design pattern, we've addressed the issues of the
original code, making it more flexible, maintainable, and less error-prone. The builder pattern
provides a clear and intuitive interface for object construction, ensuring that the code is scalable
and adheres to good software design principles.

Merge Classes using the Builder Design Pattern


Merging classes using the Builder Design Pattern involves creating a builder that can take multiple
instances of different classes and combine their attributes or methods into a new, merged class.
This can be useful in scenarios where you want to create composite objects from multiple sources
without altering the original classes.

Let's walk through the process step by step:

📌 Understanding the Requirement: Suppose you have two classes, ClassA and ClassB . You
want to merge attributes and methods from both classes into a new class, MergedClass .

📌 Designing the Builder: The builder should be able to:

1. Accept instances of ClassA and ClassB.
2. Extract attributes and methods from these instances.
3. Construct a new MergedClass with the combined attributes and methods.

📌 Implementation:

class ClassA:
    def __init__(self, attr_a):
        self.attr_a = attr_a

    def method_a(self):
        return f"Method A called with attribute {self.attr_a}"

class ClassB:
    def __init__(self, attr_b):
        self.attr_b = attr_b

    def method_b(self):
        return f"Method B called with attribute {self.attr_b}"

class MergedClassBuilder:
    def __init__(self):
        self.attrs = {}
        self.methods = {}

    def add_class_a(self, instance_a):
        self.attrs['attr_a'] = instance_a.attr_a
        self.methods['method_a'] = instance_a.method_a
        return self

    def add_class_b(self, instance_b):
        self.attrs['attr_b'] = instance_b.attr_b
        self.methods['method_b'] = instance_b.method_b
        return self

    def build(self):
        merged = type("MergedClass", (), {})
        for attr, value in self.attrs.items():
            setattr(merged, attr, value)
        for method_name, method in self.methods.items():
            setattr(merged, method_name, method)
        return merged

# Usage:
instance_a = ClassA("Attribute A")
instance_b = ClassB("Attribute B")

builder = MergedClassBuilder()
MergedClass = builder.add_class_a(instance_a).add_class_b(instance_b).build()

merged_instance = MergedClass()

print(merged_instance.method_a())
print(merged_instance.method_b())

And I will get the following output:

Method A called with attribute Attribute A
Method B called with attribute Attribute B
📌 Explanation:
1. ClassA and ClassB are two simple classes with one attribute and one method each.

2. MergedClassBuilder is designed to merge instances of these classes. It has methods


add_class_a and add_class_b to add instances of ClassA and ClassB , respectively.

3. The build method of the builder creates a new class, MergedClass , with combined
attributes and methods from the added instances.

4. In the usage section, instances of ClassA and ClassB are created and added to the builder.
The builder then creates the MergedClass , and we instantiate and use it.

This approach allows for dynamic merging of classes using the builder pattern. It's a flexible way
to combine attributes and methods from multiple classes into a single class without modifying the
original classes.
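The trick that makes this work is the three-argument form of the built-in type(name, bases, namespace), which creates a class object at runtime. A tiny standalone illustration:

```python
# type(name, bases, namespace) builds a class dynamically --
# roughly equivalent to writing `class Point: ...` by hand
Point = type("Point", (), {"x": 1, "describe": lambda self: f"x={self.x}"})

p = Point()
print(Point.__name__)  # Point
print(p.describe())    # x=1
```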

Let's see another example WITHOUT and then WITH the "Builder design pattern in Python"
Let's consider another real-world example: Creating a comprehensive profile for an employee in a
large corporate organization.

Non-Builder Approach:
The direct initialization of such profiles can be cumbersome given the plethora of attributes an
employee might have.

class EmployeeProfile:
def __init__(self, name, age, address, phone, email, position, department,
salary, manager, hire_date, previous_jobs):
self.name = name
self.age = age
self.address = address
self.phone = phone
self.email = email
self.position = position
self.department = department
self.salary = salary
self.manager = manager
self.hire_date = hire_date
self.previous_jobs = previous_jobs

def display(self):
# Logic to display profile details.
pass

# Usage:
john_profile = EmployeeProfile("John Doe", 30, "123 St, City", "123-456-7890",
                               "john@example.com", "Engineer", "IT", 60000,
                               "Jane Smith", "2022-01-01", ["Developer", "Intern"])

📌 Issues:
1. Lengthy Constructor: As evident, the constructor is lengthy and hard to manage.
2. Inflexibility: Not all attributes might be available at once when creating a profile.

3. Error-prone: Easy to misplace arguments given the large number of parameters.

4. Difficult to Extend: Adding more attributes to the profile would mean changing the
constructor and all instantiations.

Builder Pattern Approach:


Refactoring the above example using the Builder pattern:

class EmployeeProfileBuilder:
    def __init__(self):
        self.profile_data = {}

    def set_name(self, value):
        self.profile_data["name"] = value
        return self

    def set_age(self, value):
        self.profile_data["age"] = value
        return self

    def set_address(self, value):
        self.profile_data["address"] = value
        return self

    # ... similar methods for other attributes ...

    def build(self):
        return FinalEmployeeProfile(self.profile_data)

class FinalEmployeeProfile:
def __init__(self, profile_data):
self.profile_data = profile_data

def display(self):
# Logic to display profile details.
pass

# Usage:
builder = EmployeeProfileBuilder()
john_profile = (builder.set_name("John Doe")
.set_age(30)
.set_address("123 St, City")
# ... set other attributes ...
.build())

📌 Benefits of the Builder Pattern Approach:


1. Structured Initialization: Each attribute has its setter method, making it clear what you're
setting.

2. Flexibility: Can set only the attributes that are available at the time of profile creation.

3. Reduced Errors: The self-documenting nature of named methods minimizes the chance of
errors.

4. Easy Extension: If a new attribute is to be added to the profile in the future, just add a new
setter method to the builder without affecting existing client code.

By leveraging the Builder pattern in this context, we've streamlined the process of constructing an
employee profile, making the code more manageable, readable, and scalable.
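As a side note, when a builder needs many near-identical setters (the "... similar methods ..." placeholder above), one Pythonic variation is to synthesize them on the fly with __getattr__. This is a sketch of an alternative, not the pattern's canonical form, and it trades explicitness (and IDE autocompletion) for brevity:

```python
class EmployeeProfileBuilder:
    def __init__(self):
        self.profile_data = {}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so real
        # attributes like profile_data and build are unaffected
        if name.startswith("set_"):
            key = name[len("set_"):]
            def setter(value):
                self.profile_data[key] = value
                return self  # keep the fluent, chainable interface
            return setter
        raise AttributeError(name)

    def build(self):
        # Stand-in for FinalEmployeeProfile, to keep the sketch short
        return dict(self.profile_data)

profile = (EmployeeProfileBuilder()
           .set_name("John Doe")
           .set_age(30)
           .set_department("IT")
           .build())
print(profile)  # {'name': 'John Doe', 'age': 30, 'department': 'IT'}
```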

Let's actually analyze in detail the benefits of the refactored code after introducing the "Builder design pattern in Python"
Let's break down the advantages of the refactored code using the Builder design pattern over the
original approach for the EmployeeProfile example.

1. Lengthy Constructor:
📌 Original Issue: In the initial approach, the EmployeeProfile class had a constructor with a
significant number of parameters. With such a setup, it becomes cumbersome to remember the
order of parameters, and it can also lead to a situation where some of them might be mistakenly
swapped or missed.

📌 Solution with Builder Pattern: The builder pattern breaks down the construction process by
providing individual methods for each attribute ( set_name , set_age , set_address , etc.). This
provides a clear structure for setting up the employee profile without having to remember or deal
with a long list of constructor arguments.

2. Inflexibility in Object Initialization:


📌 Original Issue: The direct initialization method was rigid. If one wished to create an
EmployeeProfile without certain attributes, the entire constructor and all its usages would have
to be adjusted.

📌 Solution with Builder Pattern: With the builder, one can choose which attributes to set. This
is handy when not all details about an employee are available immediately. The profile can be
built incrementally, setting only the available attributes, leading to a more flexible initialization
process.

3. Error-Prone Initialization:
📌 Original Issue: Due to the many parameters in the original constructor, there's a heightened
risk of errors. It's easy to misplace arguments or mistakenly swap their order, which can lead to
incorrect object initialization.

📌 Solution with Builder Pattern: The use of descriptive method names for setting attributes
ensures that the developer knows exactly which attribute is being set at each step. This
significantly reduces the risk of accidentally setting incorrect values.
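It is worth noting that Python also offers a lighter-weight, language-level complement here: keyword-only parameters (everything after a bare `*` must be passed by name). This does not replace the builder -- it gives no incremental construction or fluent chaining -- but it removes the argument-order hazard on its own:

```python
class EmployeeProfile:
    # The bare `*` forces every argument to be passed by keyword,
    # so call sites are self-documenting and order-independent
    def __init__(self, *, name, age=None, address=None):
        self.name = name
        self.age = age
        self.address = address

p = EmployeeProfile(age=30, name="John Doe")  # order is irrelevant
print(p.name, p.age, p.address)  # John Doe 30 None
```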

4. Scalability and Maintenance:
📌 Original Issue: If the organization decided to store additional details about an employee, it
would require changes to the constructor and all places where the EmployeeProfile was
instantiated.

📌 Solution with Builder Pattern: Introducing new attributes becomes straightforward with the
builder pattern. One can simply add a new setter method in the EmployeeProfileBuilder class.
Existing code where profiles are created remains unaffected, ensuring seamless integration of
new features without disturbing existing functionalities.

5. Readability:
📌 Original Issue: With a long list of parameters, it becomes hard to discern what each
parameter signifies, especially if they're of the same data type.

📌 Solution with Builder Pattern: The refactored approach, with its fluent interface, provides a
step-by-step, readable structure for creating an object. The sequence of method calls, due to their
descriptive names, provides clarity and can be read almost like a series of straightforward
instructions.

In conclusion, the Builder design pattern brings a plethora of advantages when constructing
objects with numerous attributes or complex initialization steps. By providing a clear separation
between the construction process and the final representation, it ensures flexibility, robustness,
and maintainability.

🐍🚀 Chain of Responsibility Design pattern 🐍🚀
📌 The Chain of Responsibility pattern is a behavioral design pattern that allows an object to pass
a request through a chain of potential handlers until an object handles it or the end of the chain is
reached. It decouples the sender from the receiver by letting more than one object handle a
request.

📌 Use Cases: - Event handling systems where events can be handled by multiple handlers, and
handlers have a priority or a logic to decide if they should handle the event or pass it on. -
Middleware in web frameworks where each middleware processes a request and then passes it to
the next middleware in the chain. - Input validation systems where various validation checks are
applied one after the other.

📌 The primary advantage of this pattern is that it reduces the coupling between the sender of a
request and its receivers. It also allows for dynamic addition or removal of responsibilities from
objects.
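The middleware use case can be sketched in a few lines: each middleware either handles the request itself or forwards it to the next callable in the chain. The function names and request shape below are illustrative, not taken from any specific framework:

```python
def logging_middleware(request, next_handler):
    # Always does its own work, then forwards the request
    print(f"request path: {request['path']}")
    return next_handler(request)

def auth_middleware(request, next_handler):
    # Either handles (rejects) the request itself, or forwards it
    if not request.get("user"):
        return "401 Unauthorized"
    return next_handler(request)

def app(request):
    return "200 OK"

# Compose the chain: logging -> auth -> app
def pipeline(request):
    return logging_middleware(request, lambda r: auth_middleware(r, app))

print(pipeline({"path": "/admin", "user": "alice"}))  # 200 OK
print(pipeline({"path": "/admin"}))                   # 401 Unauthorized
```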

Let's see an example WITHOUT and then WITH the "Chain of Responsibility Design Pattern in Python"
📌 Without Chain of Responsibility Design Pattern:
Consider a scenario where we have a system that processes different types of files. We have a
single FileProcessor class that tries to handle all types of files.

class FileProcessor:
def process(self, file_type, content):
if file_type == "text":
print(f"Processing text file with content: {content}")
elif file_type == "image":
print(f"Processing image file with content: {content}")
elif file_type == "audio":
print(f"Processing audio file with content: {content}")
else:
print(f"File type {file_type} not supported")

# Usage
processor = FileProcessor()
processor.process("text", "Hello World!")
processor.process("video", "Video Content")

📌 Issues with the above approach:


1. The FileProcessor class is responsible for handling all file types, making it less modular and
harder to maintain.

2. If we need to add support for a new file type, we have to modify the FileProcessor class,
violating the Open/Closed Principle.

3. If there's a specific order in which files need to be processed, it's hard to manage with the
current structure.

📌 With Chain of Responsibility Design Pattern:


We'll refactor the code to have different handlers for each file type. Each handler will have a
reference to the next handler in the chain. If a handler can't process the file, it passes the request
to the next handler.

class Handler:
    def __init__(self, next_handler=None):
        self.next_handler = next_handler

    def handle(self, file_type, content):
        pass

class TextFileHandler(Handler):
def handle(self, file_type, content):
if file_type == "text":
print(f"Processing text file with content: {content}")
elif self.next_handler:
self.next_handler.handle(file_type, content)

class ImageFileHandler(Handler):
def handle(self, file_type, content):
if file_type == "image":
print(f"Processing image file with content: {content}")
elif self.next_handler:
self.next_handler.handle(file_type, content)

class AudioFileHandler(Handler):
def handle(self, file_type, content):
if file_type == "audio":
print(f"Processing audio file with content: {content}")
elif self.next_handler:
self.next_handler.handle(file_type, content)

# Setting up the chain
audio_handler = AudioFileHandler()
image_handler = ImageFileHandler(audio_handler)
text_handler = TextFileHandler(image_handler)

# Usage
text_handler.handle("text", "Hello World!")
text_handler.handle("video", "Video Content")

📌 Advantages of using the Chain of Responsibility:


1. Each handler is now responsible for a single type of file, making the system more modular.

2. New file types can be added without modifying existing handlers.

3. The order of processing can be easily managed by rearranging the chain.

📌 Conclusion:
The Chain of Responsibility pattern allows us to decouple the sender from the receiver and
provides a way to pass a request through a set of handlers. It promotes single responsibility and
open/closed principles, making the system more flexible and maintainable.

Let's delve into the details of how the refactored code, which implements the Chain of Responsibility design pattern, addresses the issues present in the original code.
📌 Issue 1: Lack of Modularity and Maintainability
Original Code: In the initial code, the FileProcessor class was responsible for handling all
file types. This means that every time a new file type needs to be added or an existing one
needs to be modified, you'd have to change the FileProcessor class. This makes the class
less modular and harder to maintain.

Refactored Code: With the Chain of Responsibility pattern, each file type has its own
dedicated handler ( TextFileHandler , ImageFileHandler , AudioFileHandler ). This
separation ensures that each handler class has a single responsibility, making the system
more modular. If there's a need to modify how a particular file type is processed, only the
corresponding handler needs to be touched.

📌 Issue 2: Violation of the Open/Closed Principle


Original Code: The Open/Closed Principle states that software entities should be open for
extension but closed for modification. In the original code, to support a new file type, you'd
have to modify the FileProcessor class, which violates this principle.

Refactored Code: With the Chain of Responsibility pattern, adding support for a new file type
doesn't require modifying existing handlers. Instead, you'd create a new handler for the new
file type and simply link it into the existing chain. This ensures that the system is extendable
without needing to change existing code, adhering to the Open/Closed Principle.
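For instance, supporting a hypothetical "video" file type takes one new class and one extra link when wiring the chain; no existing handler is edited. The sketch below repeats a minimal Handler base and one handler so it runs standalone, and returns strings instead of printing purely so the results are easy to inspect:

```python
class Handler:
    def __init__(self, next_handler=None):
        self.next_handler = next_handler

    def handle(self, file_type, content):
        if self.next_handler:
            return self.next_handler.handle(file_type, content)
        return f"File type {file_type} not supported"

class TextFileHandler(Handler):
    def handle(self, file_type, content):
        if file_type == "text":
            return f"Processing text file with content: {content}"
        return super().handle(file_type, content)

class VideoFileHandler(Handler):
    # The new handler: pure extension, existing classes untouched
    def handle(self, file_type, content):
        if file_type == "video":
            return f"Processing video file with content: {content}"
        return super().handle(file_type, content)

# Re-wire the chain to include the new handler at the end
chain = TextFileHandler(VideoFileHandler())
print(chain.handle("video", "Video Content"))  # Processing video file ...
print(chain.handle("pdf", "PDF Content"))      # File type pdf not supported
```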

📌 Issue 3: Difficulty in Managing Processing Order


Original Code: If there's a specific order in which files need to be processed, managing this
order would be challenging with the initial structure. You'd have to rearrange conditions and
ensure that the logic for each file type is correctly placed.

Refactored Code: With the Chain of Responsibility pattern, managing the order of processing
becomes straightforward. The order is determined by how the chain of handlers is
constructed. If you need to change the order, you can easily rearrange the chain without
touching the internal logic of individual handlers.

📌 Additional Benefits:
Flexibility: The Chain of Responsibility pattern provides flexibility in distributing
responsibilities among handler objects. If, in the future, a certain handler needs to perform
additional checks or operations before deciding whether to handle a request or pass it on, it
can be done without affecting other handlers.

Decoupling: The sender of a request (in this case, the client code calling the handle method)
is decoupled from its receivers (the chain of handlers). The client only interacts with the first
handler in the chain and doesn't need to know about the internal structure of the chain.

In conclusion, the Chain of Responsibility design pattern provides a robust solution to the issues
present in the original code, making the system more modular, maintainable, and in line with solid
software design principles.

📌 Real-life Use-Case Code:


Imagine a system where we process a user's request to access a resource. The request goes
through several checks: authentication, authorization, and logging.

class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        if self.successor:
            self.successor.handle(request)

class AuthenticationHandler(Handler):
def handle(self, request):
if request.get("token") == "VALID_TOKEN":
print("Authentication successful!")
super().handle(request)
else:
print("Authentication failed!")

class AuthorizationHandler(Handler):
def handle(self, request):
if request.get("role") == "ADMIN":
print("Authorization successful!")
super().handle(request)
else:
print("Authorization failed!")

class LoggingHandler(Handler):
def handle(self, request):
print(f"Logging request from user: {request.get('user_id')}")
super().handle(request)

# Client code
request = {
"user_id": 123,
"token": "VALID_TOKEN",
"role": "ADMIN"
}

chain = LoggingHandler(AuthorizationHandler(AuthenticationHandler()))
chain.handle(request)

📌 Description of the Example Code:


We have a base Handler class that can optionally have a successor. If the current handler
can't handle the request, it passes it to its successor.

AuthenticationHandler checks if the provided token is valid.

AuthorizationHandler checks if the user has the required role.

LoggingHandler logs the user's request.

In the client code, a request is created with a user ID, token, and role. The request is then
passed through a chain of handlers: Logging -> Authorization -> Authentication.

If the request passes through all the handlers without any issues, it means it's been
successfully processed. If any handler can't process the request, it won't pass it to its
successor.

📌 Under the hood, this pattern promotes the Single Responsibility Principle. Each handler has a
single responsibility, and it either handles the request or passes it to the next handler. This makes
the system extensible and easy to maintain. If a new check needs to be added, a new handler can
be introduced without modifying the existing code.
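To watch the chain stop early, feed it a request with a bad token. Below is a condensed, self-contained variant of the handlers above; it returns strings instead of printing purely so the outcome is easy to inspect:

```python
class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        if self.successor:
            return self.successor.handle(request)
        return "Request fully processed"

class AuthorizationHandler(Handler):
    def handle(self, request):
        if request.get("role") != "ADMIN":
            return "Authorization failed!"   # chain stops here
        return super().handle(request)

class AuthenticationHandler(Handler):
    def handle(self, request):
        if request.get("token") != "VALID_TOKEN":
            return "Authentication failed!"  # chain stops here
        return super().handle(request)

chain = AuthorizationHandler(AuthenticationHandler())

print(chain.handle({"role": "ADMIN", "token": "BAD_TOKEN"}))
# -> Authentication failed!  (authorization passed, authentication did not)
print(chain.handle({"role": "ADMIN", "token": "VALID_TOKEN"}))
# -> Request fully processed
```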

Let's see how the above code example adheres to the principles and requirements of the Chain of Responsibility design pattern in Python
📌 Decoupling of Sender and Receiver: In the provided code, the client (or sender) that initiates
the request doesn't have any knowledge about which handler (or receiver) in the chain will
process the request. The client only interacts with the head of the chain ( LoggingHandler in this
case) and has no direct interaction with AuthenticationHandler or AuthorizationHandler .

📌 Chain of Handlers: The code establishes a clear chain of handlers. The LoggingHandler
passes the request to AuthorizationHandler , which in turn passes it to
AuthenticationHandler . This chain is established during the instantiation of the handlers:
chain = LoggingHandler(AuthorizationHandler(AuthenticationHandler()))

Here, the LoggingHandler is given AuthorizationHandler as its successor, and AuthorizationHandler is given AuthenticationHandler as its successor.

📌 Single Responsibility Principle: Each handler in the chain has a specific responsibility.

- AuthenticationHandler is solely responsible for checking the validity of the token.
- AuthorizationHandler checks the role of the user.
- LoggingHandler logs the request.

This ensures that each handler is only concerned with a specific task, making the code modular
and easy to modify or extend.

class AuthenticationHandler(Handler):
def handle(self, request):
if request.get("token") == "VALID_TOKEN":
print("Authentication successful!")
super().handle(request)
else:
print("Authentication failed!")

class AuthorizationHandler(Handler):
def handle(self, request):
if request.get("role") == "ADMIN":
print("Authorization successful!")
super().handle(request)
else:
print("Authorization failed!")

class LoggingHandler(Handler):
def handle(self, request):
print(f"Logging request from user: {request.get('user_id')}")
super().handle(request)

📌 Stopping or Continuing the Chain: One of the key aspects of the Chain of Responsibility
pattern is the ability of any handler in the chain to stop further processing. In the provided code, if
the AuthenticationHandler finds an invalid token, it prints "Authentication failed!" and doesn't
call its successor. Similarly, if the AuthorizationHandler finds an unauthorized role, it won't pass
the request to its successor.

📌 Dynamic Chain Configuration: The chain's configuration is dynamic and can be easily
changed without altering the internal logic of the handlers. For instance, if you wanted to add a
new handler or change the order of the existing handlers, you could do so by simply reconfiguring
the chain during instantiation, without needing to modify the handler classes themselves.

📌 Extensibility: If a new type of handling or check is required, a new handler class can be
created without altering the existing handlers. This new handler can then be integrated into the
chain as needed. For example, if there's a need to add a handler that checks for the user's region,
a RegionCheckHandler can be created and added to the chain.
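Such a RegionCheckHandler is one small class away. A hypothetical sketch (the minimal Handler base is repeated so it runs standalone, and the region field plus the allowed values are invented for illustration):

```python
class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        if self.successor:
            return self.successor.handle(request)
        return "ok"

class RegionCheckHandler(Handler):
    ALLOWED_REGIONS = {"EU", "US"}  # hypothetical policy

    def handle(self, request):
        if request.get("region") not in self.ALLOWED_REGIONS:
            return "Region check failed!"
        return super().handle(request)

# Slot the new check in front of the rest of the chain
# (here the rest of the chain is empty, for brevity)
chain = RegionCheckHandler()
print(chain.handle({"region": "US"}))    # ok
print(chain.handle({"region": "MARS"}))  # Region check failed!
```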

In summary, the provided code example adheres to the Chain of Responsibility pattern by
ensuring decoupling between the sender and receivers, maintaining a clear chain of handlers,
adhering to the Single Responsibility Principle, providing the ability to stop or continue the chain
based on conditions, allowing for dynamic chain configuration, and ensuring extensibility.

Example 2

Let's consider a more complex scenario: an e-commerce order processing system. When an order
is placed, it goes through various stages:

1. Validation: Ensure the order details are complete and valid.

2. Discount Application: Apply any eligible discounts to the order.

3. Stock Check: Ensure all items in the order are in stock.

4. Payment Processing: Process the payment for the order.

5. Shipping: If all previous steps are successful, prepare the order for shipping.

Here's how we can implement this using the Chain of Responsibility pattern:

class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, order):
        if self.successor:
            self.successor.handle(order)

class ValidationHandler(Handler):
def handle(self, order):
if order.get("address") and order.get("items"):
print("Order validation successful!")
super().handle(order)
else:
print("Order validation failed!")

class DiscountHandler(Handler):
def handle(self, order):
if order.get("loyalty_member"):
order["total"] *= 0.9 # Apply 10% discount
print("Discount applied!")
super().handle(order)

class StockCheckHandler(Handler):
def handle(self, order):
items_in_stock = True
for item in order.get("items", []):
if item["stock"] <= 0:
items_in_stock = False
print(f"Item {item['name']} is out of stock!")
break
        if items_in_stock:
            super().handle(order)

class PaymentHandler(Handler):
def handle(self, order):
if order.get("payment_method") == "credit_card" and order.get("total") <=
order.get("credit_limit"):
print("Payment processed successfully!")
super().handle(order)
else:
print("Payment processing failed!")

class ShippingHandler(Handler):
def handle(self, order):
print(f"Order for {order['address']} is ready for shipping!")

# Client code
order = {
"address": "123 Main St",
"items": [{"name": "laptop", "stock": 5}, {"name": "mouse", "stock": 10}],
"loyalty_member": True,
"payment_method": "credit_card",
"total": 1000,
"credit_limit": 1500
}

chain = ValidationHandler(
    DiscountHandler(StockCheckHandler(PaymentHandler(ShippingHandler())))
)
chain.handle(order)

📌 Description of the Example Code:


The ValidationHandler ensures that the order has a valid address and items.

The DiscountHandler checks if the user is a loyalty member and applies a discount if they
are.

The StockCheckHandler ensures all items in the order are in stock.

The PaymentHandler processes the payment, ensuring the user has enough credit.

The ShippingHandler prepares the order for shipping if all previous steps are successful.

Each handler in the chain has a specific responsibility and passes the order to the next handler if
its conditions are met. If any handler finds an issue (e.g., an item is out of stock), it won't pass the
order to its successor.
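Tracing the client code above: validation passes (address and items are present), the 10% loyalty discount drops the total from 1000 to 900, both items are in stock, and 900 is within the 1500 credit limit, so every handler forwards the order and the shipping message prints last. The discount-and-credit step in isolation:

```python
order = {"loyalty_member": True, "total": 1000, "credit_limit": 1500}

if order.get("loyalty_member"):
    order["total"] *= 0.9  # same 10% discount as DiscountHandler above

print(order["total"])                           # 900.0
print(order["total"] <= order["credit_limit"])  # True -> payment succeeds
```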

Let's see how the above code example adheres to the principles and requirements of the Chain of Responsibility design pattern in Python
📌 Decoupling of Sender and Receiver: The client that initiates the order processing doesn't
know which handler in the chain will process the order or in which sequence. The client only
interacts with the head of the chain ( ValidationHandler in this case) and remains decoupled
from the rest of the handlers. This is evident from the client code where the order is passed to the
chain without specifying individual handlers:

chain = ValidationHandler(DiscountHandler(StockCheckHandler(PaymentHandler(ShippingHandler()))))
chain.handle(order)

📌 Chain of Handlers: The code establishes a clear sequence of handlers. The ValidationHandler passes the order to DiscountHandler, which then passes it to StockCheckHandler, and so on. This chain is constructed during the instantiation of the handlers, ensuring a specific order of processing.

📌 Single Responsibility Principle: Each handler in the chain has a distinct responsibility:

ValidationHandler: Validates order details.

DiscountHandler: Applies eligible discounts.

StockCheckHandler: Checks stock availability for items.

PaymentHandler: Processes the payment.

ShippingHandler: Prepares the order for shipping.

This modular approach ensures that each handler focuses on one specific task, making the system organized and maintainable.

📌 Stopping or Continuing the Chain: Handlers in the chain have the discretion to stop further
processing based on certain conditions. For instance, if ValidationHandler finds the order
details incomplete, it won't pass the order to its successor. Similarly, if StockCheckHandler
identifies an out-of-stock item, it won't proceed to the PaymentHandler . This behavior is evident
in sections like:

if item["stock"] <= 0:
items_in_stock = False
print(f"Item {item['name']} is out of stock!")
break

📌 Dynamic Chain Configuration: The sequence and composition of the chain can be modified
without altering the internal logic of individual handlers. If a new processing step is needed (e.g., a
tax calculation handler), it can be added to the chain without modifying existing handler classes.

📌 Extensibility: The design allows for easy addition of new handlers. If there's a need to
introduce a new step in the order processing (e.g., a gift-wrapping handler), a new handler class
can be created and integrated into the chain seamlessly.
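As a minimal sketch of that extensibility, a new step can be wired into the chain without touching any existing handler. The base Handler below is a simplified stand-in for the one shown earlier, and GiftWrapHandler is a hypothetical new step:

```python
class Handler:
    # Simplified stand-in for the base handler shown earlier:
    # holds an optional successor and forwards the order to it.
    def __init__(self, successor=None):
        self._successor = successor

    def handle(self, order):
        if self._successor:
            self._successor.handle(order)

class GiftWrapHandler(Handler):
    # New processing step, added without modifying existing handlers.
    def handle(self, order):
        order["gift_wrapped"] = True
        super().handle(order)

class ShippingHandler(Handler):
    # Terminal step: does not forward further.
    def handle(self, order):
        order["shipped"] = True

# Re-wiring the chain is all that is needed to add the new step.
chain = GiftWrapHandler(ShippingHandler())
order = {"address": "123 Main St"}
chain.handle(order)
print(order["gift_wrapped"], order["shipped"])  # True True
```

Only the chain-construction line changes; every handler class stays closed to modification.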

In conclusion, the e-commerce order processing system code adheres to the Chain of
Responsibility pattern by ensuring a clear sequence of handlers, maintaining the Single
Responsibility Principle, allowing handlers to decide whether to continue or stop the chain, and
offering flexibility in chain configuration and extensibility.

🐍🚀 Command Design Pattern in Python. 🐍🚀

Command Design Pattern is a behavioral design pattern that turns a request into a stand-alone object that contains all information about the request. This transformation lets you pass requests as method arguments, delay or queue a request's execution, and support undoable operations.

📌 The Command Design Pattern is a behavioral pattern that encapsulates a request as an object,
thereby allowing users to parameterize clients with different requests, queue requests, and
support operations like undo and redo. It decouples the sender from the receiver.

1. Precise Explanations and Use Cases:


📌 Encapsulation of Requests: At its core, the Command Pattern is about encapsulating a
method invocation or request as an object. This means that instead of calling a method directly,
you create an object that represents that call and can be stored, passed around, and executed at
will.

📌 Decoupling: The pattern decouples the object that invokes the command (often referred to as
the sender) from the object that knows how to execute the command (the receiver). This
separation provides flexibility in terms of the operations that can be performed without having to
change existing code.

📌 Use Cases:
Menu Systems: Imagine a GUI application with a menu. Each menu item is a command.
When you select a menu item, it executes a command. By using the Command Pattern, you
can easily add new menu items without changing existing code.

Undo/Redo: Text editors or graphic design software often have undo and redo
functionalities. Each action on the document can be a command. When you want to undo an
action, you simply call the undo method on the command.

Task Scheduling: In systems where tasks need to be scheduled, like cron jobs, each task can
be a command. The scheduler simply executes the command when the time comes.

Macro Recording: Some software allows users to record a series of actions as a macro. Each
action is a command. The macro simply consists of a list of commands that can be played
back in order.

Let's see an example WITHOUT and then WITH the "Command Design Pattern in Python"
📌 Without Command Design Pattern:
Imagine you're building a simple remote control for electronic devices. Without the Command
Pattern, you might have something like this:

class Light:
def turn_on(self):
print("Light is ON")

def turn_off(self):
print("Light is OFF")

class Fan:
def start(self):
print("Fan is STARTED")

def stop(self):
print("Fan is STOPPED")

class RemoteControl:
def __init__(self):
self._buttons = {}

    def set_command(self, slot, device, command):
        self._buttons[slot] = (device, command)

    def press_button(self, slot):
        device, command = self._buttons.get(slot, (None, None))
if command == "on":
device.turn_on()
elif command == "off":
device.turn_off()
elif command == "start":
device.start()
elif command == "stop":
device.stop()

light = Light()
fan = Fan()

remote = RemoteControl()
remote.set_command(1, light, "on")
remote.set_command(2, light, "off")
remote.set_command(3, fan, "start")
remote.set_command(4, fan, "stop")

remote.press_button(1)

remote.press_button(3)

📌 Issues with the above approach:


1. The RemoteControl class is tightly coupled with the devices. If we add a new device or
command, we need to modify the RemoteControl class.

2. The logic for each command is embedded in the press_button method, making it less
modular and harder to extend.

3. Undoing a command or queuing commands becomes complex.

📌 With Command Design Pattern:


Let's refactor the above code using the Command Design Pattern.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

@abstractmethod
def undo(self):
pass

class LightOnCommand(Command):
def __init__(self, light):
self._light = light

def execute(self):
self._light.turn_on()

def undo(self):
self._light.turn_off()

class LightOffCommand(Command):
def __init__(self, light):
self._light = light

def execute(self):
self._light.turn_off()

def undo(self):
self._light.turn_on()

class FanStartCommand(Command):
def __init__(self, fan):
self._fan = fan

def execute(self):
self._fan.start()

def undo(self):
self._fan.stop()
class FanStopCommand(Command):
def __init__(self, fan):
self._fan = fan

def execute(self):
self._fan.stop()

def undo(self):
self._fan.start()

class RemoteControl:
def __init__(self):
self._buttons = {}
self._undo_command = None

    def set_command(self, slot, command):
        self._buttons[slot] = command

    def press_button(self, slot):
        command = self._buttons.get(slot)
if command:
command.execute()
self._undo_command = command

def press_undo(self):
if self._undo_command:
self._undo_command.undo()

light = Light()
fan = Fan()

remote = RemoteControl()
remote.set_command(1, LightOnCommand(light))
remote.set_command(2, LightOffCommand(light))
remote.set_command(3, FanStartCommand(fan))
remote.set_command(4, FanStopCommand(fan))

remote.press_button(1)
remote.press_button(3)
remote.press_undo()

📌 Benefits of the Command Design Pattern:


1. The RemoteControl class is now decoupled from specific devices and commands. It only
interacts with the command interface.

2. Commands are modular and can be easily added or removed without changing the
RemoteControl class.

3. We can easily implement undo functionality.

4. Commands can be queued or logged if needed.

This refactoring using the Command Design Pattern makes the code more flexible, modular, and
easier to maintain.

Let's break down how the refactored code, which
implements the Command Design Pattern, addresses the
issues of the original code.
📌 Issue 1: Tightly Coupled RemoteControl and Devices

Original Code: In the initial code, the RemoteControl class had direct knowledge of the
devices ( Light , Fan ) and their methods ( turn_on , turn_off , start , stop ). This meant
that for every new device or command, the RemoteControl class would need modifications.

Refactored Code: With the Command Design Pattern, the RemoteControl class only
interacts with the Command interface. It doesn't need to know about specific devices or their
methods. The specific commands ( LightOnCommand , LightOffCommand , etc.) encapsulate the
device and its operation. This decouples the RemoteControl from the devices, making the
system more modular.

📌 Issue 2: Embedded Logic for Each Command


Original Code: The logic for executing each command was embedded within the
press_button method of the RemoteControl class. This made the method lengthy, less
readable, and harder to extend.

Refactored Code: Each command now has its own class that implements the Command
interface. The logic for executing the command is encapsulated within the execute method
of these command classes. This makes the code more organized, and adding new commands
becomes as simple as creating a new class that implements the Command interface.

📌 Issue 3: Complexity in Implementing Undo or Queueing Commands


Original Code: Implementing features like undoing a command or queuing multiple
commands would have been complex, given the structure of the original code.

Refactored Code: With the Command Design Pattern, each command class can have an
undo method that defines how to revert the action. The RemoteControl class simply calls
this method to undo the last command. This makes implementing the undo feature
straightforward. Additionally, since commands are now stand-alone objects, queuing them (if
needed in the future) would be much simpler. You could easily store these command objects
in a list (queue) and execute them in order or delay their execution.
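A minimal sketch of that queuing idea, assuming command classes shaped like the ones above (state is tracked on the Light here purely so the effect is observable):

```python
class Light:
    def __init__(self):
        self.state = "OFF"

    def turn_on(self):
        self.state = "ON"

    def turn_off(self):
        self.state = "OFF"

class LightOnCommand:
    def __init__(self, light):
        self._light = light

    def execute(self):
        self._light.turn_on()

class LightOffCommand:
    def __init__(self, light):
        self._light = light

    def execute(self):
        self._light.turn_off()

light = Light()
# Commands are plain objects, so they can sit in a queue
# and be executed later, in order.
queue = [LightOnCommand(light), LightOffCommand(light), LightOnCommand(light)]
for command in queue:
    command.execute()
print(light.state)  # ON
```

The invoker that drains the queue never needs to know which concrete commands it is running.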

In summary, the Command Design Pattern offers a structured way to decouple the invoker
( RemoteControl ) from the receiver (devices like Light and Fan ). By encapsulating each request
as an object, the system becomes more flexible, allowing for easy addition of new commands,
undo operations, and potential queuing of commands. The refactored code is more maintainable,
scalable, and organized compared to the original implementation.

2. Real-life Use-case Code:


Imagine you're building a smart home system where you can control various devices like lights,
fans, and thermostats. Let's use the Command Pattern to encapsulate each operation as a
command.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

@abstractmethod
def undo(self):
pass

# Concrete Command
class LightOnCommand(Command):
def __init__(self, light):
self.light = light

def execute(self):
self.light.turn_on()

def undo(self):
self.light.turn_off()

class LightOffCommand(Command):
def __init__(self, light):
self.light = light

def execute(self):
self.light.turn_off()

def undo(self):
self.light.turn_on()

# Receiver
class Light:
def turn_on(self):
print("Light is ON")

def turn_off(self):
print("Light is OFF")

# Invoker
class RemoteControl:
def __init__(self):
self.command = None

    def set_command(self, command):
        self.command = command

def press_button(self):
self.command.execute()

def press_undo(self):
self.command.undo()

# Client Code
light = Light()
light_on = LightOnCommand(light)
light_off = LightOffCommand(light)

remote = RemoteControl()
remote.set_command(light_on)
remote.press_button() # Light is ON

remote.set_command(light_off)
remote.press_button() # Light is OFF

remote.press_undo() # Light is ON

3. Description of the Example Code:


📌 Command Interface: This is the base class for all commands. It has two methods: execute()
and undo() . Any concrete command will implement these methods to define what should
happen when the command is executed or undone.

📌 Concrete Commands: LightOnCommand and LightOffCommand are concrete implementations of the Command interface. They encapsulate the action of turning a light on or off. They also have an undo method to reverse the action.

📌 Receiver: The Light class is the receiver. It's the object that performs the actual action. In this
case, it can turn a light on or off.

📌 Invoker: The RemoteControl class is the invoker. It's the object that triggers the command. It
doesn't know anything about the concrete command, only about the command interface.

📌 Client Code: This is where everything comes together. We create a light, commands to turn it
on and off, and a remote control. We then set commands on the remote and press its buttons to
execute or undo actions.

In essence, the Command Pattern allows us to encapsulate method invocations, decouple senders
from receivers, and offer additional functionalities like undo and redo. This pattern is immensely
powerful and is widely used in software design.

Now let's understand in detail how exactly the above code for the smart home implements the principles of the Command Design Pattern

Principles of Command Design Pattern:


1. Encapsulate a request as an object.

2. Decouple sender from receiver.

3. Allow for the parameterization of clients with different requests.

4. Support undoable operations.

How the Smart Home Code Implements These Principles:
📌 Encapsulate a request as an object:
In the code, each command (like turning the light on or off) is encapsulated as an object.
Specifically, LightOnCommand and LightOffCommand are objects that encapsulate the "turn
on" and "turn off" requests respectively.

Each of these command objects has an execute() method that carries out the request and
an undo() method to reverse it.

📌 Decouple sender from receiver:


The sender in this case is the RemoteControl (the invoker). It triggers the command but
doesn't know the specifics of the action.

The receiver is the Light class. It's the one that knows how to turn the light on or off.

The command objects ( LightOnCommand and LightOffCommand ) act as intermediaries between the sender and receiver. The sender doesn't directly call methods on the receiver. Instead, it calls the execute() method on the command object, which in turn calls the appropriate method on the receiver.

📌 Allow for the parameterization of clients with different requests:


The RemoteControl can be parameterized with different commands. Using the
set_command() method, you can set any command you want the remote to execute when its
button is pressed.

This means you can easily add more commands (like commands for fans, thermostats, etc.)
and set them to the remote without changing the RemoteControl class.

📌 Support undoable operations:


Each command object has an undo() method. This allows the action to be reversed. For
example, if you turn the light off, you can undo that action to turn it back on.

The RemoteControl has a press_undo() method that calls the undo() method of the
currently set command.
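Note that the RemoteControl above only remembers the single most recent command. A sketch of a stack-based variant (HistoryRemote is a hypothetical name, not part of the book's example) shows how repeated undo falls out naturally:

```python
class Light:
    def __init__(self):
        self.is_on = False

    def turn_on(self):
        self.is_on = True

    def turn_off(self):
        self.is_on = False

class LightOnCommand:
    def __init__(self, light):
        self._light = light

    def execute(self):
        self._light.turn_on()

    def undo(self):
        self._light.turn_off()

class HistoryRemote:
    # Like RemoteControl, but every executed command is pushed
    # onto a stack, so undo can be pressed more than once.
    def __init__(self):
        self._history = []

    def press(self, command):
        command.execute()
        self._history.append(command)

    def press_undo(self):
        if self._history:
            self._history.pop().undo()

light = Light()
remote = HistoryRemote()
remote.press(LightOnCommand(light))   # light on
remote.press_undo()                   # undone: light off again
print(light.is_on)  # False
```

The same Command interface supports both variants; only the invoker's bookkeeping changes.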

Working of the Whole Code:


1. Command Interface ( Command class): This is a blueprint for all command objects. It ensures
that all commands will have execute() and undo() methods.

2. Concrete Commands ( LightOnCommand and LightOffCommand ): These are the actual commands that implement the Command interface. They have:

A reference to the receiver ( light in this case).

An execute() method that performs the action.

An undo() method that reverses the action.

3. Receiver ( Light class): This is the actual object that performs the action. It has methods to
turn the light on and off.

4. Invoker ( RemoteControl class): This is the object that triggers the command. It has:

A reference to a command object.

A press_button() method that calls the execute() method of the command.


A press_undo() method that calls the undo() method of the command.
5. Client Code: This is where everything is tied together.

A Light object is created.

Command objects ( LightOnCommand and LightOffCommand ) are created and given a reference to the Light object.

A RemoteControl object is created.

The command is set on the remote using set_command() .

The remote's button is pressed using press_button() , which in turn calls the
execute() method of the command.

The undo action can be triggered using press_undo() .

In essence, the Command Pattern in this code allows the actions (like turning the light on or off) to
be represented as objects. These objects can be passed around, stored, and executed as needed,
providing a flexible and decoupled system.

Note on the @abstractmethod decorator - "It indicates that a method is abstract and must be overridden by any non-abstract derived class." - Let's see how
📌 Decorator: In Python, a decorator is a design pattern that allows you to add new functionality
to an existing object without modifying its structure. Decorators are very powerful and useful tools
in Python since they allow programmers to modify the behavior of functions or classes. In our
context, abstractmethod is a decorator provided by the abc module.

📌 abstractmethod: This specific decorator, when applied to a method within a class, designates
that method as being abstract. An abstract method is a method that is declared but does not have
an implementation within the class it's declared in.

📌 Must be overridden: If a class has an abstract method, it means that any subclass (or derived
class) that is intended to be instantiated (i.e., you want to create objects of that subclass) must
provide an implementation for this abstract method. If it doesn't, Python will raise a TypeError
when you try to create an instance of that subclass.

📌 Non-abstract derived class: A derived class (or subclass) that provides implementations for
all the abstract methods of its base class is termed as non-abstract. If a derived class does not
provide implementations for all the abstract methods, it remains abstract, and you can't create
instances of it.

Example for Clarity:

from abc import ABC, abstractmethod

class AbstractClass(ABC):

@abstractmethod
def abstract_method(self):
pass

class DerivedClass(AbstractClass):

# Notice we are not providing an implementation for abstract_method
pass

class AnotherDerivedClass(AbstractClass):

    # Here, we provide an implementation for abstract_method
    def abstract_method(self):
        print("Implemented abstract_method in AnotherDerivedClass")

In the above code:

📌 AbstractClass is an abstract base class with an abstract method abstract_method .

📌 DerivedClass is a subclass of AbstractClass , but it doesn't provide an implementation for abstract_method . Hence, DerivedClass is also abstract, and you can't create instances of it.

📌 AnotherDerivedClass is another subclass of AbstractClass , and it provides an implementation for abstract_method . This makes AnotherDerivedClass non-abstract, and you can create instances of it.

If you try:

obj = DerivedClass() # This will raise a TypeError

But this will work:

obj = AnotherDerivedClass()
obj.abstract_method() # This will print: "Implemented abstract_method in AnotherDerivedClass"

In essence, the abstractmethod decorator is a way to enforce a contract on subclasses. It ensures that any non-abstract subclass provides concrete implementations for certain methods deemed essential by the abstract base class.

Example 2 - Real life use case of Command Design Pattern in Python: simple text-based menu system
Let's design a simple text-based menu system for a media player application using the Command
Design Pattern. This media player can play, pause, stop, and rewind tracks.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class PlayCommand(Command):
def __init__(self, player):
self.player = player

def execute(self):
self.player.play()

class PauseCommand(Command):
def __init__(self, player):
self.player = player

def execute(self):
self.player.pause()

class StopCommand(Command):
def __init__(self, player):
self.player = player

def execute(self):
self.player.stop()

class RewindCommand(Command):
def __init__(self, player):
self.player = player

def execute(self):
self.player.rewind()

# Receiver
class MediaPlayer:
def play(self):
print("Playing the track.")

def pause(self):
print("Paused the track.")

def stop(self):
print("Stopped the track.")

def rewind(self):
print("Rewinded the track to the beginning.")

# Invoker (Menu System)
class Menu:
def __init__(self):
self.commands = {}

    def set_command(self, name, command):
        self.commands[name] = command

    def select(self, name):
        if name in self.commands:
self.commands[name].execute()
else:
print(f"'{name}' command not found!")

# Client Code
player = MediaPlayer()
menu = Menu()

menu.set_command("play", PlayCommand(player))
menu.set_command("pause", PauseCommand(player))
menu.set_command("stop", StopCommand(player))
menu.set_command("rewind", RewindCommand(player))

# Simulating user selecting menu items
menu.select("play")   # Playing the track.
menu.select("pause") # Paused the track.
menu.select("stop") # Stopped the track.
menu.select("rewind") # Rewinded the track to the beginning.

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects. It
ensures that all commands will have an execute() method.

📌 Concrete Commands: These are the actual commands ( PlayCommand , PauseCommand , StopCommand , RewindCommand ) that implement the Command interface. They have:

A reference to the receiver ( player in this case).

An execute() method that performs the action.

📌 Receiver ( MediaPlayer class): This is the actual object that performs the action. It has
methods to play, pause, stop, and rewind tracks.

📌 Invoker ( Menu class): This represents the menu system. It has:

A dictionary ( commands ) to store menu items and their associated commands.

A set_command() method to add menu items and their commands.

A select() method that simulates a user selecting a menu item. It calls the execute() method of the associated command.

📌 Client Code: This is where everything is tied together.

A MediaPlayer object is created.

A Menu object is created.

Commands are created and associated with menu items using set_command() .

The user selects menu items using the select() method, which in turn calls the appropriate action on the media player.

This design allows for easy addition of new menu items and their associated actions without
changing the existing code. For instance, if you wanted to add a "fast forward" feature, you'd
simply create a new command for it and add it to the menu.
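A sketch of that extension, using simplified stand-ins for the MediaPlayer and Menu classes above (FastForwardCommand and the "fast_forward" menu name are hypothetical additions):

```python
# Minimal stand-ins for the classes defined above, kept just
# large enough to show the new command being plugged in.
class MediaPlayer:
    def fast_forward(self):
        return "Fast-forwarding the track."

class FastForwardCommand:
    # The only new code needed for the new feature.
    def __init__(self, player):
        self.player = player

    def execute(self):
        return self.player.fast_forward()

class Menu:
    def __init__(self):
        self.commands = {}

    def set_command(self, name, command):
        self.commands[name] = command

    def select(self, name):
        if name in self.commands:
            return self.commands[name].execute()
        return f"'{name}' command not found!"

menu = Menu()
menu.set_command("fast_forward", FastForwardCommand(MediaPlayer()))
print(menu.select("fast_forward"))  # Fast-forwarding the track.
```

Neither Menu nor MediaPlayer's existing behavior is touched; the feature arrives as one new command class plus one registration call.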

Example 3 - Real life use case of Command Design Pattern in Python: Undo/Redo in Text Editors
Let's design a simple text editor that supports adding text, deleting text, and of course, undo and
redo functionalities using the Command Design Pattern.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

@abstractmethod
def undo(self):
pass

# Concrete Commands
class AddTextCommand(Command):
def __init__(self, editor, text):
self.editor = editor
self.text = text
self.prev_text = ""

def execute(self):
self.prev_text = self.editor.content
self.editor.content += self.text

def undo(self):
self.editor.content = self.prev_text

class DeleteTextCommand(Command):
def __init__(self, editor, length):
self.editor = editor
self.length = length
self.prev_text = ""

def execute(self):
self.prev_text = self.editor.content
self.editor.content = self.editor.content[:-self.length]

def undo(self):
self.editor.content = self.prev_text

# Receiver
class TextEditor:
def __init__(self):
self.content = ""

# Invoker & History
class CommandInvoker:
def __init__(self):
self.history = []
self.redo_stack = []

    def execute(self, command):
        command.execute()
self.history.append(command)
self.redo_stack.clear()

def undo(self):
if not self.history:
return
command = self.history.pop()
command.undo()
self.redo_stack.append(command)

def redo(self):
if not self.redo_stack:
return
command = self.redo_stack.pop()
command.execute()
self.history.append(command)

# Client Code
editor = TextEditor()
invoker = CommandInvoker()

# Add some text
cmd1 = AddTextCommand(editor, "Hello, ")
invoker.execute(cmd1)
print(editor.content) # Hello,

cmd2 = AddTextCommand(editor, "world!")
invoker.execute(cmd2)
print(editor.content) # Hello, world!

# Delete some text
cmd3 = DeleteTextCommand(editor, 6)
invoker.execute(cmd3)
print(editor.content) # Hello,

# Undo the delete
invoker.undo()
print(editor.content) # Hello, world!

# Redo the delete
invoker.redo()
print(editor.content) # Hello,

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects. It
ensures that all commands will have execute() and undo() methods.

📌 Concrete Commands: The AddTextCommand and DeleteTextCommand classes implement the Command interface. They have:

A reference to the receiver ( editor in this case).

An execute() method that performs the action.

An undo() method that reverses the action.

📌 Receiver ( TextEditor class): This represents the text editor. It has a content attribute that
stores the current text.

📌 Invoker & History ( CommandInvoker class): This class manages the execution of commands and maintains a history for undo and redo operations. It has:

A history list to store executed commands.

A redo_stack list to store commands that can be redone.

An execute() method to execute a command and add it to the history.

An undo() method to undo the last command.

A redo() method to redo the last undone command.

📌 Client Code: This is where everything is tied together.

A TextEditor object is created.

A CommandInvoker object is created.

Commands are created and executed using the invoker.

The undo and redo functionalities are demonstrated.

This design allows for easy tracking of changes in the text editor. Each action (like adding or
deleting text) is encapsulated as a command, and the invoker maintains a history of these
commands. The undo and redo operations simply navigate this history, executing or undoing
commands as needed.
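One subtlety in the invoker above is the redo_stack.clear() call: executing a fresh command after an undo discards the redo history. A small self-contained sketch of that behavior, with a simplified list-append command standing in for the text commands:

```python
class AddCommand:
    # Simplified stand-in for the text commands above:
    # execute appends a value to a shared list, undo removes it.
    def __init__(self, items, value):
        self.items = items
        self.value = value

    def execute(self):
        self.items.append(self.value)

    def undo(self):
        self.items.pop()

class CommandInvoker:
    def __init__(self):
        self.history = []
        self.redo_stack = []

    def execute(self, command):
        command.execute()
        self.history.append(command)
        self.redo_stack.clear()   # a new action invalidates redo

    def undo(self):
        if self.history:
            cmd = self.history.pop()
            cmd.undo()
            self.redo_stack.append(cmd)

    def redo(self):
        if self.redo_stack:
            cmd = self.redo_stack.pop()
            cmd.execute()
            self.history.append(cmd)

items = []
inv = CommandInvoker()
inv.execute(AddCommand(items, 1))
inv.execute(AddCommand(items, 2))
inv.undo()                          # items == [1], redo is available
inv.execute(AddCommand(items, 3))   # clears the redo stack
inv.redo()                          # does nothing now
print(items)  # [1, 3]
```

This matches how most editors behave: typing something new after an undo forks the timeline and the old redo path is gone.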
Example 4 - Real life use case of Command Design Pattern in Python: Task Scheduling
Let's design a simple task scheduler using the Command Design Pattern. This scheduler will allow
tasks to be scheduled for execution after a certain delay.

import time
from abc import ABC, abstractmethod
from threading import Timer

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class PrintMessageCommand(Command):
def __init__(self, message):
self.message = message

def execute(self):
print(self.message)

class BackupDatabaseCommand(Command):
def execute(self):
# Simulating database backup
print("Database backed up successfully!")

# Scheduler (Invoker)
class TaskScheduler:
def __init__(self):
self.tasks = []

    def schedule(self, delay, command):
        timer = Timer(delay, command.execute)
self.tasks.append(timer)
timer.start()

    def cancel(self, command):
        task = next((t for t in self.tasks if t.function == command.execute), None)
if task:
task.cancel()
self.tasks.remove(task)

# Client Code
scheduler = TaskScheduler()

# Schedule a message to be printed after 5 seconds
cmd1 = PrintMessageCommand("5 seconds have passed!")
scheduler.schedule(5, cmd1)

# Schedule a database backup after 10 seconds
cmd2 = BackupDatabaseCommand()
scheduler.schedule(10, cmd2)

# Let's simulate some delay to see the tasks being executed
time.sleep(12)

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects. It
ensures that all commands will have an execute() method.

📌 Concrete Commands: The PrintMessageCommand and BackupDatabaseCommand classes implement the Command interface. They encapsulate specific tasks like printing a message or backing up a database.

📌 Scheduler (Invoker) ( TaskScheduler class): This class manages the scheduling and execution of tasks. It has:

A tasks list to store scheduled tasks.

A schedule() method to schedule a command for execution after a certain delay.

A cancel() method to cancel a scheduled task.

📌 Client Code: This is where everything is tied together.

A TaskScheduler object is created.

Commands are created and scheduled for execution using the scheduler.

The Timer class from the threading module is used to simulate the scheduling. When a
command is scheduled, a new timer is created with the specified delay and the command's
execute() method as the callback. When the timer expires, the command is executed.

This design allows for easy scheduling and execution of tasks. Each task is encapsulated as a
command, and the scheduler manages the execution of these commands based on the specified
delays.
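The cancel() method is not exercised in the client code above. A short sketch of it (with a shorter delay, and a ran flag added to the command purely for illustration) shows how a queued command can be withdrawn before its timer fires:

```python
import time
from threading import Timer

class PrintMessageCommand:
    def __init__(self, message):
        self.message = message
        self.ran = False          # illustration-only flag to observe execution

    def execute(self):
        self.ran = True
        print(self.message)

class TaskScheduler:
    # Minimal version of the scheduler above. A Timer exposes its
    # callback as the .function attribute, which is how cancel()
    # finds the task belonging to a given command.
    def __init__(self):
        self.tasks = []

    def schedule(self, delay, command):
        timer = Timer(delay, command.execute)
        self.tasks.append(timer)
        timer.start()

    def cancel(self, command):
        task = next((t for t in self.tasks if t.function == command.execute), None)
        if task:
            task.cancel()
            self.tasks.remove(task)

scheduler = TaskScheduler()
cmd = PrintMessageCommand("This should never print.")
scheduler.schedule(0.5, cmd)
scheduler.cancel(cmd)      # cancelled well before the delay expires
time.sleep(0.8)
print(cmd.ran)  # False
```

Because commands are objects, the scheduler can look them up, compare them, and cancel them by identity rather than by some ad hoc string key.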

Example 5 - Real life use case of Command Design Pattern in Python: Macro Recording
Let's design a simple system that simulates a graphic design software where users can draw
shapes and change colors. We'll implement macro recording functionality using the Command
Design Pattern.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class DrawCircleCommand(Command):
def __init__(self, canvas, position):
self.canvas = canvas
self.position = position

def execute(self):
self.canvas.draw("Circle", self.position)

class DrawSquareCommand(Command):
def __init__(self, canvas, position):
self.canvas = canvas
self.position = position

def execute(self):
self.canvas.draw("Square", self.position)

class ChangeColorCommand(Command):
def __init__(self, canvas, color):
self.canvas = canvas
self.color = color

def execute(self):
self.canvas.change_color(self.color)

# Receiver
class Canvas:
def __init__(self):
self.color = "White"

    def draw(self, shape, position):
        print(f"Drew a {self.color} {shape} at {position}.")

    def change_color(self, color):
        self.color = color
print(f"Changed color to {color}.")

# Macro & Invoker
class Macro:
def __init__(self):
self.commands = []

    def add_command(self, command):
        self.commands.append(command)

def run(self):
for command in self.commands:
command.execute()

# Client Code
canvas = Canvas()
macro = Macro()

# Record a series of actions
macro.add_command(ChangeColorCommand(canvas, "Red"))
macro.add_command(DrawCircleCommand(canvas, (5, 5)))
macro.add_command(ChangeColorCommand(canvas, "Blue"))
macro.add_command(DrawSquareCommand(canvas, (10, 10)))

# Play back the recorded actions
macro.run()

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.

📌 Concrete Commands: The DrawCircleCommand , DrawSquareCommand , and ChangeColorCommand classes implement the Command interface. They encapsulate specific actions like drawing shapes or changing colors.

📌 Receiver ( Canvas class): This simulates a canvas where shapes can be drawn and colors can
be changed. It has methods to draw shapes ( draw() ) and change colors ( change_color() ).

📌 Macro & Invoker ( Macro class): This class represents a macro that can record and play back a series of commands. It has:

A commands list to store recorded commands.

An add_command() method to add a command to the macro.

A run() method to play back the recorded commands in order.

📌 Client Code: This is where everything is tied together.

A Canvas object is created.

A Macro object is created.

Commands are created and added to the macro to simulate recording a series of actions.

The macro is then run to play back the recorded actions.

This design allows for easy recording and playback of macros. Each action is encapsulated as a
command, and the macro maintains a list of these commands. When the macro is run, it simply
plays back the commands in the order they were recorded.

Example 6 - Real life use case of Command Design Pattern for E-commerce Order System:
Let's design an e-commerce system where customers can place orders, modify them, or cancel
them. The Command Design Pattern will be used to encapsulate each of these actions, allowing
for easy tracking, logging, and potential undo functionalities.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class PlaceOrderCommand(Command):
def __init__(self, order, product, quantity):
self.order = order
self.product = product
self.quantity = quantity

def execute(self):
self.order.place(self.product, self.quantity)

class ModifyOrderCommand(Command):
    def __init__(self, order, product, new_quantity):
        self.order = order
        self.product = product
        self.new_quantity = new_quantity

def execute(self):
self.order.modify(self.product, self.new_quantity)

class CancelOrderCommand(Command):
def __init__(self, order):
self.order = order

def execute(self):
self.order.cancel()

# Receiver
class Order:
def __init__(self):
self.items = {}

    def place(self, product, quantity):
self.items[product] = quantity
print(f"Ordered {quantity} units of {product}.")

    def modify(self, product, new_quantity):
if product in self.items:
self.items[product] = new_quantity
print(f"Modified order: {product} now has {new_quantity} units.")
else:
print(f"No order for {product} to modify.")

def cancel(self):
self.items.clear()
print("Order has been canceled.")

# Client Interface (E-commerce platform)
class ECommercePlatform:
def __init__(self):
self.orders = []

    def process_command(self, command):
command.execute()
self.orders.append(command)

# Client Code
order1 = Order()
platform = ECommercePlatform()

# Place an order
cmd1 = PlaceOrderCommand(order1, "Laptop", 1)
platform.process_command(cmd1)

# Modify the order
cmd2 = ModifyOrderCommand(order1, "Laptop", 2)
platform.process_command(cmd2)

# Cancel the order
cmd3 = CancelOrderCommand(order1)
platform.process_command(cmd3)

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.

📌 Concrete Commands: The PlaceOrderCommand , ModifyOrderCommand , and
CancelOrderCommand classes implement the Command interface. They encapsulate specific
actions like placing an order, modifying it, or canceling it.

📌 Receiver ( Order class): This represents an e-commerce order. It has methods to place an
order ( place() ), modify an existing order ( modify() ), and cancel an order ( cancel() ).

📌 Client Interface (E-commerce platform) ( ECommercePlatform class): This class represents
the e-commerce platform where orders are processed. It has:
- An orders list to store processed commands (orders).
- A process_command() method to execute a command and add it to the list of processed orders.

📌 Client Code: This is where everything is tied together.
- An Order object is created.
- An ECommercePlatform object is created.
- Commands are created and processed using the platform, simulating placing, modifying, and canceling an order.

This design allows for easy tracking of orders and their modifications. Each action on an order is
encapsulated as a command, and the e-commerce platform maintains a list of these commands.
This can be useful for logging, analytics, and potentially implementing undo functionalities in the
future.
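The logging idea mentioned above can be sketched directly: a variant of the invoker that timestamps every processed command. The AuditingPlatform name and the minimal Command/Order stubs below are illustrative assumptions, not part of the example above.

```python
from abc import ABC, abstractmethod
from datetime import datetime

class Command(ABC):
    @abstractmethod
    def execute(self): ...

class Order:
    def __init__(self):
        self.items = {}

    def cancel(self):
        self.items.clear()
        print("Order has been canceled.")

class CancelOrderCommand(Command):
    def __init__(self, order):
        self.order = order

    def execute(self):
        self.order.cancel()

class AuditingPlatform:
    """Hypothetical invoker that keeps an audit log while executing commands."""
    def __init__(self):
        self.log = []

    def process_command(self, command):
        command.execute()
        # Record what ran and when -- the raw material for analytics or undo
        self.log.append((datetime.now(), type(command).__name__))

platform = AuditingPlatform()
platform.process_command(CancelOrderCommand(Order()))
print(platform.log[0][1])  # CancelOrderCommand
```

Because every action funnels through process_command() , the audit log is guaranteed to be complete — no command can bypass it.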

Example 7 - Real life use case of Command Design Pattern for Restaurant Kitchen Automation:
Alright! Let's design a system for a restaurant's kitchen automation. In this system, chefs can
receive and prepare different types of dishes. The Command Design Pattern will be used to
encapsulate the preparation of each type of dish, allowing for easy tracking, logging, and potential
modifications to the preparation process.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class PreparePizzaCommand(Command):
def __init__(self, chef):
self.chef = chef

def execute(self):
self.chef.prepare_pizza()

class PreparePastaCommand(Command):
    def __init__(self, chef):
        self.chef = chef

def execute(self):
self.chef.prepare_pasta()

class PrepareSaladCommand(Command):
def __init__(self, chef):
self.chef = chef

def execute(self):
self.chef.prepare_salad()

# Receiver
class Chef:
def prepare_pizza(self):
print("Chef is preparing pizza...")

def prepare_pasta(self):
print("Chef is preparing pasta...")

def prepare_salad(self):
print("Chef is preparing salad...")

# Kitchen Interface (Invoker)
class Kitchen:
def __init__(self):
self.queue = []

    def add_order(self, command):
self.queue.append(command)

def process_orders(self):
while self.queue:
order = self.queue.pop(0)
order.execute()

# Client Code
chef_john = Chef()
kitchen = Kitchen()

# Customers place orders
kitchen.add_order(PreparePizzaCommand(chef_john))
kitchen.add_order(PrepareSaladCommand(chef_john))
kitchen.add_order(PreparePastaCommand(chef_john))

# Kitchen processes orders
kitchen.process_orders()

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.

📌 Concrete Commands: The PreparePizzaCommand , PreparePastaCommand , and
PrepareSaladCommand classes implement the Command interface. They encapsulate specific
actions like preparing pizza, pasta, or salad.

📌 Receiver ( Chef class): This represents a chef in the kitchen. The chef has methods to prepare
different types of dishes: prepare_pizza() , prepare_pasta() , and prepare_salad() .

📌 Kitchen Interface (Invoker) ( Kitchen class): This class represents the kitchen where orders
are processed. It has:
- A queue list to store orders (commands) that need to be processed.
- An add_order() method to add an order (command) to the queue.
- A process_orders() method to process (execute) all orders in the queue.

📌 Client Code: This is where everything is tied together.
- A Chef object is created.
- A Kitchen object is created.
- Orders (commands) are added to the kitchen queue, simulating customers placing orders.
- The kitchen processes (executes) the orders, simulating the chef preparing the dishes.

This design allows for easy management of orders in the kitchen. Each type of dish is
encapsulated as a command, and the kitchen maintains a queue of these commands. As orders
come in, they're added to the queue, and the kitchen processes them in the order they were
received. This can be useful for tracking, logging, and ensuring that dishes are prepared in the
correct sequence.
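One implementation note: in the code above, queue.pop(0) shifts every remaining element, which is O(n) per order. For a long queue, collections.deque (whose popleft() is O(1)) is the more idiomatic backing store. Below is a sketch of the same invoker with that change; the PrintDishCommand stub is a hypothetical stand-in for the Prepare*Command classes above.

```python
from collections import deque

class Kitchen:
    """Same invoker as above, but deque.popleft() is O(1) per order,
    whereas list.pop(0) shifts every remaining element."""
    def __init__(self):
        self.queue = deque()

    def add_order(self, command):
        self.queue.append(command)

    def process_orders(self):
        while self.queue:
            self.queue.popleft().execute()

prepared = []

class PrintDishCommand:
    """Hypothetical stand-in command for demonstration."""
    def __init__(self, dish):
        self.dish = dish

    def execute(self):
        prepared.append(self.dish)
        print(f"Chef is preparing {self.dish}...")

kitchen = Kitchen()
kitchen.add_order(PrintDishCommand("pizza"))
kitchen.add_order(PrintDishCommand("salad"))
kitchen.process_orders()
print(prepared)  # ['pizza', 'salad']
```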

Example 8 - Real life use case of Command Design Pattern for File Utilities:
Let's design a basic file utility system using the Command Design Pattern.

import os
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class CreateFileCommand(Command):
def __init__(self, filepath, content=None):
self.filepath = filepath
self.content = content

def execute(self):
with open(self.filepath, 'w') as file:
if self.content:
file.write(self.content)
print(f"File '{self.filepath}' created.")

class ReadFileCommand(Command):
def __init__(self, filepath):
        self.filepath = filepath

    def execute(self):
with open(self.filepath, 'r') as file:
print(file.read())

class RenameFileCommand(Command):
def __init__(self, old_filepath, new_filepath):
self.old_filepath = old_filepath
self.new_filepath = new_filepath

def execute(self):
os.rename(self.old_filepath, self.new_filepath)
print(f"File '{self.old_filepath}' renamed to '{self.new_filepath}'.")

class DeleteFileCommand(Command):
def __init__(self, filepath):
self.filepath = filepath

def execute(self):
os.remove(self.filepath)
print(f"File '{self.filepath}' deleted.")

# Client Code
create_cmd = CreateFileCommand('sample.txt', 'Hello, Command Pattern!')
create_cmd.execute()

read_cmd = ReadFileCommand('sample.txt')
read_cmd.execute()

rename_cmd = RenameFileCommand('sample.txt', 'new_sample.txt')
rename_cmd.execute()

delete_cmd = DeleteFileCommand('new_sample.txt')
delete_cmd.execute()

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.

📌 Concrete Commands:
- CreateFileCommand : This command creates a file and optionally writes content to it.
- ReadFileCommand : This command reads the content of a file and prints it.
- RenameFileCommand : This command renames a file.
- DeleteFileCommand : This command deletes a file.

Each of these commands encapsulates a specific file operation.

📌 Client Code: This is where everything is tied together. Commands are created and executed,
simulating the operations of creating, reading, renaming, and deleting a file.

Benefits of Using the Command Design Pattern:


1. Encapsulation: Each file operation is encapsulated in its own command class. This makes it
easy to add new file operations in the future without changing existing code.

2. Decoupling: The file operations (commands) are decoupled from the code that invokes
them. This means that the client code doesn't need to know the specifics of how each
operation is implemented.

3. Flexibility: With this design, it's easy to add features like undo/redo, logging, or macro
recording. For instance, you could maintain a history of commands and provide an undo
feature by reversing each command's action.

4. Reusability: Commands can be reused in different parts of the application or even in
different applications. For example, the ReadFileCommand can be used wherever file reading
functionality is needed.

5. Consistency: By using commands, you ensure that all file operations are executed in a
consistent manner, following the same pattern.

Overall, the Command Design Pattern provides a structured and scalable way to handle various
operations, making the system easier to maintain and extend.
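The undo idea from point 3 can be sketched by extending the interface with an undo() method and keeping a history in the invoker. This is an illustrative extension of the file example, not part of it; it writes only a temp file, so it is safe to run.

```python
import os
import tempfile
from abc import ABC, abstractmethod

class UndoableCommand(ABC):
    @abstractmethod
    def execute(self): ...

    @abstractmethod
    def undo(self): ...

class CreateFileCommand(UndoableCommand):
    def __init__(self, filepath, content=""):
        self.filepath = filepath
        self.content = content

    def execute(self):
        with open(self.filepath, "w") as file:
            file.write(self.content)

    def undo(self):
        os.remove(self.filepath)  # reversing a create is a delete

class CommandHistory:
    """Invoker that remembers what ran, so the last action can be reversed."""
    def __init__(self):
        self._history = []

    def run(self, command):
        command.execute()
        self._history.append(command)

    def undo_last(self):
        if self._history:
            self._history.pop().undo()

history = CommandHistory()
path = os.path.join(tempfile.gettempdir(), "undo_demo.txt")
history.run(CreateFileCommand(path, "Hello"))
print(os.path.exists(path))   # True
history.undo_last()
print(os.path.exists(path))   # False
```

Each command knows how to reverse itself, so the invoker can stay generic: it just pops the history and calls undo() .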

Example 9 - Real life use case of Command Design Pattern for Video Editing Software:
Let's design a system for a video editing software using the Command Design Pattern. In this
system, users can perform various editing actions on a video, such as cutting, adding effects,
adjusting brightness, and so on.

from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
@abstractmethod
def execute(self):
pass

# Concrete Commands
class CutCommand(Command):
def __init__(self, video_editor, start_time, end_time):
self.video_editor = video_editor
self.start_time = start_time
self.end_time = end_time

def execute(self):
self.video_editor.cut(self.start_time, self.end_time)

class AddEffectCommand(Command):
def __init__(self, video_editor, effect_name):
self.video_editor = video_editor
self.effect_name = effect_name

def execute(self):
self.video_editor.add_effect(self.effect_name)

class AdjustBrightnessCommand(Command):
    def __init__(self, video_editor, level):
        self.video_editor = video_editor
        self.level = level

def execute(self):
self.video_editor.adjust_brightness(self.level)

# Receiver
class VideoEditor:
def cut(self, start_time, end_time):
print(f"Cutting video from {start_time} to {end_time}.")

    def add_effect(self, effect_name):
print(f"Adding {effect_name} effect to the video.")

    def adjust_brightness(self, level):
print(f"Adjusting video brightness to level {level}.")

# Editing Suite (Invoker)
class EditingSuite:
def __init__(self):
self.history = []

    def execute_command(self, command):
command.execute()
self.history.append(command)

# Client Code
video = VideoEditor()
suite = EditingSuite()

# Perform various editing actions
cut_cmd = CutCommand(video, "00:01:00", "00:02:00")
suite.execute_command(cut_cmd)

effect_cmd = AddEffectCommand(video, "Black & White")
suite.execute_command(effect_cmd)

brightness_cmd = AdjustBrightnessCommand(video, 70)
suite.execute_command(brightness_cmd)

Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.

📌 Concrete Commands:
- CutCommand : This command cuts a segment from the video.
- AddEffectCommand : This command adds a specific effect to the video.
- AdjustBrightnessCommand : This command adjusts the brightness of the video.

Each of these commands encapsulates a specific video editing operation.

📌 Receiver ( VideoEditor class): This represents the video editing software. It has methods to
perform various editing actions on a video.

📌 Editing Suite (Invoker) ( EditingSuite class): This class represents the suite where editing
commands are executed. It has:
- A history list to store executed commands.
- An execute_command() method to execute a command and add it to the history.

📌 Client Code: This is where everything is tied together. Commands are created and executed,
simulating the operations of cutting a segment, adding an effect, and adjusting brightness.

Benefits of Using the Command Design Pattern in this Scenario:
1. Modularity: Each video editing operation is modularized into its own command class. This
makes the system organized and easy to extend.

2. Undo/Redo: With the history list in the EditingSuite , it's straightforward to implement
undo and redo functionalities by reversing or re-executing commands.

3. Batch Processing: Commands can be grouped together to apply multiple editing operations
at once, allowing for batch processing of videos.

4. Customization: Users can create custom editing sequences by chaining commands in
specific orders.

5. Consistency: By using commands, you ensure that all editing operations are executed in a
consistent manner, following the same pattern.

Overall, the Command Design Pattern provides a structured approach to handle various video
editing operations, making the software more flexible and user-friendly.
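The batch-processing idea from point 3 is usually implemented as a composite command: a command whose execute() runs a list of child commands, so a whole editing sequence occupies one history entry. Below is a minimal, hypothetical sketch; the RecordingCommand stub stands in for the real editing commands above.

```python
from abc import ABC, abstractmethod

class Command(ABC):
    @abstractmethod
    def execute(self): ...

class BatchCommand(Command):
    """A command made of commands: one history entry, many edits."""
    def __init__(self, commands):
        self.commands = list(commands)

    def execute(self):
        for command in self.commands:
            command.execute()

class RecordingCommand(Command):
    """Hypothetical stand-in that logs a label instead of editing video."""
    def __init__(self, label, log):
        self.label = label
        self.log = log

    def execute(self):
        self.log.append(self.label)

log = []
batch = BatchCommand([RecordingCommand("cut", log),
                      RecordingCommand("add effect", log)])
batch.execute()
print(log)  # ['cut', 'add effect']
```

Because BatchCommand implements the same interface as any single command, the EditingSuite invoker could store it in its history without any changes.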

🐍🚀 The Facade design pattern in Python 🐍🚀

📌 The facade design pattern helps us to hide the internal complexity of our systems and expose
only what is necessary to the client through a simplified interface. In essence, a facade is an
abstraction layer implemented over an existing complex system. The main goal is to simplify the
client's interaction with a complex system by providing a higher-level interface that makes the
subsystem easier to use.

There are three participants involved in this pattern.

Facade class --- This class implements the simplified interface that the client uses. Internally, it
calls the services implemented in the system classes.

System class --- There may be multiple system classes, and each one serves a specific purpose.

Client class --- The client uses the facade class to access the functionality of the system. Directly
accessing the system classes can be difficult, so the client goes through the facade instead.

📌 Use Cases:
1. When you have a complex system with multiple modules and you want to provide a simple
interface to the client.

2. When you want to decouple a client from a complex subsystem.

3. When you want to layer your subsystems and want to ensure that each layer communicates
with only a few interfaces.

📌 Why it's important in Python: Python, being a high-level language, often deals with
abstracting complexities. Libraries and frameworks in Python often use the facade pattern to
provide a more Pythonic and user-friendly API to the users, while hiding the intricate details and
complexities.

Let's see an example WITH and then WITHOUT the "Facade
design pattern in Python"

1. Code without the Facade Design Pattern


Consider a scenario where we have a complex system that involves operations related to a
computer. The computer can start, run some applications, and shut down.

class CPU:
def freeze(self):
print("CPU is frozen")

    def jump(self, position):
print(f"Jumping to position {position}")

def execute(self):
print("CPU is executing")

class Memory:
def load(self, position, data):
print(f"Loading data {data} at position {position}")

class HardDrive:
def read(self, lba, size):
return f"Reading {size} bytes from LBA {lba}"

If a client wants to start the computer, they would need to interact with all these subsystems in a
specific order:

cpu = CPU()
memory = Memory()
hard_drive = HardDrive()

cpu.freeze()
memory.load("0x00", hard_drive.read("0x00", "512"))
cpu.jump("0x00")
cpu.execute()

📌 The above approach has the following issues:
- The client needs to interact with multiple subsystems directly.
- The order of operations is crucial, and the client needs to be aware of it.
- If any subsystem changes its interface or behavior, the client code will need to be updated.

2. Refactoring with the Facade Design Pattern


Now, let's implement the Facade design pattern to simplify the client's interaction with the
computer system.

class ComputerFacade:
def __init__(self):
self.cpu = CPU()
self.memory = Memory()
self.hard_drive = HardDrive()

def start(self):
self.cpu.freeze()
self.memory.load("0x00", self.hard_drive.read("0x00", "512"))
self.cpu.jump("0x00")
self.cpu.execute()

Now, the client can simply interact with the ComputerFacade to start the computer:

computer = ComputerFacade()
computer.start()

📌 Benefits of using the Facade design pattern:
- The client interacts with a single, simplified interface ( ComputerFacade ) rather than multiple subsystems.
- The internal workings of the subsystems are abstracted away from the client.
- It's easier to maintain and modify the system without affecting the client code.

📌 In summary, the Facade design pattern provides a unified interface to a set of interfaces in a
subsystem. It defines a higher-level interface that makes the subsystem easier to use, promoting
decoupling and cleaner code.

Let's delve deeper into how the refactored code with the
Facade design pattern addresses the issues of the original
code.
📌 Issue 1: Direct Interaction with Multiple Subsystems
In the original code, the client had to interact directly with multiple subsystems ( CPU , Memory ,
HardDrive ). This means that the client needed to have knowledge about the intricacies and
operations of each subsystem.

Solution with Facade Pattern: The ComputerFacade class encapsulates the interactions with the
subsystems. The client only interacts with the ComputerFacade , which internally manages the
interactions with the subsystems. This reduces the client's dependency on individual subsystems
and abstracts away the complexity.

📌 Issue 2: Order of Operations is Crucial


In the original code, the client needed to be aware of the specific order in which to call methods
on the subsystems. If the order was wrong, the system would not function correctly.

Solution with Facade Pattern: The ComputerFacade class ensures that the methods are called in
the correct order within its start method. The client doesn't need to worry about the order; they
just call the start method on the facade. This encapsulation ensures that the operations are
always executed in the correct sequence.

📌 Issue 3: Changes in Subsystem Affect Client Code


If any subsystem (e.g., CPU , Memory , HardDrive ) changed its interface or behavior, the client
code would need to be updated, leading to tight coupling between the client and the subsystems.

Solution with Facade Pattern: With the facade in place, any changes to the subsystems can be
managed within the facade itself. The client remains unaffected as it only interacts with the
facade's interface. This promotes loose coupling, where changes in one part of the system don't
ripple through and affect other parts.
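To make that concrete, assume the ComputerFacade from the earlier refactoring and suppose the HardDrive subsystem is replaced by a hypothetical SSD class with the same read() interface (the SSD name and the drive attribute are illustrative renames, not from the original example). The change lives entirely inside the facade, and the client code stays byte-for-byte the same.

```python
class CPU:
    def freeze(self):
        print("CPU is frozen")

    def jump(self, position):
        print(f"Jumping to position {position}")

    def execute(self):
        print("CPU is executing")

class Memory:
    def load(self, position, data):
        print(f"Loading data {data} at position {position}")

class SSD:
    """Hypothetical replacement offering the same read() interface as HardDrive."""
    def read(self, lba, size):
        return f"Reading {size} bytes from SSD at LBA {lba}"

class ComputerFacade:
    def __init__(self):
        self.cpu = CPU()
        self.memory = Memory()
        self.drive = SSD()  # the only change lives inside the facade

    def start(self):
        self.cpu.freeze()
        self.memory.load("0x00", self.drive.read("0x00", "512"))
        self.cpu.jump("0x00")
        self.cpu.execute()

# Client code is unchanged -- it never learns that the drive was swapped
ComputerFacade().start()
```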

📌 Additional Benefits of the Facade Pattern in the Refactored Code:


1. Simplified Interface: The client doesn't need to know about the various methods of the
subsystems. They only see and interact with the simplified methods provided by the facade.

2. Flexibility: In the future, if we want to add more operations or change the way the computer
starts, we can do so within the facade without affecting the client code.

3. Maintainability: With the separation of concerns, it's easier to maintain and modify the
system. If a subsystem needs an upgrade or modification, it can be done without touching
the client code.

In essence, the Facade design pattern in the refactored code provides a shield to the client from
the complexities of the subsystems, ensuring a smooth and simplified interaction.

📌 Real-life Use-Case: Imagine you're building a home automation system. This system has
multiple subsystems like lighting, security, heating, etc. Each of these subsystems can have its own
set of methods and complexities. Instead of letting the client deal with each subsystem separately,
you can provide a facade that offers simplified methods to perform common tasks.

class LightingSystem:
def turn_on(self):
print("Lights turned on")
def turn_off(self):
print("Lights turned off")

class SecuritySystem:
def activate(self):
print("Security system activated")
def deactivate(self):
print("Security system deactivated")

class HeatingSystem:
def set_temperature(self, temp):
print(f"Temperature set to {temp}°C")

class HomeAutomationFacade:
def __init__(self):
self.lighting = LightingSystem()
self.security = SecuritySystem()
self.heating = HeatingSystem()

def leave_home(self):
self.lighting.turn_off()
self.security.activate()
        self.heating.set_temperature(18)  # Set to energy-saving mode

def arrive_home(self):
self.lighting.turn_on()
self.security.deactivate()
        self.heating.set_temperature(22)  # Comfortable temperature

# Client code
home_system = HomeAutomationFacade()
home_system.leave_home()
home_system.arrive_home()

📌 Explanation of the Code:
1. We have three subsystems: LightingSystem , SecuritySystem , and HeatingSystem . Each has its own methods and functionalities.
2. The HomeAutomationFacade class encapsulates these subsystems. It provides simplified methods like leave_home and arrive_home that internally call methods from the subsystems.
3. The client, which in this case is the user of the HomeAutomationFacade , doesn't need to know about the individual subsystems or their methods. They just interact with the simplified interface provided by the facade.
4. When the client calls leave_home , the facade ensures that the lights are turned off, the security system is activated, and the heating is set to an energy-saving mode. Similarly, the arrive_home method sets everything up for a comfortable return.

📌 Under-the-Hood: The facade pattern doesn't change the subsystems; it only provides a
simplified view of them. This is beneficial because it promotes decoupling. The client is decoupled
from the subsystems, meaning changes in the subsystems won't affect the client as long as the
facade's interface remains consistent. This is a principle in software design known as the "Law of
Demeter" or the "principle of least knowledge", which promotes minimal knowledge of an object
about other connected objects.

In Python, the facade pattern is often seen in libraries and frameworks where the internal
workings are abstracted away, and only a simple, intuitive, and Pythonic API is exposed to the end-
users. This ensures that users can quickly and effectively use the tool without needing to
understand its complexities.
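As a small illustration of that style, here is a hypothetical facade in the same spirit: a SettingsStore that hides json serialization and pathlib filesystem details behind two calls. All names here are illustrative, not from any real library.

```python
import json
import os
import tempfile
from pathlib import Path

class SettingsStore:
    """Hypothetical facade: callers use save()/load() and never touch
    JSON serialization or filesystem details directly."""
    def __init__(self, filepath):
        self._path = Path(filepath)

    def save(self, settings):
        self._path.write_text(json.dumps(settings))

    def load(self):
        return json.loads(self._path.read_text())

store = SettingsStore(os.path.join(tempfile.gettempdir(), "settings_demo.json"))
store.save({"theme": "dark", "volume": 7})
print(store.load()["theme"])  # dark
```

If the storage format later changed from JSON to something else, only the facade would change; every caller of save() and load() would keep working.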

Let's break down the provided code example and see how it
aligns with the principles and requirements of the facade
pattern.
📌 Unified Interface: The primary goal of the facade pattern is to provide a unified interface to a
set of interfaces in a subsystem. In the provided example, the HomeAutomationFacade class
serves as this unified interface. It wraps around multiple subsystems ( LightingSystem ,
SecuritySystem , and HeatingSystem ) and offers a simpler, more intuitive set of methods
( leave_home and arrive_home ) for the client to use.

class HomeAutomationFacade:
def __init__(self):
self.lighting = LightingSystem()
self.security = SecuritySystem()
self.heating = HeatingSystem()

    def leave_home(self):
        self.lighting.turn_off()
self.security.activate()
self.heating.set_temperature(18)
# Set to energy-saving mode

def arrive_home(self):
self.lighting.turn_on()
self.security.deactivate()
self.heating.set_temperature(22)
# Comfortable temperature

📌 Abstraction of Complexities: The individual subsystems ( LightingSystem , SecuritySystem ,
and HeatingSystem ) each have their own methods and complexities. The client doesn't need to
know when to turn off the lights, activate the security, or set the heating to a specific temperature
when leaving the house. All these details are abstracted away by the facade's leave_home
method. Similarly, the arrive_home method abstracts the steps needed when someone arrives
home.

class LightingSystem:
def turn_on(self):
print("Lights turned on")
def turn_off(self):
print("Lights turned off")

class SecuritySystem:
def activate(self):
print("Security system activated")
def deactivate(self):
print("Security system deactivated")

class HeatingSystem:
def set_temperature(self, temp):
print(f"Temperature set to {temp}°C")

📌 Decoupling: One of the benefits of the facade pattern is the decoupling of the client from the
complex subsystems. In our example, the client interacts only with the HomeAutomationFacade
and remains unaware of the individual subsystems. This means that if there are changes or
updates to the subsystems (e.g., adding new methods or changing internal logic), the client code
remains unaffected as long as the facade's interface ( leave_home and arrive_home methods)
remains consistent.

📌 Flexibility and Ease of Use: The facade pattern makes it easier for clients to use the system.
Instead of calling multiple methods from different subsystems, the client has a single point of
interaction with the system through the facade. In our example, the client doesn't need to
remember the sequence of operations or which subsystem does what. They just call leave_home
when leaving and arrive_home when arriving.

📌 Maintainability: If in the future, we decide to add a new subsystem (e.g., an air conditioning
system) or modify the behavior of an existing one, we can do so within the facade without
affecting the client code. This encapsulation ensures that our system remains maintainable and
scalable.

📌 Consistency: The facade ensures that every time a client calls a method like leave_home , the
same sequence of actions is executed consistently. This prevents potential errors or
inconsistencies that might arise if the client had to manually manage each subsystem.

In summary, the provided code example encapsulates the complexities of multiple subsystems
within a unified interface, adhering to the principles of the facade pattern. It offers a simplified
and consistent interaction for the client, promotes decoupling, and ensures maintainability and
flexibility for future changes.

Example 2 - Check the code below and understand, in detail and step by step,
how it implements the facade design pattern

# pylint: disable=too-few-public-methods
# "The Facade pattern concept"
class SubSystemClassA:
"A hypothetically complicated class"
@staticmethod
def method():
"A hypothetically complicated method"
return "A"

class SubSystemClassB:
"A hypothetically complicated class"
@staticmethod
def method(value):
"A hypothetically complicated method"
return value

class SubSystemClassC:
"A hypothetically complicated class"
@staticmethod
def method(value):
"A hypothetically complicated method"
return value

class Facade():
"A simplified facade offering the services of subsystems"
@staticmethod
def sub_system_class_a():
"Use the subsystems method"
return SubSystemClassA().method()

@staticmethod
def sub_system_class_b(value):
"Use the subsystems method"
return SubSystemClassB().method(value)

@staticmethod
def sub_system_class_c(value):
"Use the subsystems method"
return SubSystemClassC().method(value)

# The Client
# call potentially complicated subsystems directly
print(SubSystemClassA.method())
print(SubSystemClassB.method("B"))
print(SubSystemClassC.method({"C": [1, 2, 3]}))

# or use the simplified facade
print(Facade().sub_system_class_a())
print(Facade().sub_system_class_b("B"))
print(Facade().sub_system_class_c({"C": [1, 2, 3]}))

""" OUTPUTS
A
B
{'C': [1, 2, 3]}
A
B
{'C': [1, 2, 3]}

"""

Alright, let's dissect the provided code step by step to understand its implementation of the facade
design pattern.

📌 Sub-Systems: The code starts by defining three sub-systems: SubSystemClassA ,
SubSystemClassB , and SubSystemClassC . Each of these classes represents a hypothetical
complex system with its own methods.

1. SubSystemClassA has a static method method that returns the string "A".

2. SubSystemClassB has a static method method that takes a value and returns it.

3. SubSystemClassC is similar to SubSystemClassB in that it has a static method method that
takes a value and returns it.

📌 Façade Class: The Facade class is where the facade pattern is implemented. This class
provides a simplified interface to the methods of the sub-systems. Instead of clients having to
interact with each sub-system directly, they can use the methods provided by the Facade class.

1. sub_system_class_a method: This method calls the method of SubSystemClassA and
returns its result.

2. sub_system_class_b method: This method takes a value, calls the method of
SubSystemClassB with that value, and returns the result.

3. sub_system_class_c method: This method is similar to sub_system_class_b , but it
interacts with SubSystemClassC .

📌 Client Interaction: The client (or the user of these classes) has two ways to interact with the
sub-systems:

1. Direct Interaction: The client can call the methods of the sub-systems directly. This is
demonstrated in the lines:
print(SubSystemClassA.method())
print(SubSystemClassB.method("B"))
print(SubSystemClassC.method({"C": [1, 2, 3]}))

Here, the client is directly accessing each sub-system and its methods.
2. Through the Façade: The client can use the Facade class to interact with the sub-systems.
This is demonstrated in the lines:

print(Facade().sub_system_class_a())
print(Facade().sub_system_class_b("B"))
print(Facade().sub_system_class_c({"C": [1, 2, 3]}))

Here, the client is using the simplified interface provided by the Facade class to achieve the
same results as the direct interaction.

📌 Outputs: The outputs of both direct interaction and interaction through the facade are the
same, as shown in the provided output. This demonstrates that the facade provides the same
functionality as the direct interaction but offers a more simplified and unified interface.

In the above implementation, why is it using @staticmethod ?


First, what does the @staticmethod decorator generally do?
📌 The @staticmethod decorator in Python is used to define a static method within a class. A
static method doesn't depend on class instance attributes or methods. This means that it can't
modify the class state or access any instance-specific data. It's a method that belongs to the class
rather than any particular object instance.

📌 Because a static method doesn't depend on instance attributes, it can be called on the class
itself, without creating an instance. For example, if you have a class MyClass with a static method
my_method , you can call it like this: MyClass.my_method() .
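A minimal demonstration of that calling convention (the Greeter class is a made-up example):

```python
class Greeter:
    @staticmethod
    def greeting():
        # No self parameter: the method neither reads nor writes instance state
        return "hello"

print(Greeter.greeting())    # hello  (called on the class itself)
print(Greeter().greeting())  # hello  (also callable on an instance)
```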

In the provided code, @staticmethod is used to define methods that don't require access to
instance-specific data or methods. This means that these methods can be called on the class itself,
without creating an instance of the class.

class SubSystemClassA:
    "A hypothetically complicated class"

    @staticmethod
    def method():
        "A hypothetically complicated method"
        return "A"

class SubSystemClassB:
    "A hypothetically complicated class"

    @staticmethod
    def method(value):
        "A hypothetically complicated method"
        return value

The reason for using @staticmethod in this context:

1. Statelessness: The methods in the subsystem classes ( SubSystemClassA ,
SubSystemClassB , and SubSystemClassC ) and the Facade class don't rely on any instance-
specific state. They don't modify or access any instance attributes. Thus, there's no need for
them to be instance methods.

2. Simplicity: By making these methods static, you can call them directly on the class without
creating an instance. This makes the client code simpler and more intuitive. For example, the
client can call SubSystemClassA.method() directly without first creating an instance of
SubSystemClassA .

However, it's worth noting that in the Facade methods, instances of the subsystem classes are
being created to call their methods ( SubSystemClassA().method() ). This is redundant since the
subsystem methods are static and can be called directly on the class. The more streamlined
approach would be SubSystemClassA.method() without creating an instance.
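For illustration, here is a pared-down sketch of that streamlined approach, reconstructing only two of the subsystem classes discussed above, with the facade calling the static methods directly on the classes:

```python
class SubSystemClassA:
    "A hypothetically complicated class"
    @staticmethod
    def method():
        return "A"

class SubSystemClassB:
    "A hypothetically complicated class"
    @staticmethod
    def method(value):
        return value

class Facade:
    "A simplified interface over the subsystems"
    @staticmethod
    def sub_system_class_a():
        # No instance needed: call the static method on the class directly
        return SubSystemClassA.method()

    @staticmethod
    def sub_system_class_b(value):
        return SubSystemClassB.method(value)

print(Facade.sub_system_class_a())     # A
print(Facade.sub_system_class_b("B"))  # B
```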

In summary, @staticmethod is used in this code to define methods that don't rely on instance-
specific data or methods, allowing for direct method calls on the class itself.

📌 Conclusion: The provided code is a clear implementation of the facade design pattern. The
Facade class abstracts the complexities and specifics of the sub-systems, offering a unified and
simplified interface for the client. The client can choose to interact directly with the sub-systems or
use the facade, but the facade's main advantage is that it provides a consistent and
straightforward way to access the functionalities of the sub-systems.

What did I mean by "Statelessness"?


📌 When I referred to "statelessness," I meant that the methods in question do not rely on or alter
any attributes that are specific to an instance of the class. In other words, these methods don't
need to read or write any instance-specific data to do their job.

📌 In object-oriented programming, "state" usually refers to the data stored in object attributes.
For instance, if you have a Car class, the "state" might include attributes like color , make , and
speed . Methods that change these attributes or behave differently based on these attributes are
considered "stateful."
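To make the distinction concrete, here is a small hypothetical Car sketch with one stateful method and one stateless (static) one:

```python
class Car:
    def __init__(self, color, speed=0):
        self.color = color  # instance state
        self.speed = speed  # instance state

    def accelerate(self, amount):
        # Stateful: reads and writes an instance attribute
        self.speed += amount
        return self.speed

    @staticmethod
    def kmh_to_mph(kmh):
        # Stateless: depends only on its arguments, so it can be static
        return kmh * 0.621371

car = Car("red")
print(car.accelerate(50))   # 50 -- the object's state changed
print(Car.kmh_to_mph(100))  # no instance required at all
```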

📌 In the provided code, the methods within SubSystemClassA , SubSystemClassB ,
SubSystemClassC , and Facade don't use or modify any instance-specific attributes. They perform
actions that are not dependent on the state of an object instance. That's why they are defined as
static methods.

Why is this relevant to the facade pattern example?


📌 In the context of the facade pattern, the methods in the Facade class and the subsystem
classes are serving as simplified interfaces to more complex operations. These operations, as
defined, don't need to access or modify any instance-specific data. Therefore, making them static
methods makes sense.

📌 By making them static, the code signals to the reader that these methods are utility-like
methods that don't rely on the state of an instance. This can make the code easier to understand
and reason about.


Example 2 - Real life use case of Design Pattern in Python
Let's consider a real-life scenario involving a multimedia system in a smart home. This system can
control the TV, the audio system, and even streaming services. Each of these components can
have its own set of operations and complexities. We'll use the facade pattern to simplify the
interaction with this multimedia system.

# Subsystems

class Television:
    def turn_on(self):
        print("TV turned on")

    def turn_off(self):
        print("TV turned off")

    def set_channel(self, channel):
        print(f"TV channel set to {channel}")

class AudioSystem:
    def turn_on(self):
        print("Audio system turned on")

    def turn_off(self):
        print("Audio system turned off")

    def set_volume(self, level):
        print(f"Volume set to {level}")

class StreamingService:
    def login(self, username, password):
        print(f"Logged in as {username}")

    def search_movie(self, movie_name):
        print(f"Searching for {movie_name}")

    def play_movie(self, movie_name):
        print(f"Playing {movie_name}")

    def stop_movie(self):
        print("Stopping current movie")

# Facade

class MultimediaSystemFacade:
    def __init__(self):
        self.tv = Television()
        self.audio = AudioSystem()
        self.stream = StreamingService()

    def watch_movie(self, username, password, movie_name):
        print("Setting up to watch a movie...")
        self.tv.turn_on()
        self.audio.turn_on()
        self.audio.set_volume(50)
        self.stream.login(username, password)
        self.stream.search_movie(movie_name)
        self.stream.play_movie(movie_name)

    def end_movie(self):
        print("Shutting down after watching movie...")
        self.stream.stop_movie()
        self.tv.turn_off()
        self.audio.turn_off()

# Client code

multimedia_system = MultimediaSystemFacade()
multimedia_system.watch_movie("john_doe", "password123", "Inception")
print("\nMovie ended or interrupted by user.\n")
multimedia_system.end_movie()

📌 Explanation:
1. Subsystems:

Television : Represents the TV with operations to turn it on/off and set a channel.

AudioSystem : Represents the audio or sound system with operations to turn it on/off
and set the volume.

StreamingService : Represents a streaming platform (like Netflix) where you can log in,
search for a movie, and play it.

2. Facade - MultimediaSystemFacade:

This class encapsulates the complexities of the three subsystems. It provides two main
methods: watch_movie and end_movie .

watch_movie : Sets up everything for watching a movie. It turns on the TV and audio
system, logs into the streaming service, searches for the desired movie, and plays it.

end_movie : Used when the movie ends or is interrupted. It stops the movie, turns off
the TV, and shuts down the audio system.

3. Client Code:

The client wants to watch a movie. Instead of interacting with each subsystem
separately (TV, audio system, streaming service), the client uses the
MultimediaSystemFacade to watch and end a movie. This simplifies the process and
ensures that all steps are executed in the correct order.

This example demonstrates how the facade pattern can be used to simplify a complex process by
providing a unified interface that abstracts the underlying complexities.
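As a side note, one optional refinement (my own sketch, not part of the original example) is to expose such a facade as a context manager, so that the shutdown steps run even if something fails mid-movie. The class and method names below are hypothetical:

```python
class Television:
    def turn_on(self): print("TV turned on")
    def turn_off(self): print("TV turned off")

class AudioSystem:
    def turn_on(self): print("Audio system turned on")
    def turn_off(self): print("Audio system turned off")

class MovieSession:
    """A facade variant that guarantees teardown via the `with` protocol."""
    def __init__(self):
        self.tv = Television()
        self.audio = AudioSystem()

    def __enter__(self):
        self.tv.turn_on()
        self.audio.turn_on()
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit *and* when an exception propagates
        self.tv.turn_off()
        self.audio.turn_off()
        return False  # do not suppress exceptions

with MovieSession():
    print("Watching the movie...")
```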

Example 3 - Real life use case of Design Pattern in Python

An example from a real-life, well-known GitHub code base of the facade design pattern in Python
The Facade design pattern provides a unified interface to a set of interfaces in a subsystem. It
defines a higher-level interface that makes the subsystem easier to use.

One of the most well-known Python projects on GitHub is the requests library, which provides
methods to send HTTP requests. While requests doesn't strictly implement the Facade pattern in
the classical sense, it does simplify the process of making HTTP requests in Python by abstracting
away the complexities of lower-level libraries like http.client or urllib .

Here's a simplified view of how requests can be seen as a facade:

1. Without requests , making an HTTP GET request might look something like this using
http.client :

import http.client

conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")
response = conn.getresponse()
data = response.read()
print(data)
conn.close()

2. With requests , the same HTTP GET request is simplified:

import requests

response = requests.get("https://www.example.com/")
print(response.text)

In the above example, the requests.get() method acts as a facade that hides the underlying
complexity of establishing a connection, sending the request, and retrieving the response. The
user doesn't need to know about the details of http.client or any other underlying library --
they just use the simplified interface provided by requests .

While this isn't a textbook example of the Facade pattern, it demonstrates the core principle:
providing a simpler, unified interface to a more complex underlying system.
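To mirror that idea in miniature, here is a hypothetical one-function facade, http_get , wrapped around http.client (a sketch only; requests itself is built on urllib3 rather than calling http.client like this):

```python
import http.client

def http_get(host, path="/"):
    """Facade: hide connection setup, request, read, and cleanup behind one call."""
    conn = http.client.HTTPSConnection(host)
    try:
        conn.request("GET", path)
        response = conn.getresponse()
        return response.status, response.read()
    finally:
        # The caller never has to remember to close the connection
        conn.close()

# status, body = http_get("www.example.com")  # performs a real network call
```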

You can find its GitHub repository here

https://github.com/psf/requests

Once you're there, you can navigate through the source code to see how the library abstracts
away the complexities of making HTTP requests in Python. The main logic is contained within the
requests folder in the repository. The api.py file, in particular, provides the high-level functions
like get() , post() , etc., that most users interact with.

from . import sessions


def request(method, url, **kwargs):
    # By using the 'with' statement we are sure the session is closed, thus we
    # avoid leaving sockets open which can trigger a ResourceWarning in some
    # cases, and look like a memory leak in others.
    with sessions.Session() as session:
        return session.request(method=method, url=url, **kwargs)


def get(url, params=None, **kwargs):
    r"""Sends a GET request.

    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary, list of tuples or bytes to send
        in the query string for the :class:`Request`.
    :param \*\*kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    return request("get", url, params=params, **kwargs)

🐍🚀 Flyweight design pattern in Python 🐍🚀

What is the flyweight pattern?

The flyweight pattern addresses performance challenges in object-oriented systems due to the
cost of object instantiation. These challenges often arise in resource-constrained environments
like smartphones or in systems with a vast number of concurrent objects and users. This pattern
promotes memory efficiency by maximizing resource sharing among similar objects.

Creating a new object requires additional memory allocation. While virtual memory theoretically
offers limitless memory, practical constraints exist. If a system's physical memory is fully utilized, it
swaps data with secondary storage, typically an HDD, leading to performance degradation.
Although SSDs outperform HDDs, they aren't universally adopted and won't fully replace HDDs in
the near future. Performance isn't solely about memory. For instance, graphics applications, such
as video games, need to swiftly display 3D content like dense forests or populated urban scenes.

Without data sharing, rendering each 3D object separately would be inefficient. Therefore, instead
of relying on hardware upgrades, software engineers should employ strategies like the flyweight
pattern to optimize memory and boost performance by facilitating data sharing among alike
objects.

Instead of creating thousands of objects that share common attributes, which would consume a
large amount of memory and other resources, you can modify your classes so that many logical
instances share a single underlying object through some kind of reference to the shared object.

The best example to describe this is a document containing many words and sentences, made
up of many letters. Rather than storing a new object for each individual letter, describing its font,
position, colour, padding and many other potential attributes, you can store just a lookup id of a
character in a collection of some sort and then dynamically create the object with its proper
formatting etc., only as you need to.

The Flyweight pattern is all about sharing to save resources, especially in contexts where objects
have a lot of shared data. Here are some real-world examples:

1. Text Editor:

Shared State (Intrinsic): Each character's glyph representation. For instance, the letter
'A' in font 'Times New Roman' at size '12' will always look the same.

Unique State (Extrinsic): The position of each character in the document, its specific
color if highlighted, etc.

2. Train Reservation System:

Shared State (Intrinsic): The blueprint of a train model, like the seating arrangement,
number of coaches, etc.

Unique State (Extrinsic): The passengers in each seat, the destination of each train
instance, current speed, etc.

3. Building a Virtual World or MMO (Massively Multiplayer Online game):

Shared State (Intrinsic): The models for buildings, trees, creatures, and other static
elements that are common throughout the world.

Unique State (Extrinsic): The position, rotation, and scale of each instance of these
models in the world.

4. Web System Caching:

Shared State (Intrinsic): Common static assets like logos, standard icons, common
scripts, or stylesheets.

Unique State (Extrinsic): The context in which these assets are loaded, like the specific
user, the page they're on, etc.

5. Airport Check-in Kiosk:

Shared State (Intrinsic): The software and UI/UX design common to all kiosks.

Unique State (Extrinsic): The current user's flight details, their personal information,
the specific ads or promotions they might see.
6. Digital Art Software:

Shared State (Intrinsic): Common brushes, patterns, and textures that artists can use.

Unique State (Extrinsic): The specific artwork, the layers, the modifications made using
the brushes, etc.

7. E-commerce Platforms:

Shared State (Intrinsic): Product templates, especially for products that have common
designs but different customizations (e.g., T-shirts with different prints).

Unique State (Extrinsic): The specific customizations, the user who ordered, the
quantity, etc.

In all these examples, the Flyweight pattern can be employed to ensure that the shared data
(intrinsic state) is stored once and reused, while the unique data (extrinsic state) is managed
separately. This approach can lead to significant memory savings, especially in systems where the
number of objects can be very large.

Components:

Flyweight: This is the shared object. It contains the shared state (intrinsic state) and methods
to manipulate this state.

ConcreteFlyweight: This is a subclass of Flyweight and includes the specific shared objects.

FlyweightFactory: This is responsible for creating and managing the Flyweight objects. It
ensures that flyweights are shared properly.

Client: This uses the Flyweight factory to request instances.
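These components can be sketched generically (all names here are hypothetical placeholders):

```python
class Flyweight:
    """Holds the shared (intrinsic) state."""
    def __init__(self, shared_state):
        self.shared_state = shared_state

    def operation(self, unique_state):
        # Extrinsic state is supplied by the client at call time
        return f"shared={self.shared_state}, unique={unique_state}"

class FlyweightFactory:
    """Creates flyweights and guarantees they are shared."""
    _pool = {}

    @classmethod
    def get(cls, shared_state):
        if shared_state not in cls._pool:
            cls._pool[shared_state] = Flyweight(shared_state)
        return cls._pool[shared_state]

# Client: two requests for the same intrinsic state yield one object
fw1 = FlyweightFactory.get("green")
fw2 = FlyweightFactory.get("green")
print(fw1 is fw2)                   # True
print(fw1.operation("pos=(3, 4)"))  # shared=green, unique=pos=(3, 4)
```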

Intrinsic vs. Extrinsic State: The state of a flyweight object can be divided into two:

Intrinsic State: This is the shared state, stored in the Flyweight object. It's independent of
the Flyweight's context, meaning it doesn't change and can be shared across multiple
contexts.

Extrinsic State: This is the non-shared state, which is stored or computed by client objects.
The client objects pass this state to the Flyweight when they invoke its methods.

A perfect example of the Flyweight Pattern is the Python intern() function. It's a builtin in Python 2
which was moved into the sys module in Python 3. When you pass it a string, it returns an exactly
equal string. Its advantage is that it saves space: no matter how many different string objects you
pass it for a particular value like 'abccdz', it returns the same 'abccdz' object each time.
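We can demonstrate this with sys.intern , building the strings at runtime so the compiler cannot merge equal literals on its own:

```python
import sys

# Build two equal strings at runtime so they start out as separate objects
s1 = "".join(["abc", " ", "cdz"])
s2 = "".join(["abc", " ", "cdz"])

print(s1 == s2)                          # True  -- equal values
print(s1 is s2)                          # False -- two separate objects
print(sys.intern(s1) is sys.intern(s2))  # True  -- one shared object
```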

Let's see an example WITHOUT and then WITH the "Flyweight
design pattern in Python"
📌 Without Flyweight Pattern:
Consider a scenario where we're building a game that has thousands of trees. Each tree has some
common attributes like color , texture , and height . Without the Flyweight pattern, we might
represent each tree as an individual object, even if many trees share the same attributes.
class Tree:
    def __init__(self, color, texture, height):
        self.color = color
        self.texture = texture
        self.height = height

    def display(self, x, y):
        print(f"Displaying tree of color {self.color} and height {self.height} at position ({x}, {y})")

# Creating 1000 trees
trees = []
for _ in range(1000):
    trees.append(Tree("green", "rough", 5))

# Displaying all trees
for tree in trees:
    tree.display(10, 20)

📌 Issues: 1. Even if two trees have the same attributes, they are stored as separate objects in
memory. 2. This approach is memory-inefficient, especially when dealing with thousands or
millions of objects.

📌 With Flyweight Pattern:


The Flyweight pattern suggests that we separate the intrinsic (shared) state from the extrinsic
(unique) state. In our case, the intrinsic state includes attributes like color , texture , and
height , while the extrinsic state might be the x and y position of the tree.

class TreeType:
    _instances = {}

    def __new__(cls, color, texture, height):
        if (color, texture, height) in cls._instances:
            return cls._instances[(color, texture, height)]
        instance = super(TreeType, cls).__new__(cls)
        cls._instances[(color, texture, height)] = instance
        return instance

    def __init__(self, color, texture, height):
        self.color = color
        self.texture = texture
        self.height = height

    def display(self, x, y):
        print(f"Displaying tree of color {self.color} and height {self.height} at position ({x}, {y})")

class Tree:
    def __init__(self, x, y, tree_type):
        self.x = x
        self.y = y
        self.tree_type = tree_type

    def display(self):
        self.tree_type.display(self.x, self.y)

# Creating 1000 trees
tree_type = TreeType("green", "rough", 5)
trees = [Tree(10, 20, tree_type) for _ in range(1000)]

# Displaying all trees
for tree in trees:
    tree.display()

📌 Advantages: 1. We've separated the intrinsic and extrinsic states. The intrinsic state
( TreeType ) is shared among all trees of the same type. 2. Memory usage is significantly reduced
since we're sharing the intrinsic state among multiple objects. 3. The TreeType class uses a
singleton-style cache (one instance per unique attribute combination, sometimes called a
multiton) to ensure that only one instance of a particular tree type exists.

By implementing the Flyweight pattern, we've optimized our game's memory usage without
compromising the functionality.

Let's delve into the details of how the refactored code with
the Flyweight design pattern addresses the issues of the
original code.
📌 Shared Intrinsic State: In the refactored code, we introduced a new class called TreeType .
This class represents the intrinsic state (shared attributes) of the trees, which includes color ,
texture , and height . By doing this, we ensure that for every unique combination of these
attributes, only one instance of TreeType is created and stored in memory.

📌 Singleton Pattern in TreeType : The TreeType class uses a dictionary called _instances to
keep track of the created instances based on the intrinsic attributes. The __new__ method checks
if an instance with the given attributes already exists. If it does, it returns the existing instance;
otherwise, it creates a new one. This ensures that we don't create multiple instances for trees with
the same attributes, thus saving memory.
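This caching can be verified directly. One subtlety worth knowing: Python still calls __init__ on whatever __new__ returns, so the attributes are harmlessly re-assigned on every construction. A minimal sketch:

```python
class TreeType:
    _instances = {}

    def __new__(cls, color, texture, height):
        key = (color, texture, height)
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]

    def __init__(self, color, texture, height):
        # Called on every construction, even when __new__ returned a cached object
        self.color, self.texture, self.height = color, texture, height

t1 = TreeType("green", "rough", 5)
t2 = TreeType("green", "rough", 5)
t3 = TreeType("brown", "smooth", 3)
print(t1 is t2)                  # True  -- same cached instance
print(t1 is t3)                  # False -- different intrinsic state
print(len(TreeType._instances))  # 2
```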

📌 Separation of Extrinsic State: The Tree class now only holds the extrinsic state, which is the
x and y position of the tree. This separation allows us to have multiple Tree objects with
different positions but share the same TreeType instance if they have the same intrinsic
attributes. This approach drastically reduces the memory footprint, especially when dealing with a
large number of trees with similar attributes.

📌 Memory Efficiency: In the original code, if we had 1000 trees with the same attributes, we
would have 1000 separate objects, each storing its own color , texture , and height . In the
refactored code, these 1000 trees would share a single TreeType instance for their intrinsic
attributes, while only their positions (extrinsic state) would be stored separately. This results in a
significant reduction in memory usage.

📌 Scalability: The Flyweight pattern's memory-saving benefits become even more pronounced
as the number of objects increases. If our game were to have millions of trees, the original
approach would be highly inefficient, leading to excessive memory consumption. With the
Flyweight pattern, the memory usage would grow linearly with the number of unique tree types
(intrinsic states) rather than the total number of trees.

📌 Maintainability: By separating the intrinsic and extrinsic states, the code becomes more
modular and easier to maintain. If we need to add more attributes or methods related to the
intrinsic state of the trees, we can do so in the TreeType class without affecting the Tree class,
which deals with the extrinsic state.

In summary, the refactored code with the Flyweight design pattern efficiently addresses the
memory inefficiency issues of the original code by sharing the intrinsic state among similar
objects, leading to significant memory savings, improved scalability, and better maintainability.

Example-1 - Real-life Use Case:

Imagine a word processor application. Each character in a document could be an object. If you
consider the number of characters in a large document, the number of objects can be massive.
However, there are only a limited number of unique characters. So, instead of creating a new
object for every character in the document, you can use the Flyweight pattern to create an object
for each unique character and share it every time that character appears in the document.

Python Implementation:

class Character:
    def __init__(self, char):
        self.char = char  # Intrinsic state

    def render(self, font):  # Extrinsic state passed as an argument
        print(f"Rendering character {self.char} with font {font}")

class CharacterFactory:
    _characters = dict()

    def get_character(self, char):
        if char not in self._characters:
            char_obj = Character(char)
            self._characters[char] = char_obj
        return self._characters[char]

# Client code
factory = CharacterFactory()

char_a1 = factory.get_character('A')
char_a1.render('Font1')

char_a2 = factory.get_character('A')
char_a2.render('Font2')

print(char_a1 is char_a2)  # True

In the above code, even though we requested the character 'A' twice, only one object was created
and shared between the two requests.

Let's dissect the provided code example to understand how it adheres to the principles and
requirements of the Flyweight design pattern:

1. Shared State (Intrinsic State): In the code, the Character class represents the Flyweight. The
character itself ( self.char ) is the intrinsic state. This state is shared among all instances of the
same character.

class Character:
    def __init__(self, char):
        self.char = char  # Intrinsic state

2. Non-shared State (Extrinsic State): The render method of the Character class takes a font
parameter. This font is an example of the extrinsic state. It's not stored within the Flyweight object
but is instead passed by the client when needed.

    def render(self, font):  # Extrinsic state passed as an argument
        print(f"Rendering character {self.char} with font {font}")

3. Flyweight Factory: The CharacterFactory class acts as the Flyweight Factory. It's responsible
for creating and managing Flyweight objects. When a client requests a character, the factory first
checks if it already has that character. If it does, it returns the existing object; otherwise, it creates
a new one. This ensures that for each unique character, only one object instance is ever created
and maintained.

class CharacterFactory:
    _characters = dict()

    def get_character(self, char):
        if char not in self._characters:
            char_obj = Character(char)
            self._characters[char] = char_obj
        return self._characters[char]

4. Object Sharing: In the client code, even though we request the character 'A' twice, the factory
ensures that only one object is created for 'A'. This object is then shared for both requests,
adhering to the principle of object sharing in the Flyweight pattern.

char_a1 = factory.get_character('A')
char_a1.render('Font1')

char_a2 = factory.get_character('A')
char_a2.render('Font2')

Both char_a1 and char_a2 point to the same object in memory.

5. Separation of Intrinsic and Extrinsic State: The design ensures that the intrinsic state (the
character itself) is kept within the Flyweight, while the extrinsic state (the font) is kept outside and
passed in when needed. This separation allows the intrinsic state to be shared while still providing
flexibility in how the object is used.

6. Memory Efficiency: By ensuring that only one object is created for each unique character, the
design minimizes memory usage. If you were to scale this example to represent a document with
thousands or millions of characters, the memory savings would be significant.

In conclusion, the provided code example adheres to the principles of the Flyweight design
pattern by ensuring shared use of objects with intrinsic states, managing object creation and
sharing through a factory, and efficiently handling extrinsic states outside the shared objects.

Example-2 - Real-life Use Case: Let's consider a graphics rendering system for a game or
simulation where we need to display trees in a forest. Each tree can have different types (e.g., oak,
pine, birch) and different states (e.g., position, size, age). However, the graphical representation
(texture, model) of each tree type is shared among all trees of the same type.

Flyweight Design Pattern for Rendering Trees in a Forest:

# Flyweight class
class TreeType:
    def __init__(self, name, texture, color):
        self.name = name        # Intrinsic state
        self.texture = texture  # Intrinsic state
        self.color = color      # Intrinsic state

    def render(self, x, y, size, age):  # Extrinsic state passed as arguments
        print(f"Rendering a {self.name} tree of size {size} and age {age} at ({x}, {y}) with texture {self.texture} and color {self.color}")

# Flyweight Factory
class TreeFactory:
    _tree_types = dict()

    @classmethod
    def get_tree_type(cls, name, texture, color):
        # Key on the full combination so that two types sharing a name
        # but differing in texture or color don't collide
        key = (name, texture, color)
        if key not in cls._tree_types:
            cls._tree_types[key] = TreeType(name, texture, color)
        return cls._tree_types[key]

# Client class
class Forest:
    def __init__(self):
        self.trees = []

    def plant_tree(self, x, y, name, texture, color, size, age):
        tree_type = TreeFactory.get_tree_type(name, texture, color)
        self.trees.append((tree_type, x, y, size, age))

    def render_forest(self):
        for tree_type, x, y, size, age in self.trees:
            tree_type.render(x, y, size, age)

# Client code
forest = Forest()

# Planting 3 trees of the same type but different sizes and ages
forest.plant_tree(1, 2, "Pine", "PineTexture", "Green", 5, 10)
forest.plant_tree(5, 7, "Pine", "PineTexture", "Green", 6, 12)
forest.plant_tree(8, 9, "Pine", "PineTexture", "Green", 7, 15)

# Planting 2 trees of a different type
forest.plant_tree(3, 3, "Oak", "OakTexture", "DarkGreen", 8, 20)
forest.plant_tree(6, 6, "Oak", "OakTexture", "DarkGreen", 9, 25)

forest.render_forest()

Explanation:

1. TreeType is the Flyweight class. The intrinsic states (name, texture, color) are the shared
attributes among trees of the same type.

2. TreeFactory is the Flyweight Factory. It ensures that for each unique tree type, only one
TreeType object is created.

3. Forest is the client class. It uses the TreeFactory to get tree types and stores them along with
their extrinsic states (position, size, age).

4. In the client code, even though we plant multiple trees of the same type, only one TreeType
object is created for each unique tree type. This ensures memory efficiency, especially if we
were to render thousands or millions of trees.

This design allows us to efficiently render a vast forest with various tree types, sizes, and ages
while minimizing memory usage by sharing the graphical representation of each tree type.
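We can verify the sharing with a pared-down version of the factory (keyed here on the full (name, texture, color) tuple) by counting the cached objects after planting many trees:

```python
class TreeType:
    def __init__(self, name, texture, color):
        self.name, self.texture, self.color = name, texture, color

class TreeFactory:
    _tree_types = {}

    @classmethod
    def get_tree_type(cls, name, texture, color):
        key = (name, texture, color)
        if key not in cls._tree_types:
            cls._tree_types[key] = TreeType(name, texture, color)
        return cls._tree_types[key]

# Plant 10,000 "Pine" trees: the extrinsic state (position) varies,
# but the intrinsic TreeType object does not
trees = [(TreeFactory.get_tree_type("Pine", "PineTexture", "Green"), x, x + 1)
         for x in range(10_000)]

print(len(TreeFactory._tree_types))  # 1    -- one shared TreeType
print(trees[0][0] is trees[-1][0])   # True -- first and last tree share it
```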

Advantages:

Memory Savings: This is the primary advantage. By sharing objects, you can save a
significant amount of memory, especially in applications where object instantiation is costly.

Faster Operations: Fewer objects mean fewer instances to manage and faster lookups.

Disadvantages:

Complexity: Introducing the Flyweight pattern can increase the complexity of the system,
especially when managing the shared and non-shared states.

State Management: The management of the extrinsic state can become cumbersome, as it's
maintained outside the Flyweight.

When to Use:

When you have a large number of similar objects, and the memory cost is a concern.

When the majority of each object's state can be made extrinsic.

For Example-2 above, let's break down the provided code
example in relation to the principles and requirements of
the Flyweight design pattern:
1. Shared State (Intrinsic State): The TreeType class in the code represents the Flyweight. The
attributes name , texture , and color are the intrinsic states. These attributes are shared among
all trees of the same type, texture, and color.

class TreeType:
    def __init__(self, name, texture, color):
        self.name = name        # Intrinsic state
        self.texture = texture  # Intrinsic state
        self.color = color      # Intrinsic state

2. Non-shared State (Extrinsic State): The render method of the TreeType class takes x , y ,
size , and age as parameters. These represent the extrinsic states. They are unique to each tree
instance and are not stored within the Flyweight object but are instead passed by the client when
needed.

    def render(self, x, y, size, age):  # Extrinsic state passed as arguments
        print(f"Rendering a {self.name} tree of size {size} and age {age} at ({x}, {y}) with texture {self.texture} and color {self.color}")

3. Flyweight Factory: The TreeFactory class acts as the Flyweight Factory. Its role is to manage
the creation and retrieval of TreeType objects. When a client requests a tree type, the factory first
checks if it already has that specific combination of name, texture, and color. If it does, it returns
the existing object; otherwise, it creates a new one. This ensures that for each unique
combination, only one object is ever created and maintained.

class TreeFactory:
    _tree_types = dict()

    @classmethod
    def get_tree_type(cls, name, texture, color):
        key = (name, texture, color)
        if key not in cls._tree_types:
            cls._tree_types[key] = TreeType(name, texture, color)
        return cls._tree_types[key]

4. Object Sharing: In the client code, when we plant multiple trees of the same type (e.g., "Pine"
with "PineTexture" and "Green" color), the factory ensures that only one TreeType object is
created for that combination. This object is then shared for all trees of that type, adhering to the
principle of object sharing in the Flyweight pattern.

forest.plant_tree(1, 2, "Pine", "PineTexture", "Green", 5, 10)
forest.plant_tree(5, 7, "Pine", "PineTexture", "Green", 6, 12)
forest.plant_tree(8, 9, "Pine", "PineTexture", "Green", 7, 15)

All the above trees use the same TreeType object for the combination of "Pine", "PineTexture",
and "Green".
5. Separation of Intrinsic and Extrinsic State: The design ensures that the intrinsic state (name,
texture, color) is kept within the Flyweight ( TreeType ), while the extrinsic state (x, y, size, age) is
kept outside and passed in when needed. This separation allows the intrinsic state to be shared
while still providing flexibility in how the object is used.

6. Memory Efficiency: By ensuring that only one object is created for each unique combination of
name, texture, and color, the design minimizes memory usage. This is especially beneficial if the
system has to manage a large forest with thousands or millions of trees.

Example 3 - Real life use case of Design Pattern in Python:
Vehicle Registration System
Let's delve into a Vehicle Registration System. In this system, multiple vehicles can have the
same make, model, and color, but they will have unique registration numbers, owners, and other
attributes.

Flyweight Design Pattern for Vehicle Registration System:

# Flyweight class
class VehicleModel:
    def __init__(self, make, model, color):
        self.make = make    # Intrinsic state
        self.model = model  # Intrinsic state
        self.color = color  # Intrinsic state

    def display(self, reg_number, owner):
        print(f"Vehicle Make: {self.make}, Model: {self.model}, Color: {self.color}, Registration Number: {reg_number}, Owner: {owner}")

# Flyweight Factory
class VehicleFactory:
    _vehicle_models = dict()

    @classmethod
    def get_vehicle_model(cls, make, model, color):
        key = (make, model, color)
        if key not in cls._vehicle_models:
            cls._vehicle_models[key] = VehicleModel(make, model, color)
        return cls._vehicle_models[key]

# Client class
class VehicleRegistry:
    def __init__(self):
        self.vehicles = []

    def register_vehicle(self, make, model, color, reg_number, owner):
        vehicle_model = VehicleFactory.get_vehicle_model(make, model, color)
        self.vehicles.append((vehicle_model, reg_number, owner))

    def display_registry(self):
        for vehicle_model, reg_number, owner in self.vehicles:
            vehicle_model.display(reg_number, owner)

# Client code
registry = VehicleRegistry()

# Registering two vehicles of the same make, model, and color
# but different owners and registration numbers
registry.register_vehicle("Toyota", "Corolla", "White", "XYZ-1234", "John Doe")
registry.register_vehicle("Toyota", "Corolla", "White", "ABC-5678", "Jane Smith")

# Registering another vehicle of a different make, model, and color
registry.register_vehicle("Honda", "Civic", "Black", "LMN-9012", "Alice Johnson")

registry.display_registry()

Explanation:

1. VehicleModel is the Flyweight class. The intrinsic states are the make, model, and color of
the vehicle. These attributes are shared among vehicles of the same make, model, and color.

2. VehicleFactory is the Flyweight Factory. It ensures that for each unique combination of
make, model, and color, only one VehicleModel object is created.

3. VehicleRegistry is the client class. It uses the VehicleFactory to get vehicle models and stores
them along with their unique attributes (registration number and owner).

4. In the client code, even though we register two vehicles of the same make, model, and color,
only one VehicleModel object is created for that combination. This ensures memory
efficiency.

This design allows us to efficiently manage a large vehicle registry while minimizing memory usage
by sharing the common attributes of vehicles.

Let's break down the provided code example to understand
how it adheres to the principles and requirements of the
Flyweight design pattern:
1. Shared State (Intrinsic State): In the code, the VehicleModel class represents the Flyweight.
The attributes make , model , and color are the intrinsic states. These attributes are shared
among all vehicles of the same make, model, and color.

```python
class VehicleModel:
    def __init__(self, make, model, color):
        self.make = make    # Intrinsic state
        self.model = model  # Intrinsic state
        self.color = color  # Intrinsic state
```

2. Non-shared State (Extrinsic State): The display method of the VehicleModel class takes
reg_number and owner as parameters. These represent the extrinsic states. They are unique to
each vehicle and are not stored within the Flyweight object but are instead passed by the client
when needed.

```python
def display(self, reg_number, owner):
    print(f"Vehicle Make: {self.make}, Model: {self.model}, Color: {self.color}, "
          f"Registration Number: {reg_number}, Owner: {owner}")
```

3. Flyweight Factory: The VehicleFactory class acts as the Flyweight Factory. Its role is to
manage the creation and retrieval of VehicleModel objects. When a client requests a vehicle
model, the factory first checks if it already has that specific combination of make, model, and
color. If it does, it returns the existing object; otherwise, it creates a new one. This ensures that for
each unique combination, only one object is ever created and maintained.

```python
class VehicleFactory:
    _vehicle_models = dict()

    @classmethod
    def get_vehicle_model(cls, make, model, color):
        key = (make, model, color)
        if key not in cls._vehicle_models:
            vehicle_model = VehicleModel(make, model, color)
            cls._vehicle_models[key] = vehicle_model
        return cls._vehicle_models[key]
```

4. Object Sharing: In the client code, even though we register two vehicles of the same make,
model, and color, the factory ensures that only one VehicleModel object is created for that
combination. This object is then shared for both registrations, adhering to the principle of object
sharing in the Flyweight pattern.

```python
registry.register_vehicle("Toyota", "Corolla", "White", "XYZ-1234", "John Doe")
registry.register_vehicle("Toyota", "Corolla", "White", "ABC-5678", "Jane Smith")
```

Both registrations use the same VehicleModel object for the make, model, and color
combination of "Toyota", "Corolla", and "White".
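This sharing can be verified directly with Python's `is` operator. The following quick check (a minimal reproduction of the `VehicleModel` and `VehicleFactory` classes above) shows that two requests with the same intrinsic state return the very same object:

```python
# Minimal reproduction of the Flyweight classes above, to verify object sharing.
class VehicleModel:
    def __init__(self, make, model, color):
        self.make, self.model, self.color = make, model, color

class VehicleFactory:
    _vehicle_models = dict()

    @classmethod
    def get_vehicle_model(cls, make, model, color):
        key = (make, model, color)
        if key not in cls._vehicle_models:
            cls._vehicle_models[key] = VehicleModel(make, model, color)
        return cls._vehicle_models[key]

a = VehicleFactory.get_vehicle_model("Toyota", "Corolla", "White")
b = VehicleFactory.get_vehicle_model("Toyota", "Corolla", "White")
c = VehicleFactory.get_vehicle_model("Honda", "Civic", "Black")

print(a is b)  # True  - the same Flyweight instance is shared
print(a is c)  # False - different intrinsic state, different object
print(len(VehicleFactory._vehicle_models))  # 2 - one object per unique combination
```

Three requests, but only two `VehicleModel` objects ever exist in memory.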

5. Separation of Intrinsic and Extrinsic State: The design ensures that the intrinsic state (make,
model, color) is kept within the Flyweight, while the extrinsic state (registration number, owner) is
kept outside and passed in when needed. This separation allows the intrinsic state to be shared
while still providing flexibility in how the object is used.

6. Memory Efficiency: By ensuring that only one object is created for each unique combination of
make, model, and color, the design minimizes memory usage. This is especially beneficial if the
system has to manage a large number of vehicles.

In conclusion, the provided code example adheres to the principles of the Flyweight design
pattern by ensuring shared use of objects with intrinsic states, managing object creation and
sharing through a factory, and efficiently handling extrinsic states outside the shared objects.

🐍🚀 Object pool Pattern in Python 🐍🚀

A pattern that builds on Singleton is the Object Pool pattern. In this pattern, instead of being
limited to one single object, you can use an object from a pool of objects. The pool size is set
depending on the use case. The Object Pool pattern is commonly seen in applications that have
multiple incoming requests and need to communicate with the database quickly (e.g., backend
apps, stream processing). Having a pool of DB connections allows incoming requests to
communicate with the DB without having to create a new connection (which takes longer) or having
to wait for a singleton object to finish serving other requests. However, note that the connections
must be returned to their initial state after use and before returning to the pool.

📌 The Object Pool Pattern is a creational design pattern that allows objects to be reused rather
than created and destroyed on demand. This is particularly useful when the instantiation of an
object is more expensive in terms of resources or time.

📌 The "pool" in the Object Pool Design Pattern refers to a collection of pre-instantiated objects
that are ready to be used. The idea is to have these objects available so that they can be quickly
borrowed and returned, avoiding the overhead of creating and destroying them repeatedly.

📌 The primary advantage of this pattern is performance optimization. By reusing objects that
have already been created, you save the overhead of re-instantiating them. This is especially
beneficial in scenarios where the cost of initializing an instance is high, the rate of instantiation of
a class is high, the instances are only needed for short periods of time, and instances are only
needed for specific and deterministic times.

📌 The Object Pool Pattern is often used in real-world scenarios like:

- Database connection pools: Creating a new database connection every time one is needed can be time-consuming. Instead, a pool of connections is maintained. When a connection is needed, one is taken from the pool, and when it's done, it's returned to the pool.
- Thread pools: Threads are expensive to start and stop. A thread pool is a collection of worker threads that efficiently execute asynchronous callbacks on behalf of the application.
- Memory allocation: In some systems, it's more efficient to allocate a chunk of memory at once and then divvy it up among many objects, rather than allocating memory for each object individually.

📌 One thing to remember is that when an object is returned to the pool, it should be reset to its
initial state, so it's ready to be used again without any lingering state from its previous use.
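The borrow, reset, and return cycle can be captured in a short generic sketch (illustrative names like `Reusable` and `acquire` are my own, not from the examples that follow). A context manager guarantees the object is reset and put back even if the caller raises an exception:

```python
import queue
from contextlib import contextmanager

class Reusable:
    """Stand-in for an expensive-to-create object (a connection, a parser, ...)."""
    def __init__(self):
        self.state = "clean"

    def reset(self):
        self.state = "clean"

class ObjectPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        for _ in range(size):
            self._available.put(Reusable())

    @contextmanager
    def acquire(self):
        obj = self._available.get()   # blocks if every object is borrowed
        try:
            yield obj
        finally:
            obj.reset()               # wipe any lingering state...
            self._available.put(obj)  # ...and return the object to the pool

pool = ObjectPool(size=1)

with pool.acquire() as first:
    first.state = "dirty"             # caller mutates the borrowed object

with pool.acquire() as second:
    print(second is first, second.state)  # True clean
```

The second borrower receives the same recycled object, but with its state wiped back to the default.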

Let's see an example WITHOUT and then WITH the Object
Pool Pattern in Python
1. Code without the Object Pool Pattern

Let's consider a simple database connection class:

```python
import time

class SimpleDatabaseConnection:
    def __init__(self):
        self.id = id(self)
        print(f"Created new connection with id: {self.id}")

    def query(self, sql_command):
        print(f"Connection {self.id} executing: {sql_command}")
        time.sleep(1)  # Simulating the time taken to execute a query
```

Now, let's simulate a scenario where multiple queries are executed:

```python
def main_without_pool():
    connections = [SimpleDatabaseConnection() for _ in range(5)]

    for conn in connections:
        conn.query("SELECT * FROM users")
```

📌 When you run main_without_pool() , you'll notice that for every query, a new connection is
created. This is inefficient, especially if creating a connection is resource-intensive or
time-consuming.

2. Refactoring with the Object Pool Pattern

Let's implement the Object Pool Pattern:

```python
class ConnectionPool:
    def __init__(self, max_size=5):
        self._connections = [SimpleDatabaseConnection() for _ in range(max_size)]
        self._in_use = []

    def get_connection(self):
        if not self._connections:
            print("All connections are in use. Waiting...")
            time.sleep(2)  # Simulating waiting time
            return self.get_connection()
        conn = self._connections.pop()
        self._in_use.append(conn)
        return conn

    def release_connection(self, conn):
        self._in_use.remove(conn)
        self._connections.append(conn)
```

Now, let's modify our main function to use the connection pool:

```python
def main_with_pool():
    pool = ConnectionPool(max_size=5)

    connections = [pool.get_connection() for _ in range(5)]

    for conn in connections:
        conn.query("SELECT * FROM users")
        pool.release_connection(conn)
```

📌 With the ConnectionPool class, we maintain a list of available connections. When a
connection is requested, it's taken from the available pool and marked as "in use".

📌 If all connections are in use and another connection is requested, the system will wait (in our
case, 2 seconds) and then try again.

📌 After a connection is done being used, it's important to release it back to the pool using
release_connection . This ensures that connections are reused, avoiding the overhead of
creating a new connection every time.

📌 The Object Pool Pattern, as demonstrated, helps in efficiently managing and reusing objects
(like database connections) that are expensive to create. This is especially beneficial in scenarios
where the rate of object creation is high, and objects are only needed for short periods.

By implementing the Object Pool Pattern, we've optimized our system to handle multiple requests
efficiently, reusing existing resources instead of continuously creating new ones.

Let's delve deep into the benefits and improvements
brought about by the Object Pool Pattern in the refactored
code.
📌 Resource Initialization Overhead: In the original code, every time a query was executed, a
new connection was created. Establishing a new connection, especially to databases or other
external systems, can be resource-intensive and time-consuming. This overhead can significantly
slow down applications, especially when multiple connections are needed in quick succession.

In the refactored code with the Object Pool Pattern, a set number of connections are pre-
initialized and stored in the pool. When a connection is needed, it's simply retrieved from the pool,
eliminating the need to establish a new connection every time. This drastically reduces the
resource initialization overhead.

📌 Resource Reusability: The original code lacked a mechanism to reuse existing connections.
Once a connection was used, it was discarded. This not only led to the aforementioned
initialization overhead but also to potential resource wastage.

With the Object Pool Pattern, after a connection is used, it's returned to the pool, making it
available for subsequent requests. This reusability ensures that the system doesn't waste
resources by continuously creating and discarding them.

📌 Resource Limiting: Without the Object Pool Pattern, there's no limit to the number of
connections that can be created. In scenarios with a high number of incoming requests, this could
lead to resource exhaustion, potentially crashing the system or degrading its performance.

In the refactored code, the pool has a max_size , which limits the number of active connections. If
all connections are in use, the system will wait for a connection to be released back to the pool.
This mechanism prevents resource exhaustion and ensures that the system remains stable under
high loads.
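The pools later in this chapter get this limiting behavior from Python's standard-library `queue.Queue`: a `get()` on an empty queue blocks until an object is released, and an optional timeout turns exhaustion into an explicit, catchable error. A small illustration (using plain strings as stand-ins for connections):

```python
import queue

# queue.Queue enforces the limit for us: get() blocks when the pool is empty,
# and a timeout turns exhaustion into an explicit, catchable error.
pool = queue.Queue(maxsize=2)
pool.put("conn-1")
pool.put("conn-2")

c1 = pool.get()
c2 = pool.get()            # both connections are now in use

try:
    pool.get(timeout=0.1)  # would block indefinitely without a timeout
    exhausted = False
except queue.Empty:
    exhausted = True
    print("pool exhausted - no connection available within the timeout")

pool.put(c1)               # releasing one connection...
recycled = pool.get()      # ...makes that same object available again
print(recycled is c1)      # True
```

This is why capping `maxsize` keeps resource usage bounded: a burst of requests waits in line instead of spawning an unbounded number of connections.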

📌 Predictable Performance: In the original code, the performance could be unpredictable,
especially under varying loads. The time taken to establish new connections could lead to
inconsistent response times.

With the Object Pool Pattern, since connections are pre-initialized and reused, the system's
performance becomes more predictable. The time taken to retrieve a connection from the pool is
consistent, leading to more stable and predictable response times.

📌 Resource Cleanup and Maintenance: In the original code, since connections were discarded
after use, there was no mechanism to perform cleanup or maintenance on them.

In the refactored code, before a connection is returned to the pool, it can be reset to its initial
state, ensuring that any lingering data or states from previous operations don't interfere with
subsequent operations. This ensures the integrity and reliability of the connections in the pool.

In conclusion, the Object Pool Pattern in the refactored code addresses several critical issues
present in the original code. It optimizes resource usage, improves performance, and ensures the
system's stability and reliability, especially under high loads.

Real-life use-case code:

Let's consider a scenario where we have a web application that needs to communicate with a
database. Instead of creating a new database connection for every request, we'll use an Object
Pool Pattern to manage and reuse database connections.

```python
import queue

class DatabaseConnection:
    def __init__(self):
        # Simulate a costly database connection setup
        print("Setting up DB Connection...")

    def query(self, sql_command):
        # Simulate a query execution
        print(f"Executing query: {sql_command}")

class DatabaseConnectionPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(DatabaseConnection())

    def get_connection(self):
        connection = self._available.get()
        self._in_use.add(connection)
        return connection

    def release_connection(self, connection):
        self._in_use.remove(connection)
        connection = None  # Reset connection (for demonstration purposes)
        self._available.put(DatabaseConnection())
```

```python
# Usage:
pool = DatabaseConnectionPool(3)

# Simulate getting a connection from the pool
conn1 = pool.get_connection()
conn1.query("SELECT * FROM users")

# Simulate returning the connection to the pool
pool.release_connection(conn1)

# Simulate getting another connection from the pool
conn2 = pool.get_connection()
conn2.query("SELECT * FROM orders")
pool.release_connection(conn2)
```

📌 What this code does:

- We have a DatabaseConnection class that simulates setting up a connection and executing a query.
- The DatabaseConnectionPool class manages a pool of DatabaseConnection objects. It uses a queue to keep track of available connections and a set to keep track of connections currently in use.
- When a connection is requested via get_connection , it's taken from the queue and added to the set of in-use connections.
- After the connection is done being used, it's returned to the pool via release_connection , where it's removed from the in-use set, reset (for demonstration purposes), and a new connection is added to the available queue.
- In the usage example, we create a pool of 3 connections. We then simulate getting a connection, executing a query, and returning the connection to the pool.

📌 The primary advantage of this approach is that we don't have to bear the cost of setting up a
new database connection every time we need one. Instead, we reuse the connections from the
pool, which can lead to significant performance improvements in real-world scenarios where
database operations are frequent.

In the above code, which implements the Object Pool design pattern for DatabaseConnectionPool,
where does the concept of a pool come in?

In the provided code, the "pool" of objects (in this case, DatabaseConnection objects) is
represented by the combination of the self._available queue and the self._in_use set
within the DatabaseConnectionPool class. Together, these two data structures manage the entire
pool of database connections.

Here's a breakdown:

1. The Queue ( self._available ): This queue holds the DatabaseConnection objects that are
currently not in use and are available to be borrowed. When the DatabaseConnectionPool is
initialized with a given size (e.g., 3), it pre-creates that many DatabaseConnection objects
and places them into this queue.

2. The Set ( self._in_use ): This set holds the DatabaseConnection objects that have been
borrowed from the pool and are currently being used. When a connection is "gotten" from
the pool using the get_connection method, it's removed from the self._available queue
and added to this self._in_use set.

The fundamental principle of the Object Pool Design Pattern is indeed being followed here:

Borrowing an Object: When you want a database connection, you call get_connection .
This method takes a DatabaseConnection object from the self._available queue (if one
is available) and adds it to the self._in_use set, indicating that the connection is now in
use.

Returning an Object: After you're done using a database connection, you call
release_connection to return it to the pool. The method removes the connection from the
self._in_use set, resets it (in this case, by setting it to None for demonstration purposes),
and then creates a new DatabaseConnection object to add back to the self._available
queue.

However, there's a slight inefficiency in the release_connection method: instead of reusing the
existing connection, it's discarding it and creating a new one. A more efficient approach would be
to reset the state of the existing connection (if necessary) and then simply return it to the
self._available queue without creating a new one.

So, let's modify the release_connection method to reuse
the existing connection instead of discarding it and creating
a new one.
Here's the improved code:

```python
import queue

class DatabaseConnection:
    def __init__(self):
        # Simulate a costly database connection setup
        print("Setting up DB Connection...")

    def query(self, sql_command):
        # Simulate a query execution
        print(f"Executing query: {sql_command}")

    def reset(self):
        # If there are any states or configurations to reset in the
        # connection, do it here.
        pass

class DatabaseConnectionPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(DatabaseConnection())

    def get_connection(self):
        connection = self._available.get()
        self._in_use.add(connection)
        return connection

    def release_connection(self, connection):
        self._in_use.remove(connection)
        connection.reset()  # Reset the state of the connection
        # Return the existing connection to the available queue
        self._available.put(connection)

# Usage:
pool = DatabaseConnectionPool(3)

# Simulate getting a connection from the pool
conn1 = pool.get_connection()
conn1.query("SELECT * FROM users")

# Simulate returning the connection to the pool
pool.release_connection(conn1)

# Simulate getting another connection from the pool
conn2 = pool.get_connection()
conn2.query("SELECT * FROM orders")
pool.release_connection(conn2)
```

In this improved version:

- We added a reset method to the DatabaseConnection class. This method can be used to reset any state or configurations of the connection if needed.
- In the release_connection method of the DatabaseConnectionPool class, we now call the reset method on the connection to ensure it's in a clean state. Then, we directly return the existing connection to the self._available queue without creating a new one. This makes the object pool more efficient by reusing the existing objects.

In the context of the Object Pool Pattern, when an object is returned to the pool after use, it's
crucial to ensure that any state or configurations associated with that object are reset to their
default or initial values. This ensures that the next time the object is borrowed from the pool, it
doesn't carry over any residual state from its previous use.

For the DatabaseConnection class, the reset method is a placeholder to demonstrate this
concept. In a real-world scenario, a database connection might have various states or
configurations, such as:

1. Transactions: If a transaction was started but not committed or rolled back, it would need to
be terminated to ensure the connection doesn't remain in a transactional state.

2. Session Variables: Some databases allow setting session-specific variables. These would
need to be reset to their defaults.

3. Cursors: If the connection opened any cursors, they might need to be closed.

4. Timeouts or Locks: Any locks or timeouts set by the connection should be cleared.
The reset method would handle all these tasks. In our simplified example, the method is empty
because we're not simulating these complexities. However, in a real-world application, the reset
method would contain the necessary logic to bring the connection back to a clean, default state.

Here's a hypothetical example of what the reset method might look like for a more complex
DatabaseConnection :

```python
def reset(self):
    # End any open transactions
    if self.transaction_open:
        self.rollback_transaction()

    # Reset session variables to defaults
    self.set_session_variables(defaults=True)

    # Close any open cursors
    self.close_all_cursors()

    # Clear any locks or timeouts
    self.clear_locks()
```

This is just a conceptual representation. The actual implementation would depend on the specifics
of the database and the library being used.

Example 2 - Real life use case of Object pool Design Pattern in
Python
Let's consider a rendering engine for a game or simulation. Rendering objects (like characters,
vehicles, or other entities) can be resource-intensive. Instead of creating and destroying these
objects, we can use an Object Pool Pattern to manage and reuse them.

Object Pool for Game Rendering:

```python
import queue

class RenderedObject:
    def __init__(self, object_type):
        self.object_type = object_type
        self.position = (0, 0)
        print(f"Rendering a new {self.object_type}...")

    def set_position(self, x, y):
        self.position = (x, y)

    def render(self):
        print(f"Rendering {self.object_type} at position {self.position}")

class RenderedObjectPool:
    def __init__(self, object_type, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(RenderedObject(object_type))

    def get_object(self):
        rendered_object = self._available.get()
        self._in_use.add(rendered_object)
        return rendered_object

    def release_object(self, rendered_object):
        rendered_object.set_position(0, 0)  # Reset to default position
        self._in_use.remove(rendered_object)
        self._available.put(rendered_object)
```

```python
# Usage:

# Create pools for different game entities
character_pool = RenderedObjectPool("Character", 5)
vehicle_pool = RenderedObjectPool("Vehicle", 3)

# Simulate game rendering
character1 = character_pool.get_object()
character1.set_position(10, 20)
character1.render()

vehicle1 = vehicle_pool.get_object()
vehicle1.set_position(50, 60)
vehicle1.render()

# After rendering, release objects back to the pool
character_pool.release_object(character1)
vehicle_pool.release_object(vehicle1)
```

📌 What this code does:

- We have a RenderedObject class that represents an entity in our game or simulation. It can be positioned and rendered.
- The RenderedObjectPool class manages a pool of RenderedObject objects. It uses a queue to keep track of available objects and a set to keep track of objects currently in use.
- When an object is requested via get_object , it's taken from the queue and added to the set of in-use objects.
- After the object is done being used (e.g., after it's rendered), it's returned to the pool via release_object . Here, we reset its position to a default value, remove it from the in-use set, and add it back to the available queue.
- In the usage example, we create pools for characters and vehicles. We then simulate getting a character and a vehicle, positioning them, rendering them, and then returning them to their respective pools.

📌 This approach is beneficial in gaming or simulation scenarios where there are frequent render
operations. By reusing the rendered objects, we can avoid the overhead of creating and
destroying them repeatedly, leading to smoother rendering and better performance.

Example 3 - Real life use case of Object pool Design Pattern in
Python: web scraping system
Let's delve into a scenario involving a web scraping system. Web scraping often requires making
multiple HTTP requests to fetch data. Establishing a new connection for each request can be
resource-intensive and slow. Using an Object Pool Pattern for managing HTTP sessions can
optimize this process.
Object Pool for Web Scraping Sessions:

```python
import queue
import requests

class WebSession:
    def __init__(self):
        self.session = requests.Session()
        print("Establishing a new web session...")

    def fetch(self, url):
        return self.session.get(url).text

class WebSessionPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(WebSession())

    def get_session(self):
        web_session = self._available.get()
        self._in_use.add(web_session)
        return web_session

    def release_session(self, web_session):
        # In this case, we don't reset anything but just release the session
        self._in_use.remove(web_session)
        self._available.put(web_session)
```

```python
# Usage:
pool = WebSessionPool(3)

# Simulate fetching data from multiple URLs
session1 = pool.get_session()
data1 = session1.fetch("https://example.com/page1")
print("Fetched data from page1")

session2 = pool.get_session()
data2 = session2.fetch("https://example.com/page2")
print("Fetched data from page2")

# Return sessions to the pool
pool.release_session(session1)
pool.release_session(session2)
```

📌 What this code does:

- We have a WebSession class that wraps around requests.Session() . This allows us to maintain a persistent connection to a website and reuse the connection for multiple requests.
- The WebSessionPool class manages a pool of WebSession objects. It uses a queue to keep track of available sessions and a set to keep track of sessions currently in use.
- When a session is requested via get_session , it's taken from the queue and added to the set of in-use sessions.
- After the session is done being used (e.g., after fetching data), it's returned to the pool via release_session . In this case, we don't need to reset any state; we simply release the session.
- In the usage example, we create a pool of 3 sessions. We then simulate fetching data from two different URLs using two different sessions from the pool. After fetching, we return the sessions to the pool.

📌 This approach is beneficial in web scraping scenarios where there are frequent fetch
operations. By reusing the web sessions, we can avoid the overhead of establishing a new
connection for each request, leading to faster data retrieval and reduced resource usage.

Example 4 - Real life use case of Object pool Design Pattern in
Python: video processing system
Let's explore a scenario involving a video processing system. Video processing can be resource-
intensive, especially when dealing with high-resolution videos. Using an Object Pool Pattern for
managing video processors can help in optimizing the processing tasks.

Object Pool for Video Processors:

```python
import queue
import time

class VideoProcessor:
    def __init__(self):
        print("Initializing a new video processor...")

    def process(self, video_data, operation):
        print(f"Processing video with operation: {operation}")
        time.sleep(2)  # Simulating a time-consuming process
        return f"Processed {video_data} with {operation}"

class VideoProcessorPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(VideoProcessor())

    def get_processor(self):
        processor = self._available.get()
        self._in_use.add(processor)
        return processor

    def release_processor(self, processor):
        self._in_use.remove(processor)
        self._available.put(processor)
```

```python
# Usage:
pool = VideoProcessorPool(3)

# Simulate processing multiple videos
processor1 = pool.get_processor()
result1 = processor1.process("video1.mp4", "filter: grayscale")
print(result1)

processor2 = pool.get_processor()
result2 = processor2.process("video2.mp4", "filter: sepia")
print(result2)

# Return processors to the pool
pool.release_processor(processor1)
pool.release_processor(processor2)
```

📌 What this code does:

- We have a VideoProcessor class that simulates the processing of a video. It takes in video data and an operation (like applying a filter) and returns the processed result.
- The VideoProcessorPool class manages a pool of VideoProcessor objects. It uses a queue to keep track of available processors and a set to keep track of processors currently in use.
- When a processor is requested via get_processor , it's taken from the queue and added to the set of in-use processors.
- After the processor is done being used (e.g., after processing a video), it's returned to the pool via release_processor .
- In the usage example, we create a pool of 3 processors. We then simulate processing two different videos using two different processors from the pool. After processing, we return the processors to the pool.

📌 This approach is beneficial in video processing scenarios where there are frequent processing
tasks. By reusing the video processors, we can avoid the overhead of initializing a new processor
for each task, leading to faster processing times and optimized resource usage.

Let's understand a basic concept here - in the above code
implementing the Object Pool design pattern for a video
processing system, where is the pool?
Earlier I said, " The VideoProcessorPool class manages a pool of VideoProcessor objects. It
uses a queue to keep track of available processors and a set to keep track of processors currently
in use."

And as mentioned earlier, the fundamental principle of the Object Pool design pattern is that,
instead of being limited to one single object, you can use an object from a pool of objects.

Let's break it down.

The "pool" in the Object Pool Design Pattern refers to a collection of pre-instantiated objects that
are ready to be used. The idea is to have these objects available so that they can be quickly
borrowed and returned, avoiding the overhead of creating and destroying them repeatedly.

In the VideoProcessorPool example, the pool is represented by two primary data structures: the
queue ( self._available ) and the set ( self._in_use ).

1. The Queue ( self._available ): This is a collection (specifically a queue.Queue ) of
VideoProcessor objects that are not currently in use and are available to be borrowed.
When the pool is initialized, a specified number of VideoProcessor objects are created and
added to this queue. When you want to use a processor, you "get" one from this queue. If the
queue is empty, it means all processors are currently in use.

2. The Set ( self._in_use ): This is a collection (specifically a Python set ) of VideoProcessor
objects that are currently being used. When you "get" a processor from the available queue,
it's added to this set to indicate that it's in use. When you're done with the processor and
"release" it back to the pool, it's removed from this set and added back to the available
queue.

Together, these two collections form the "pool" of objects in the Object Pool Design Pattern. The
queue represents the part of the pool with objects ready to be used, and the set represents the
part of the pool with objects currently in use.

In the context of the Object Pool Design Pattern:

- Borrowing an Object: This is done by getting an object from the self._available queue and
adding it to the self._in_use set.
- Returning an Object: This is done by removing the object from the self._in_use set and
putting it back into the self._available queue.

The fundamental principle mentioned earlier is indeed followed in this design. The pool consists of
multiple VideoProcessor objects, and you can borrow and return any of these objects to and
from the pool.
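To make the borrow-and-return mechanics concrete, here is a minimal, self-contained sketch of such a pool. The VideoProcessor here is a hypothetical stand-in (just a name attribute); a real one would wrap expensive processing resources:

```python
import queue

class VideoProcessor:
    # Hypothetical stand-in for an expensive-to-create processor
    def __init__(self, name):
        self.name = name

class VideoProcessorPool:
    def __init__(self, size):
        self._available = queue.Queue()  # processors ready to be borrowed
        self._in_use = set()             # processors currently borrowed
        for i in range(size):
            self._available.put(VideoProcessor(f"processor-{i}"))

    def acquire(self):
        # Borrowing: move a processor from the available queue to the in-use set
        processor = self._available.get()
        self._in_use.add(processor)
        return processor

    def release(self, processor):
        # Returning: move the processor from the in-use set back to the queue
        self._in_use.remove(processor)
        self._available.put(processor)

pool = VideoProcessorPool(size=2)
p1 = pool.acquire()
p2 = pool.acquire()
pool.release(p1)
pool.release(p2)
```

Note that queue.Queue.get() blocks when the pool is exhausted; a production pool would typically pass a timeout or grow the pool instead.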

🐍🚀 Observer Design Pattern in Python 🐍🚀

📌 The observer pattern is a behavioral design pattern that establishes a one-to-many
dependency between objects. When one object (the subject) changes state, all its dependents
(observers) are notified and updated automatically. This pattern is particularly useful when you
want to decouple the core functionality of your code from the parts that react to changes.

A similar implementation of this design pattern is seen in generating feeds on your social
platforms - the Pub/Sub (Publisher/Subscriber) Model/Pattern. When a content publisher
publishes their posts, the subscribers get notified of the content.

The following are the major differences between the
Observer Pattern and the Pub/Sub Pattern:

In the Observer pattern, Observers and Subjects are tightly coupled: the subjects must keep
track of their observers. In the Pub/Sub pattern, they are loosely coupled, with a message
queue sitting between publishers and subscribers.

The events are passed in a synchronous manner from the Subjects to the Observers. But in
Pub/Sub patterns, the events are passed asynchronously.

In the Observer pattern, both the Subjects and Observers reside on the same application
locality whereas they can reside on different localities in the Pub/Sub pattern.
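To make the contrast concrete, here is a minimal, hypothetical sketch of the Pub/Sub style: a broker (standing in for the message queue) sits between publishers and subscribers, so neither side knows about the other. Real Pub/Sub systems deliver asynchronously through an actual message broker; this synchronous sketch only illustrates the decoupling:

```python
from collections import defaultdict

class Broker:
    # Stand-in for the message queue between publishers and subscribers
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher talks only to the broker, never to subscribers
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("new-post", received.append)
broker.publish("new-post", "Observer pattern explained")
print(received)  # ['Observer pattern explained']
```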

A typical place to use the observer pattern is between your application and presentation layers.
Your application is the manager of the data and is the single source of truth, and when the data
changes, it can update all of the subscribers, that could be part of multiple presentation layers. For
example, the score was changed in a televised cricket game, so all the web browser clients, mobile
phone applications, leaderboard display on the ground and television graphics overlay, can all
now have the updated information synchronized.

📌 Use Cases:
- GUI elements: When a button is clicked (subject), several actions might need to be triggered
in the application (observers).
- Stock market: When the price of a stock changes (subject), multiple investors or tools might
need to be informed (observers).
- Sensor systems: When a sensor detects a change (subject), multiple systems or alarms might
need to be triggered (observers).

📌 The Python interpreter itself doesn't specifically implement the observer pattern, but it
provides all the necessary tools to do so. The dynamic nature of Python, with its first-class
functions and ability to add and remove attributes from objects at runtime, makes it particularly
well-suited for implementing patterns like this.
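For instance, because functions are first-class objects, an observer can be any callable; no class hierarchy is required. A minimal sketch of that idea (the Signal name is illustrative, not from the examples below):

```python
class Signal:
    # Minimal subject: any callable can register as an observer
    def __init__(self):
        self._callbacks = []

    def subscribe(self, callback):
        self._callbacks.append(callback)

    def fire(self, data):
        # Notify every registered callable with the new state
        for callback in self._callbacks:
            callback(data)

signal = Signal()
signal.subscribe(lambda data: print(f"lambda observer saw: {data}"))
signal.fire("state changed")
```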

Let's see an example WITH and then WITHOUT the "Observer
Design Pattern in Python"

Initial Code Without Observer Design Pattern


Consider a simple scenario where we have a blog platform. Users can post articles, and readers
can follow these users to get updates. Without the Observer pattern, this might look something
like:

class Blog:
    def __init__(self):
        self.articles = []
        self.followers = []

    def post_article(self, article):
        self.articles.append(article)
        for follower in self.followers:
            follower.update(article)

    def add_follower(self, follower):
        self.followers.append(follower)

class Follower:
    def __init__(self, name):
        self.name = name

    def update(self, article):
        print(f"{self.name} received update about new article: {article}")

📌 The above code directly couples the Blog and Follower classes. If we want to change the
way followers are notified, we'd have to modify the Blog class.

📌 If we want to introduce different types of followers (e.g., EmailSubscriber, SMSNotifier), we'd
have to modify the Blog class to accommodate each type.

📌 This approach violates the Single Responsibility Principle. The Blog class is responsible for
both managing articles and notifying followers.

Refactored Code With Observer Design Pattern
To implement the Observer pattern, we'll introduce a Subject interface for the Blog and an
Observer interface for the Follower . This decouples the classes and allows for more flexibility:

from abc import ABC, abstractmethod

# Define the Subject interface
class Subject(ABC):
    @abstractmethod
    def attach(self, observer):
        pass

    @abstractmethod
    def detach(self, observer):
        pass

    @abstractmethod
    def notify(self):
        pass

# Define the Observer interface
class Observer(ABC):
    @abstractmethod
    def update(self, message):
        pass

class Blog(Subject):
    def __init__(self):
        self.articles = []
        self.observers = []

    def post_article(self, article):
        self.articles.append(article)
        self.notify()

    def attach(self, observer):
        self.observers.append(observer)

    def detach(self, observer):
        self.observers.remove(observer)

    def notify(self):
        for observer in self.observers:
            observer.update(self.articles[-1])

class Follower(Observer):
    def __init__(self, name):
        self.name = name

    def update(self, article):
        print(f"{self.name} received update about new article: {article}")

📌 We've introduced the Subject and Observer interfaces. This allows us to easily add new
types of subjects or observers without modifying existing code.

📌 The Blog class now only notifies observers. It doesn't need to know how they handle the
updates. This decouples the classes and adheres to the Single Responsibility Principle.

📌 We can now introduce different types of observers (e.g., EmailSubscriber, SMSNotifier) by
simply implementing the Observer interface. The Blog class doesn't need to change.
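As a sketch of what such an addition could look like, here are hypothetical EmailSubscriber and SMSNotifier observers (the Observer base is repeated so the snippet runs on its own, and the actual sending is simulated with print):

```python
from abc import ABC, abstractmethod

class Observer(ABC):
    @abstractmethod
    def update(self, message):
        pass

class EmailSubscriber(Observer):
    # Hypothetical observer: a real one would send an email instead of printing
    def __init__(self, email):
        self.email = email

    def update(self, article):
        print(f"Emailing {self.email}: new article '{article}'")

class SMSNotifier(Observer):
    # Hypothetical observer: a real one would send an SMS instead of printing
    def __init__(self, phone):
        self.phone = phone

    def update(self, article):
        print(f"Texting {self.phone}: new article '{article}'")
```

Either object can be passed to blog.attach(...) exactly like a Follower, because Blog only ever calls update.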

In conclusion, by implementing the Observer Design Pattern, we've decoupled the Blog and
Follower classes, making our code more modular, maintainable, and extensible.

Let's delve deeper into how the refactored code with the Observer Design Pattern addresses
the issues of the original code.
📌 Decoupling of Blog and Follower Classes

Original Issue: In the initial code, the Blog class was directly responsible for updating the
followers. This means that any change in the way followers are notified or any addition of new
types of followers would require modifications to the Blog class.

Solution with Observer Pattern: By introducing the Subject and Observer interfaces, we've
separated the responsibilities. The Blog class (which implements the Subject interface) is only
responsible for maintaining a list of observers and notifying them. It doesn't need to know how
these observers handle the updates. The Follower class (which implements the Observer
interface) is solely responsible for how to react when notified.

📌 Flexibility in Notification Mechanisms


Original Issue: The initial code had a fixed way of notifying followers. If we wanted to introduce
different notification mechanisms (e.g., email, SMS), we would need to make changes to the Blog
class.

Solution with Observer Pattern: With the Observer pattern in place, we can easily introduce new
types of observers (like EmailSubscriber or SMSNotifier ). Each of these new observers would
implement the Observer interface and define their own update method. The Blog class
remains unchanged, as it simply calls the update method on all attached observers.

📌 Adherence to the Single Responsibility Principle


Original Issue: The initial Blog class was doing too much. It was managing articles and was also
directly responsible for notifying followers. This is a violation of the Single Responsibility Principle,
which states that a class should have only one reason to change.

Solution with Observer Pattern: The refactored code ensures that the Blog class is only responsible
for managing articles and maintaining a list of observers. The responsibility of reacting to new
articles is now with the Observer (e.g., Follower ). This separation of concerns means that
changes to how articles are managed won't affect the notification mechanism and vice versa.

📌 Ease of Extensibility
Original Issue: In the initial setup, adding new features or types of followers would require changes
to the core Blog class, making the system less maintainable and more prone to errors.

Solution with Observer Pattern: With the decoupling achieved through the Observer pattern, adding
new features becomes easier. For instance, if we want to introduce a feature where followers can
choose to be notified about specific categories of articles, we can do so by modifying only the
Observer classes without touching the Blog class.
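As an illustrative sketch of that category-based feature, assuming the subject passes the article's category along in the notification, a filtering observer could look like this (hypothetical names, not the book's code):

```python
class CategoryFollower:
    # Observer that only reacts to articles in the categories it cares about
    def __init__(self, name, categories):
        self.name = name
        self.categories = set(categories)
        self.received = []

    def update(self, article, category):
        # The filtering decision lives entirely in the observer
        if category in self.categories:
            self.received.append(article)
            print(f"{self.name} got a '{category}' article: {article}")

fan = CategoryFollower("Alice", ["python", "design-patterns"])
fan.update("Observer in depth", "design-patterns")  # reacts
fan.update("Gardening tips", "lifestyle")           # silently filtered out
```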

In summary, the Observer Design Pattern in the refactored code provides a robust solution to the
issues present in the original code by ensuring decoupling, flexibility, adherence to design
principles, and ease of extensibility.

📌 Example-1 Real-life use-case code:


Let's consider a weather station that measures temperature. Multiple display elements, like a
current conditions display, a statistics display, and a forecast display, need to update whenever
the weather station gets new measurements.

# the subject
class WeatherStation:
    def __init__(self):
        self._observers = []
        self._temperature = 0

    def register_observer(self, observer):
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._temperature)

    def set_temperature(self, temperature):
        self._temperature = temperature
        self.notify_observers()

# represent observers
class CurrentConditionsDisplay:
    def update(self, temperature):
        print(f"Current conditions: {temperature} degrees Celsius")

# represent observers
class StatisticsDisplay:
    def __init__(self):
        self._max_temp = float('-inf')
        self._min_temp = float('inf')

    def update(self, temperature):
        self._max_temp = max(self._max_temp, temperature)
        self._min_temp = min(self._min_temp, temperature)
        print(f"Min/Max temperatures: {self._min_temp}/{self._max_temp}")

# represent observers
class ForecastDisplay:
    def update(self, temperature):
        # Just a dummy forecast based on current temperature
        forecast = "sunny" if temperature > 20 else "rainy"
        print(f"Forecast: {forecast}")

Usage:

station = WeatherStation()

current_display = CurrentConditionsDisplay()
statistics_display = StatisticsDisplay()
forecast_display = ForecastDisplay()

station.register_observer(current_display)
station.register_observer(statistics_display)
station.register_observer(forecast_display)

station.set_temperature(25)
station.set_temperature(18)

📌 Explanation of the code:


We have a WeatherStation class that represents the subject. It maintains a list of observers
and provides methods to add, remove, and notify them.

The CurrentConditionsDisplay , StatisticsDisplay , and ForecastDisplay classes


represent observers. They all have an update method, which gets called when the
WeatherStation changes its temperature.

In the usage example, we create a weather station and three display elements. We register
the displays as observers to the weather station. When we set a new temperature on the
weather station, all registered displays get updated automatically.

📌 The observer pattern, as shown, allows you to add new types of display elements in the future
without modifying the WeatherStation class. This decoupling is the core advantage of the
pattern. If you wanted to add a new type of display, you'd simply create a new class that
implements the update method and register an instance of it with the weather station.
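As a sketch, here is a hypothetical FreezeAlertDisplay added exactly that way; a compact WeatherStation is repeated so the snippet is self-contained, and nothing about the station changes to support the new display:

```python
class WeatherStation:
    # Compact subject, repeated so this snippet runs on its own
    def __init__(self):
        self._observers = []
        self._temperature = 0

    def register_observer(self, observer):
        self._observers.append(observer)

    def set_temperature(self, temperature):
        self._temperature = temperature
        for observer in self._observers:
            observer.update(temperature)

class FreezeAlertDisplay:
    # Hypothetical new observer: only an update method is required
    def __init__(self):
        self.alerts = 0

    def update(self, temperature):
        if temperature <= 0:
            self.alerts += 1
            print(f"ALERT: freezing temperature {temperature}!")

station = WeatherStation()
alert = FreezeAlertDisplay()
station.register_observer(alert)
station.set_temperature(5)   # no alert
station.set_temperature(-3)  # triggers the alert
```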

Let's see how the above code example adheres to the principles and requirements of the
observer pattern design in Python
📌 One-to-Many Dependency: The observer pattern establishes a one-to-many dependency. In
the provided code, the WeatherStation (the subject) has a one-to-many relationship with its
observers ( CurrentConditionsDisplay , StatisticsDisplay , ForecastDisplay ). This
relationship is evident in the _observers list maintained by the WeatherStation .

# the subject
class WeatherStation:
    def __init__(self):
        self._observers = []
        self._temperature = 0

    def register_observer(self, observer):
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._temperature)

    def set_temperature(self, temperature):
        self._temperature = temperature
        self.notify_observers()

📌 Decoupling the Subject and Observers: The observer pattern emphasizes decoupling. In the
code, the WeatherStation doesn't need to know the specifics of what each observer does. It only
knows that they have an update method. This is a clear separation of concerns. The observers
can change their internal implementation without affecting the WeatherStation , and vice versa.

📌 Ability to Add/Remove Observers at Runtime: The code provides methods
register_observer and remove_observer in the WeatherStation class. These methods allow
observers to be added or removed from the subject at runtime, which is a core feature of the
observer pattern. For instance, if at some point we no longer wanted the ForecastDisplay to
receive updates, we could simply call station.remove_observer(forecast_display) .

📌 Notification of State Changes: When the state of the subject changes (in this case, when the
temperature of the WeatherStation is set), all its observers are notified. This is achieved through
the notify_observers method in the WeatherStation class, which is called inside the
set_temperature method. Each observer's update method is then called with the new
temperature.

📌 Observers Define Their Reactions: Each observer decides how to react when notified of a
change. This is evident in the different implementations of the update method. The
CurrentConditionsDisplay simply prints the current temperature, the StatisticsDisplay
calculates and displays min/max temperatures, and the ForecastDisplay provides a rudimentary
forecast based on the temperature. This flexibility is a hallmark of the observer pattern, allowing
each observer to define its behavior upon receiving an update.

class CurrentConditionsDisplay:
    def update(self, temperature):
        print(f"Current conditions: {temperature} degrees Celsius")

class StatisticsDisplay:
    def __init__(self):
        self._max_temp = float('-inf')
        self._min_temp = float('inf')

    def update(self, temperature):
        self._max_temp = max(self._max_temp, temperature)
        self._min_temp = min(self._min_temp, temperature)
        print(f"Min/Max temperatures: {self._min_temp}/{self._max_temp}")

class ForecastDisplay:
    def update(self, temperature):
        # Just a dummy forecast based on current temperature
        forecast = "sunny" if temperature > 20 else "rainy"
        print(f"Forecast: {forecast}")

📌 Consistent Interface for Observers: All observers implement a consistent interface, which in
this case is the update method. This ensures that the WeatherStation can notify any observer
without knowing its specific type or implementation details. This is why we can easily add new
types of displays or observers in the future, as long as they implement the update method.

In summary, the provided code adheres to the principles and requirements of the observer
pattern by establishing a one-to-many dependency, decoupling the subject from its observers,
allowing dynamic addition/removal of observers, notifying observers of state changes, letting
observers define their reactions, and maintaining a consistent interface for all observers.

Example 2 - Real life use case of Observer Pattern in Python


Let's delve into another example: a real-time auction system.

In a real-time auction system, when a bid is placed on an item, multiple entities might be
interested:

1. Bidders: They want to know if they've been outbid.

2. Auctioneer: Wants to keep track of the highest bid.

3. Display Boards: To show the current highest bid to the audience.

Here's how we can model this using the observer pattern:

class Auction:
    def __init__(self, item_name):
        self._observers = []
        self._highest_bid = 0
        self._highest_bidder = None
        self._item_name = item_name

    def register_observer(self, observer):
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._highest_bid, self._highest_bidder,
                            self._item_name)

    def place_bid(self, bid_amount, bidder_name):
        if bid_amount > self._highest_bid:
            self._highest_bid = bid_amount
            self._highest_bidder = bidder_name
            self.notify_observers()

class Bidder:
    def __init__(self, name):
        self._name = name

    def update(self, highest_bid, highest_bidder, item_name):
        if highest_bidder != self._name:
            print(f"{self._name}, the highest bid for {item_name} is now "
                  f"{highest_bid} by {highest_bidder}. Time to place a new bid!")
        else:
            print(f"{self._name}, you're still the highest bidder for {item_name} "
                  f"with a bid of {highest_bid}!")

class Auctioneer:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"New highest bid for {item_name}! It's {highest_bid} by "
              f"{highest_bidder}.")

class DisplayBoard:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"--- Display Board ---\nCurrent highest bid for {item_name}: "
              f"{highest_bid}\nBidder: {highest_bidder}\n----------------------")

Usage:

auction = Auction("Rare Painting")

alice = Bidder("Alice")
bob = Bidder("Bob")
charlie = Bidder("Charlie")

auctioneer = Auctioneer()
display_board = DisplayBoard()

auction.register_observer(alice)
auction.register_observer(bob)
auction.register_observer(charlie)
auction.register_observer(auctioneer)
auction.register_observer(display_board)

auction.place_bid(1000, "Alice")
auction.place_bid(1200, "Bob")
auction.place_bid(1100, "Charlie")  # This won't update observers since it's not the highest bid
auction.place_bid(1300, "Alice")

📌 Explanation:
The Auction class represents the subject. Whenever a new highest bid is placed, it notifies
all observers.

Bidder , Auctioneer , and DisplayBoard are observers. They each have their own update
method to react to changes in the auction.

In the usage example, we create an auction for a "Rare Painting". We then register three
bidders, an auctioneer, and a display board as observers. As bids are placed, the observers
are notified and react accordingly. If a bid isn't the highest, the observers aren't notified.

Let's see how the above code example adheres to the principles and requirements of the
observer pattern design in Python
📌 One-to-Many Dependency: The Auction class (the subject) has a one-to-many relationship
with its observers ( Bidder , Auctioneer , DisplayBoard ). This relationship is maintained in the
_observers list within the Auction class. When a new bid is placed, multiple entities (observers)
are informed about the change.

class Auction:
    def __init__(self, item_name):
        self._observers = []
        self._highest_bid = 0
        self._highest_bidder = None
        self._item_name = item_name

    def register_observer(self, observer):
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._highest_bid, self._highest_bidder,
                            self._item_name)

    def place_bid(self, bid_amount, bidder_name):
        if bid_amount > self._highest_bid:
            self._highest_bid = bid_amount
            self._highest_bidder = bidder_name
            self.notify_observers()

📌 Decoupling the Subject and Observers: The Auction class is decoupled from its observers.
It doesn't need to know the specifics of each observer's behavior. It only knows that they have an
update method. This separation ensures that the Auction class can function independently of
the specific observers attached to it. For instance, the Auction class doesn't need to know how
the Bidder class decides to inform its user, or how the DisplayBoard presents the information.

📌 Ability to Add/Remove Observers at Runtime: The Auction class provides
register_observer and remove_observer methods. This allows for dynamic management of its
observers. For instance, if a bidder decides to leave the auction, they can be easily removed from
the list of observers, ensuring they won't receive further updates.

📌 Notification of State Changes: When a new highest bid is placed on an item in the Auction ,
all its observers are notified. This is done through the notify_observers method, which is called
within the place_bid method. Each observer's update method is then invoked with the details of
the highest bid.

📌 Observers Define Their Reactions: Each observer has its own unique reaction to the state
change in the Auction . For example:
- The Bidder checks whether they are still the highest bidder and informs the user accordingly.
- The Auctioneer announces the new highest bid.
- The DisplayBoard updates its display to show the current highest bid and bidder.

class Bidder:
    def __init__(self, name):
        self._name = name

    def update(self, highest_bid, highest_bidder, item_name):
        if highest_bidder != self._name:
            print(f"{self._name}, the highest bid for {item_name} is now "
                  f"{highest_bid} by {highest_bidder}. Time to place a new bid!")
        else:
            print(f"{self._name}, you're still the highest bidder for {item_name} "
                  f"with a bid of {highest_bid}!")

class Auctioneer:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"New highest bid for {item_name}! It's {highest_bid} by "
              f"{highest_bidder}.")

class DisplayBoard:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"--- Display Board ---\nCurrent highest bid for {item_name}: "
              f"{highest_bid}\nBidder: {highest_bidder}\n----------------------")

This flexibility in reaction is a core feature of the observer pattern, allowing each observer to
define its behavior upon receiving an update.

📌 Consistent Interface for Observers: All observers implement a consistent interface, the
update method. This ensures that the Auction can notify any observer without knowing its
specific type. Whether it's a Bidder , an Auctioneer , or a DisplayBoard , the Auction class can
communicate with them using the same method.

In conclusion, the auction system code adheres to the principles and requirements of the
observer pattern by establishing a one-to-many dependency, decoupling the subject from its
observers, allowing for dynamic management of observers, notifying observers of state changes,
letting observers define their reactions, and maintaining a consistent interface for all observers.

🐍🚀 Adapter Design Pattern. 🐍🚀

A structural design pattern proposes a way of composing objects to create new functionality.
The pattern I will cover in this section is the adapter pattern.

The adapter pattern is one of THE MOST pervasive structural design patterns; it helps us make
two incompatible interfaces compatible. What does that really mean? If we have an old
component and we want to use it in a new system, or a new component that we want to use in an
old system, the two can rarely communicate without requiring code changes. But changing the
code is not always possible, either because we don't have access to it or because it is impractical.

In such cases, we can write an extra layer that makes all the required modifications for enabling
communication between the two interfaces. This layer is called an adapter.

Alright, let's dive deep into the Adapter Pattern in Python.


📌 The Adapter Pattern is essentially a translator between two interfaces. Imagine you speak
English and you're trying to communicate with someone who speaks French. You'd need a
translator (or adapter) to facilitate the conversation. Similarly, in software, when two interfaces are
incompatible, an adapter can be used to make them work together.

📌 Use Cases:
1. Legacy Code Integration: When integrating legacy systems with newer systems, the old code
might not fit the new system's expectations. Instead of rewriting the legacy code, an adapter can
be used to bridge the gap.
2. Third-party Libraries: Sometimes, you might want to integrate a third-party library into your
application. If the library's interface doesn't match your application's expectations, an adapter
can help.
3. API Changes: If an API you rely on changes its interface, instead of changing every place in
your code where you use this API, you can write an adapter to adapt the new API to the old one.

Let's see an example WITH and then WITHOUT the "Adapter
design pattern in Python"
📌 First, let's consider a scenario where we have a legacy system that uses a class called
OldPrinter which prints text in a simple format. Our new system, however, expects text to be
printed in a more advanced format with a header and footer.

Code without Adapter Pattern:

class OldPrinter:
    def print_text(self, text):
        print(text)

class NewSystem:
    def __init__(self, printer):
        self.printer = printer

    def print_advanced_text(self, text):
        header = "==== HEADER ===="
        footer = "==== FOOTER ===="
        self.printer.print_simple_text(f"{header}\n{text}\n{footer}")

# Usage
printer = OldPrinter()
system = NewSystem(printer)
system.print_advanced_text("Hello, World!")  # AttributeError: no print_simple_text

📌 The above code will throw an error because OldPrinter does not have a method called
print_simple_text that the NewSystem expects. This is the incompatibility issue we're facing.

Code with Adapter Pattern:


📌 To solve this, we'll introduce an adapter for the OldPrinter so that it can work with the
NewSystem without changing the original OldPrinter code.

class OldPrinter:
    def print_text(self, text):
        print(text)

class PrinterAdapter:
    def __init__(self, old_printer):
        self.old_printer = old_printer

    def print_simple_text(self, text):
        self.old_printer.print_text(text)

class NewSystem:
    def __init__(self, printer):
        self.printer = printer

    def print_advanced_text(self, text):
        header = "==== HEADER ===="
        footer = "==== FOOTER ===="
        self.printer.print_simple_text(f"{header}\n{text}\n{footer}")

# Usage
old_printer = OldPrinter()
adapter = PrinterAdapter(old_printer)
system = NewSystem(adapter)
system.print_advanced_text("Hello, World!")

📌 In the refactored code, we introduced the PrinterAdapter class. This class takes an instance
of OldPrinter and adapts its interface to match what NewSystem expects.

📌 The PrinterAdapter has a method print_simple_text which internally calls the
print_text method of OldPrinter . This way, we've successfully bridged the gap between the
old system and the new system without modifying the original OldPrinter class.

📌 By using the Adapter pattern, we've ensured that the two incompatible interfaces ( OldPrinter
and NewSystem ) can work together. This promotes code reusability and keeps the system
modular.
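As a side note, the same bridge can also be built as a "class adapter" that subclasses OldPrinter instead of wrapping an instance. This is an alternative formulation for comparison, not the book's refactoring:

```python
class OldPrinter:
    def print_text(self, text):
        print(text)

class ClassPrinterAdapter(OldPrinter):
    # Class adapter: inherits print_text and exposes the name NewSystem expects
    def print_simple_text(self, text):
        self.print_text(text)

adapter = ClassPrinterAdapter()
adapter.print_simple_text("Hello via class adapter")
```

Object adapters (composition, as above) are generally preferred in Python because they can wrap any instance handed to them at runtime; the class adapter trades that flexibility for one less level of indirection.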

Let's break down the refactored code with the Adapter design pattern and understand how it
addresses the issues of the original code.
📌 Issue in Original Code: The NewSystem class expects any printer object passed to it to have a
method named print_simple_text . However, our OldPrinter class has a method named
print_text . This mismatch in method names leads to an AttributeError when we try to use
OldPrinter with NewSystem .

📌 Adapter Pattern Solution: The Adapter pattern's primary goal is to bridge the gap between
two incompatible interfaces. In our scenario, the incompatibility is between the NewSystem and
the OldPrinter due to the method name mismatch.

📌 Introducing the PrinterAdapter: To solve this, we introduced a new class called
PrinterAdapter . This class acts as a middleman between the NewSystem and the OldPrinter . It
"adapts" the interface of OldPrinter to what NewSystem expects.

class PrinterAdapter:
    def __init__(self, old_printer):
        self.old_printer = old_printer

    def print_simple_text(self, text):
        self.old_printer.print_text(text)

📌 How PrinterAdapter Works: The PrinterAdapter takes an instance of OldPrinter as an
argument during initialization. It then provides a method named print_simple_text , which is
what NewSystem expects. Inside this method, it calls the print_text method of the OldPrinter .
This way, when NewSystem calls print_simple_text on the adapter object, the request is
forwarded to the print_text method of OldPrinter .

📌 Usage with NewSystem: When we want to use OldPrinter with NewSystem , we don't pass
the OldPrinter instance directly. Instead, we wrap it inside the PrinterAdapter and pass the
adapter instance to NewSystem .

old_printer = OldPrinter()
adapter = PrinterAdapter(old_printer)
system = NewSystem(adapter)

📌 Benefits:
1. Code Reusability: We didn't have to modify the original OldPrinter class. This means if
there are other systems or parts of the codebase that rely on OldPrinter , they remain unaffected.
2. Modularity: By introducing the adapter, we've kept the concerns separated. The OldPrinter
remains focused on its printing logic, the NewSystem on its advanced printing, and the
PrinterAdapter on bridging the gap between the two.
3. Flexibility: In the future, if there are more printers with different interfaces, we can simply
create new adapters for them without changing the existing system.

📌 Conclusion: The Adapter pattern allowed us to integrate the OldPrinter with the NewSystem
seamlessly. It provided a solution that is modular, reusable, and flexible, addressing the primary
issue of method name mismatch without altering the original components.

Let's consider a scenario where you have an old system that uses a Book class to display book
details. Now, you have a new system that uses an EBook class, but you want to use the old
system's display method without changing its code.

# Old System
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def display(self):
        return f"Title: {self.title}, Author: {self.author}"

# New System
class EBook:
    def __init__(self, title, author, format):
        self.title = title
        self.author = author
        self.format = format

    def get_details(self):
        return self.title, self.author, self.format

# Adapter
class EBookAdapter:
    def __init__(self, ebook):
        self.ebook = ebook

    def display(self):
        title, author, _ = self.ebook.get_details()
        return f"Title: {title}, Author: {author}"

# Usage
ebook = EBook("Digital Fortress", "Dan Brown", "PDF")
ebook_adapter = EBookAdapter(ebook)
print(ebook_adapter.display())  # Output: Title: Digital Fortress, Author: Dan Brown

📌 Explanation of the Code:

1. The Book class is from the old system. It has a method display which returns the title and
author of the book.

2. The EBook class is from the new system. It has an additional attribute format and a method
get_details which returns the title, author, and format of the ebook.

3. The EBookAdapter class is our adapter. It takes an instance of EBook and provides a
display method similar to the Book class. Inside this method, it calls the get_details
method of EBook and formats the data to match the old system's expectations.

4. In the usage section, we create an instance of EBook , wrap it with EBookAdapter , and then
call the display method. This allows us to use the new EBook class with the old system's
display functionality.

📌 Under-the-hood:
When we talk about the Adapter Pattern, it's essential to understand that it's not about changing
the behavior of objects but about providing a consistent interface. The adapter acts as a wrapper
that translates method calls. In our example, the EBookAdapter doesn't change the behavior of
the EBook class. Instead, it provides a method ( display ) that the old system expects, and
internally, it calls the appropriate method ( get_details ) of the EBook class.

In terms of performance, the overhead introduced by the adapter is generally negligible. However,
the primary benefit is the maintainability and scalability of the codebase. By using adapters, you
can easily integrate new components or replace old ones without affecting the rest of the system.

📌 How do you know that the EBookAdapter class takes an instance of EBook?
Let's break it down:

In the EBookAdapter class definition:

```python
class EBookAdapter:
    def __init__(self, ebook):
        self.ebook = ebook
```

The __init__ method of the EBookAdapter class accepts an argument named ebook . This
argument is expected to be an instance of the EBook class. We can infer this based on the
subsequent usage of the ebook object within the adapter:

```python
def display(self):
    title, author, _ = self.ebook.get_details()
    return f"Title: {title}, Author: {author}"
```

Here, the display method of the EBookAdapter class calls the get_details method on the
ebook object. Since the get_details method is a method of the EBook class, it implies that the
ebook object is expected to be an instance of the EBook class.

📌 To make the relationship more explicit and to ensure type safety, one could have used
Python's type hints:

```python
class EBookAdapter:
    def __init__(self, ebook: EBook):
        self.ebook = ebook
```

With this type hint, it's clear that the ebook parameter should be an instance of the EBook class.
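Keep in mind that type hints are not enforced at runtime, though. If you want the adapter to fail fast when handed an incompatible object, you can check for the required method explicitly. The runtime check below is my own extension of the book's example, relying only on duck typing (the object must expose a `get_details()` method):

```python
class EBook:
    # Minimal EBook, same shape as the class above
    def __init__(self, title, author, format):
        self.title = title
        self.author = author
        self.format = format

    def get_details(self):
        return self.title, self.author, self.format

class EBookAdapter:
    def __init__(self, ebook):
        # Duck-typing check: fail fast if the wrapped object
        # lacks the one method the adapter relies on
        if not callable(getattr(ebook, "get_details", None)):
            raise TypeError("EBookAdapter needs an object with a get_details() method")
        self.ebook = ebook

    def display(self):
        title, author, _ = self.ebook.get_details()
        return f"Title: {title}, Author: {author}"

print(EBookAdapter(EBook("Digital Fortress", "Dan Brown", "PDF")).display())
# Title: Digital Fortress, Author: Dan Brown
```

This way a wrong argument produces a clear `TypeError` at construction time, instead of a confusing `AttributeError` later when `display()` is first called.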

Why exactly do we need the self.ebook.get_details() method inside the EBookAdapter class?
The self.ebook.get_details() method is used within the EBookAdapter class to bridge the gap
between the old system ( Book class) and the new system ( EBook class). Let's break down the
reasons step-by-step:

Let's take another look at the same code from above:

```python
# Old System
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def display(self):
        return f"Title: {self.title}, Author: {self.author}"

# New System
class EBook:
    def __init__(self, title, author, format):
        self.title = title
        self.author = author
        self.format = format

    def get_details(self):
        return self.title, self.author, self.format

# Adapter
class EBookAdapter:
    def __init__(self, ebook):
        self.ebook = ebook

    def display(self):
        title, author, _ = self.ebook.get_details()
        return f"Title: {title}, Author: {author}"

# Usage
ebook = EBook("Digital Fortress", "Dan Brown", "PDF")
ebook_adapter = EBookAdapter(ebook)
print(ebook_adapter.display())  # Output: Title: Digital Fortress, Author: Dan Brown
```

📌 Differing Interfaces:

1. The Book class (old system) has a method display that returns a string representation of the book's title and author.

2. The EBook class (new system) does not have a display method. Instead, it has a get_details method that returns a tuple containing the title, author, and format of the ebook.

📌 And that's the whole reason for having the Adapter class: The primary role of the EBookAdapter is to make the EBook class compatible with the old system's expectations. The old system expects a display method that returns a string representation of the book's details.

📌 Using get_details:

1. To achieve this compatibility, the EBookAdapter class introduces its own display method.

2. Inside this display method, it needs to fetch the title and author of the EBook instance to format them in the desired string representation.

3. The get_details method of the EBook class provides this information. By calling self.ebook.get_details(), the adapter fetches the title, author, and format of the ebook.

4. The line title, author, _ = self.ebook.get_details() unpacks the returned tuple. The underscore ( _ ) is a conventional placeholder for values we don't need (in this case, the format).

📌 Summary: The self.ebook.get_details() method is essential for the EBookAdapter to access the necessary details of the EBook instance. By using this method, the adapter can then format and return the details in a manner consistent with the old system's expectations (i.e., using the display method). Without the get_details method, the adapter would not have a straightforward way to access the ebook's title and author, making it challenging to provide a compatible interface.

📌 Example - 2. Real-life Use-case Code

Consider the example of a Club class. Its main job is to organize events.

```python
class Club:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the club {self.name}'

    def organize_event(self):
        return 'brings in artist to perform'
```

Now, let's say you bring in two interesting classes, Musician and Dancer, from an external library, and you want these new classes to work seamlessly with the existing Club class.

```python
# Below 2 classes coming from an
# external file named external.py
class Musician:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the musician {self.name}'

    def play(self):
        return 'plays music'

class Dancer:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the dancer {self.name}'

    def dance(self):
        return 'does a dance performance'
```

But here's the catch: you cannot make many changes to either your old Club class or the new Musician and Dancer classes from the external library.

The external Musician and Dancer classes have a play() or dance() method, respectively.

The client code, i.e. the Club class, only has an organize_event() method. It has no idea about play() or dance() (on the respective classes from the external library).

And this is where you have Adapters to the rescue!

We create a generic Adapter class that allows us to adapt several objects with different interfaces into one unified interface. The obj argument of the __init__() method is the object that we want to adapt, and adapted_methods is a dictionary containing key/value pairs matching the method the client calls and the method that should be called.

Below is the code with the COMPLETE SOLUTION. Take a good look, and then I will go through each step of it.

```python
# Below 2 classes coming from an
# external file named external.py
class Musician:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the musician {self.name}'

    def play(self):
        return 'plays music'

class Dancer:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the dancer {self.name}'

    def dance(self):
        return 'does a dance performance'
```

```python
################################################
# Below is my old Club class in my main file.
# The whole purpose of this code is to create
# an Adapter class so that this old Club class
# can interact with the new classes coming from
# the external.py file.
################################################

# At the beginning this file imports
# the external new classes (Musician & Dancer)
# from another file. Let's say that file name
# is external.py, and those Musician & Dancer
# classes are defined in that external.py file.
from external import Musician, Dancer

class Club:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the club {self.name}'

    def organize_event(self):
        return 'brings in artist to perform'

class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)

def main():
    objects = [Club('Jazz Cafe'), Musician('Paul Rohan'), Dancer('Tom Hank')]

    for obj in objects:
        if hasattr(obj, 'play') or hasattr(obj, 'dance'):
            if hasattr(obj, 'play'):
                adapted_methods = dict(organize_event=obj.play)
            elif hasattr(obj, 'dance'):
                adapted_methods = dict(organize_event=obj.dance)

            # referencing the adapted object here
            obj = Adapter(obj, adapted_methods)

        print(f'{obj} {obj.organize_event()}')

if __name__ == "__main__":
    main()
```

```
the club Jazz Cafe brings in artist to perform
the musician Paul Rohan plays music
the dancer Tom Hank does a dance performance
```
Alright, let's dissect the provided code and understand its intricacies.

📌 The code begins by defining two classes, Musician and Dancer , both of which are assumed
to be part of an external library. Each of these classes has its own unique method ( play for
Musician and dance for Dancer ) to represent the action they perform.

📌 The Club class represents a venue that can organize events. Its primary method is
organize_event , which signifies hiring an artist for a performance.

📌 The challenge here is that while the Club class uses the method organize_event to signify a
performance, the external classes ( Musician and Dancer ) use different methods ( play and
dance respectively). This discrepancy in method naming is where the need for an adapter arises.

📌 The Adapter class is designed to bridge this gap. It takes in an object and a dictionary of
adapted methods. The magic happens in this line: self.__dict__.update(adapted_methods) .
This line dynamically updates the instance dictionary with the adapted methods, essentially
allowing us to "rename" or "alias" methods.

📌 Under-the-hood: The __dict__ attribute of a Python object is a dictionary that contains the
object's instance variables and values. By updating this dictionary directly, we can dynamically
add, modify, or alias attributes and methods of the object.
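Here is a minimal, standalone demonstration of that mechanism (a toy class of my own, not part of the Club example):

```python
class Greeter:
    def hello(self):
        return "hello"

g = Greeter()

# Store the existing bound method under a new name in the instance dict
g.__dict__.update(greet=g.hello)

print(g.hello())  # hello
print(g.greet())  # hello -- same bound method, reached via the alias
```

Because instance attributes are looked up in `__dict__` before the class, `g.greet` now resolves to the stored bound method, even though the Greeter class never defined a `greet` method.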

📌 In the main function, a list of objects ( objects ) is created, containing an instance of the Club ,
Musician , and Dancer classes. The goal is to loop through each object and call the
organize_event method.

📌 In the main() function, for the Musician and Dancer objects, the adapted_methods dictionary is created to map the organize_event method to the respective play or dance method.

📌 As we iterate over each object in the list, the code checks if the object has either a play or
dance method. If it does, the code prepares a dictionary ( adapted_methods ) that maps the
organize_event method to either the play or dance method of the object.

📌 Once the mapping dictionary is prepared, the object is wrapped (or adapted) using the
Adapter class. This effectively gives the object an organize_event method that points to its
inherent play or dance method.

📌 Finally, for each object (whether it's the original Club object or the adapted Musician and
Dancer objects), the organize_event method is called, and the result is printed.

📌 The output of the code will be:

```
the club Jazz Cafe brings in artist to perform
the musician Paul Rohan plays music
the dancer Tom Hank does a dance performance
```

📌 In essence, the Adapter pattern here allows the client code to interact with the Musician and
Dancer classes using the same interface ( organize_event ) as it does with the Club class, even
though the external classes have different method names. This ensures a consistent client
interface regardless of the underlying class implementations.

Let's delve deeper into the Adapter class and its utilization within the main() method to
understand how it's implementing the Adapter pattern.

First, take another look at the implementation.

In the main() function, for the Musician and Dancer objects, the adapted_methods dictionary
is created to map the organize_event method to the respective play or dance method:

```python
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)

def main():
    objects = [Club('Jazz Cafe'), Musician('Paul Rohan'), Dancer('Tom Hank')]

    for obj in objects:
        if hasattr(obj, 'play') or hasattr(obj, 'dance'):
            if hasattr(obj, 'play'):
                adapted_methods = dict(organize_event=obj.play)
            elif hasattr(obj, 'dance'):
                adapted_methods = dict(organize_event=obj.dance)

            # referencing the adapted object here
            obj = Adapter(obj, adapted_methods)

        print(f'{obj} {obj.organize_event()}')
```

📌 Adapter Class Overview: The Adapter class is designed to take an object and a dictionary of
methods that need to be adapted. The primary goal of this class is to allow the object to be used
in a different context than it was originally intended for, without modifying the object's original
code.

```python
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)
```

📌 Key Points:

1. The Adapter class constructor ( __init__ ) accepts two parameters: obj (the object to be adapted) and adapted_methods (a dictionary of methods to adapt).

2. The line self.__dict__.update(adapted_methods) is crucial. It dynamically updates the instance's dictionary with the adapted methods. This allows the Adapter to "alias" or "rename" methods of the obj.

📌 Under-the-hood: The __dict__ attribute is a dictionary representation of an object's
namespace. By updating this dictionary, you can dynamically add or modify attributes and
methods of the object.

📌 Adapter in Action within main(): Let's break down the main() method step-by-step to see
how the Adapter pattern is applied:

```python
objects = [Club('Jazz Cafe'), Musician('Paul Rohan'), Dancer('Tom Hank')]
```

Here, we create a list of objects from different classes.

📌 As we loop through each object:

```python
for obj in objects:
```

1. We first check if the object has a play or dance method:

```python
if hasattr(obj, 'play') or hasattr(obj, 'dance'):
```

2. Depending on which method the object has ( play for Musician or dance for Dancer ), we
create a dictionary ( adapted_methods ) that maps the organize_event method to the
respective method of the object:

```python
if hasattr(obj, 'play'):
    adapted_methods = dict(organize_event=obj.play)
elif hasattr(obj, 'dance'):
    adapted_methods = dict(organize_event=obj.dance)
```

3. Now, we use the Adapter class to wrap the object, effectively adapting it:

```python
obj = Adapter(obj, adapted_methods)
```

At this point, the obj (whether it's a Musician or Dancer ) has been adapted to have an
organize_event method. This method will internally call the object's play or dance method,
respectively.

4. Finally, we call the organize_event method on the object (whether it's the original Club or
the adapted Musician and Dancer ):

```python
print(f'{obj} {obj.organize_event()}')
```

📌 Summary: The Adapter pattern's essence here is to allow the client code (in this case, the
main() method) to interact with the Musician and Dancer classes using the same method
name ( organize_event ) as it does with the Club class. This is achieved without modifying the
original classes but by wrapping them with the Adapter class that provides the desired interface.
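One caveat worth noting: with this generic Adapter, any attribute that is neither adapted nor defined on the Adapter itself raises an AttributeError, because only the adapted methods were copied into the wrapper's instance dictionary. A common remedy, sketched below as my own extension of the book's Adapter (not part of the original example), is to delegate unknown lookups to the wrapped object via __getattr__:

```python
class Musician:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the musician {self.name}'

    def play(self):
        return 'plays music'

class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails:
        # forward everything else to the wrapped object
        return getattr(self.obj, attr)

    def __str__(self):
        return str(self.obj)

musician = Musician('Paul Rohan')
adapted = Adapter(musician, dict(organize_event=musician.play))

print(adapted.organize_event())  # plays music
print(adapted.name)              # Paul Rohan -- delegated via __getattr__
```

With this addition, the adapted object behaves like the wrapped object for every attribute except the ones you deliberately renamed.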

How exactly does the self.__dict__.update(adapted_methods) line dynamically update the instance dictionary with the adapted methods, essentially allowing us to "rename" or "alias" methods? And how does it facilitate the Adapter pattern here?

First, take another look at the implementation.

```python
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)

def main():
    objects = [Club('Jazz Cafe'), Musician('Paul Rohan'), Dancer('Tom Hank')]

    for obj in objects:
        if hasattr(obj, 'play') or hasattr(obj, 'dance'):
            if hasattr(obj, 'play'):
                adapted_methods = dict(organize_event=obj.play)
            elif hasattr(obj, 'dance'):
                adapted_methods = dict(organize_event=obj.dance)

            # referencing the adapted object here
            obj = Adapter(obj, adapted_methods)

        print(f'{obj} {obj.organize_event()}')
```

📌 Understanding __dict__ : Every instance of a Python class has a __dict__ attribute, which
is a dictionary containing the instance's attributes and their values. This dictionary is mutable,
which means you can add, modify, or delete attributes dynamically.

For example:

```python
class Sample:
    def __init__(self, x):
        self.x = x

s = Sample(10)
print(s.__dict__)  # Output: {'x': 10}
```

📌 Using update with __dict__ : The update method of dictionaries allows you to merge one
dictionary into another. When you use update on self.__dict__ , you're essentially adding (or
updating) attributes to the instance dynamically.

📌 In the Context of the Adapter Class: The Adapter class is designed to take an object ( obj )
and a dictionary of methods ( adapted_methods ) that need to be adapted. The goal is to allow the
object to have a method (or methods) that it didn't originally have or to "rename" its existing
methods.

```python
def __init__(self, obj, adapted_methods):
    self.obj = obj
    self.__dict__.update(adapted_methods)
```

In the above code, self.__dict__.update(adapted_methods) dynamically adds the methods from the adapted_methods dictionary to the instance. This is how the Adapter class can "rename" or "alias" methods.

📌 Breaking Down the main() Function: In the main() function, for the Musician and Dancer
objects, the adapted_methods dictionary is created to map the organize_event method to the
respective play or dance method:

```python
if hasattr(obj, 'play'):
    adapted_methods = dict(organize_event=obj.play)
elif hasattr(obj, 'dance'):
    adapted_methods = dict(organize_event=obj.dance)
```

Then, the Adapter class is used to wrap the object:

```python
obj = Adapter(obj, adapted_methods)
```

At this point, due to the self.__dict__.update(adapted_methods) line in the Adapter class, the
obj (whether it's a Musician or Dancer ) now has an organize_event method. This method will
internally call the object's play or dance method, respectively.

📌 Summary: The line self.__dict__.update(adapted_methods) in the Adapter class is crucial for the Adapter pattern's implementation in this code. It allows the Adapter to dynamically add or "rename" methods of the wrapped object, ensuring that the client code can interact with the Musician and Dancer objects using the same method name ( organize_event ) as it does with the Club object.

Now, let's dive into the adapter pattern in the context of the PyTorch codebase.

📌 One of the areas where PyTorch uses the adapter pattern is in its interaction with the
underlying tensor libraries like ATen . PyTorch's tensor operations might be built on top of ATen ,
but the interface that PyTorch provides to its users is different from the raw ATen interface. This
is especially true when considering backward compatibility or when introducing new features.

ATen is fundamentally a tensor library, on top of which almost all other Python and C++ interfaces
in PyTorch are built. It provides a core Tensor class, on which many hundreds of operations are
defined.

The ATen tensor library backing PyTorch is a simple tensor library that exposes the Tensor operations in Torch directly in C++14. ATen's API is auto-generated from the same declarations PyTorch uses, so the two APIs will track each other over time.

Tensor types are resolved dynamically, such that the API is generic and does not include templates. That is, there is one Tensor type. It can hold a CPU or CUDA Tensor, and the tensor may hold Doubles, Floats, Ints, etc. This design makes it easy to write generic code without templating everything.

📌 Let's consider a hypothetical example. Suppose ATen has a method called multiplyMatrices(A, B) that multiplies two matrices. However, in PyTorch, we want to expose this functionality with a method called matmul(A, B) . Instead of changing the ATen library (which might be used by other systems as well), we can use an adapter.

```python
class PyTorchTensorAdapter:
    def __init__(self, aten_tensor):
        self.aten_tensor = aten_tensor

    def matmul(self, other):
        return self.aten_tensor.multiplyMatrices(other)
```

📌 In this hypothetical example, PyTorchTensorAdapter acts as an adapter. It takes an ATen tensor as input and exposes a matmul method which internally calls the multiplyMatrices method of the ATen tensor.

📌 The benefit of this approach is that PyTorch can maintain its own interface without being
tightly coupled to the underlying ATen library. If ATen changes its method names or signatures in
the future, only the adapter needs to be updated. This ensures that the main PyTorch codebase
remains unaffected by such changes.

📌 Now, let's discuss the underlying principle here. The adapter pattern is essentially about
abstraction. In software design, we often want to abstract away the details of one component
when interfacing with another. This is especially true in large systems or frameworks like PyTorch,
where multiple components (like tensors, autograd, optimizers, etc.) need to interact seamlessly.
By using adapters, we can ensure that each component maintains its own independent interface
while still being able to communicate with others. This not only makes the codebase more
modular but also easier to maintain and extend.

📌 Another area where the adapter pattern can be observed in PyTorch is in its integration with
other libraries, especially NumPy. PyTorch tensors and NumPy arrays share a lot of similarities,
but they are distinct entities with different underlying implementations. However, for ease of use
and to provide a seamless experience to users who are familiar with NumPy, PyTorch provides
utilities to convert between PyTorch tensors and NumPy arrays.

📌 Consider the methods numpy() and from_numpy() . The numpy() method converts a PyTorch
tensor to a NumPy array, while from_numpy() does the opposite. Here's a brief look at how this
might be implemented:

```python
class TorchTensor:
    ...
    def numpy(self):
        # Convert the PyTorch tensor to a NumPy array
        return numpy_adapter.to_numpy(self)

    @staticmethod
    def from_numpy(numpy_array):
        # Convert a NumPy array to a PyTorch tensor
        return numpy_adapter.from_numpy(numpy_array)
```

In this hypothetical snippet, numpy_adapter acts as an adapter between PyTorch tensors and
NumPy arrays. It abstracts away the details of the conversion, allowing PyTorch tensors to
maintain their own interface while still being able to interact with NumPy arrays.

📌 The underlying principle here is interoperability. In the world of data science and machine
learning, there are many libraries, each with its strengths. Users often switch between libraries
depending on the task at hand. By providing adapters that allow for easy conversion between
different data structures, libraries like PyTorch ensure that users don't get locked into a particular
ecosystem and can leverage the best tools for the job.

In essence, the adapter pattern is pervasive in complex systems like PyTorch. It ensures that
different components, whether they are internal modules or external libraries, can work together
harmoniously without getting entangled in each other's specific implementations. This design
principle is crucial for the scalability, maintainability, and extensibility of such systems.

So now, let's dive into a more direct application of the adapter pattern within the context of PyTorch.
Imagine you have a pre-existing system that uses PyTorch models, and it expects all models to
have a method predict() for inference. However, PyTorch models natively use the forward()
method for this purpose. If you want to integrate a new PyTorch model into this system without
modifying the system or the model, you can use the adapter pattern.

Here's a simple example:

1. Original System: This system expects any model passed to it to have a predict() method.

```python
def system_inference(model, input_data):
    return model.predict(input_data)
```

2. PyTorch Model: PyTorch models use the forward() method for inference.

```python
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)
```
3. Adapter: This is where the adapter pattern comes into play. We'll create an adapter for our
PyTorch model that implements the predict() method.

```python
class PyTorchModelAdapter:
    def __init__(self, pytorch_model):
        self.model = pytorch_model

    def predict(self, input_data):
        return self.model.forward(input_data)
```

4. Using the Adapter: Now, you can use the adapter to integrate the PyTorch model into the
original system.

```python
import torch

# Create a PyTorch model
model = SimpleNN()

# Wrap the PyTorch model with the adapter
adapted_model = PyTorchModelAdapter(model)

# Use the adapted model in the original system
output = system_inference(adapted_model, torch.randn(1, 10))
```

📌 In this example, PyTorchModelAdapter acts as an adapter, bridging the gap between the
original system's expectations and the PyTorch model's native interface. The adapter wraps
around the PyTorch model and provides the predict() method expected by the system,
internally calling the model's forward() method.

📌 This approach ensures that neither the original system nor the PyTorch model needs to be
modified. The adapter pattern provides a layer of abstraction that allows the two to communicate
seamlessly.
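To make the whole flow runnable without installing PyTorch, here is the same structure with a plain-Python stand-in for the model ( FakeModel is my own placeholder; everything else mirrors the adapter above). One practical note: in real PyTorch code, calling model(input_data) is generally preferred over model.forward(input_data) , since the call operator also runs registered hooks.

```python
class FakeModel:
    """Plain-Python stand-in for a PyTorch nn.Module:
    the inference logic lives in forward()."""
    def forward(self, x):
        return [v * 2 for v in x]

class PyTorchModelAdapter:
    def __init__(self, model):
        self.model = model

    def predict(self, input_data):
        # Translate the system's predict() call into the model's forward()
        return self.model.forward(input_data)

def system_inference(model, input_data):
    # The legacy system only knows about predict()
    return model.predict(input_data)

adapted = PyTorchModelAdapter(FakeModel())
print(system_inference(adapted, [1, 2, 3]))  # [2, 4, 6]
```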

🐍🚀 The factory method in Python is based on a
single function that's written to handle our object
creation task. 🐍🚀

We execute it, passing a parameter that provides information about what we want. As a result, the
object we wanted is created.

Interestingly, when we use the factory method, we don't need to know any details about how the
resulting object is implemented and where it is coming from.

The underlying idea here is that, when developing code, you may instantiate objects directly in
methods or in classes. While this is quite normal, you may want to add an extra abstraction
between the creation of the object and where it is used in your project.

You can use the Factory pattern to add that extra abstraction. Adding an extra abstraction will also
allow you to dynamically choose classes to instantiate based on some kind of logic.

Before the abstraction, your class or method would directly create a concrete class. After adding
the factory abstraction, the concrete class is now created outside of the current class/method, and
now in a subclass.

Imagine an application for designing houses and the house has a chair already added on the floor
by default. By adding the factory pattern, you could give the option to the user to choose different
chairs, and how many at runtime. Instead of the chair being hard coded into the project when it
started, the user now has the option to choose.

Adding this extra abstraction also means that the complications of instantiating extra objects can
now be hidden from the class or method that is using it.

This separation also makes your code easier to read and document.

The key terminologies here:


Concrete Creator: The client application, class or method that calls the Creator (Factory
method).

Product Interface: The interface describing the attributes and methods that the Factory will
require in order to create the final product/object.

Creator: The Factory class. Declares the Factory method that will return the object requested
from it.

Concrete Product: The object returned from the Factory. The object implements the Product
interface.

Let's see an example WITHOUT and then WITH the "Factory design pattern in Python"

Code without Factory Design Pattern


Consider a simple scenario where we have a system that deals with different types of documents.
Each document type can be printed, but the way they are printed might differ.

```python
class PDFDocument:
    def print(self):
        print("Printing PDF document...")

class WordDocument:
    def print(self):
        print("Printing Word document...")

class ExcelDocument:
    def print(self):
        print("Printing Excel document...")

def print_document(document_type):
    if document_type == "pdf":
        doc = PDFDocument()
    elif document_type == "word":
        doc = WordDocument()
    elif document_type == "excel":
        doc = ExcelDocument()
    else:
        raise ValueError("Unknown document type")

    doc.print()
```

📌 The above code has a few issues:


📌 If we want to add a new document type, we have to modify the print_document function. This
violates the Open/Closed Principle, which states that software entities should be open for
extension but closed for modification.

📌 The creation logic of documents is mixed with the printing logic in the print_document
function. This makes the function less cohesive and harder to maintain.

Refactored Code with Factory Design Pattern


To solve the above issues, we can use the Factory design pattern.

```python
from abc import ABC, abstractmethod

# Product Interface
class Document(ABC):
    @abstractmethod
    def print(self):
        pass

# Concrete Products
class PDFDocument(Document):
    def print(self):
        print("Printing PDF document...")

class WordDocument(Document):
    def print(self):
        print("Printing Word document...")

class ExcelDocument(Document):
    def print(self):
        print("Printing Excel document...")

# Creator
class DocumentFactory:
    @staticmethod
    def create_document(document_type):
        if document_type == "pdf":
            return PDFDocument()
        elif document_type == "word":
            return WordDocument()
        elif document_type == "excel":
            return ExcelDocument()
        else:
            raise ValueError("Unknown document type")

# Concrete Creator
def print_document(document_type):
    doc = DocumentFactory.create_document(document_type)
    doc.print()
```

📌 Here's how the Factory design pattern solves the issues:


📌 We've separated the creation logic from the printing logic. The DocumentFactory class is
responsible for creating documents, while the print_document function is only responsible for
printing.

📌 If we want to add a new document type, we only need to modify the DocumentFactory class.
This makes our code more maintainable and adheres to the Open/Closed Principle.

📌 The Document class (Product Interface) ensures that all document types (Concrete Products)
have a print method. This provides a consistent interface for the client code (Concrete Creator).

📌 The DocumentFactory class (Creator) abstracts away the creation logic, allowing the client
code to remain unchanged even if the underlying creation logic changes.

In conclusion, by using the Factory design pattern, we've made our code more modular,
maintainable, and extensible.
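As a further refinement (my own extension, not part of the book's example), a registry-based factory removes even the if/elif chain: new document types register themselves with a decorator, so adding a type requires no change to the factory's body at all.

```python
from abc import ABC, abstractmethod

class Document(ABC):
    @abstractmethod
    def print(self):
        pass

class DocumentFactory:
    _registry = {}

    @classmethod
    def register(cls, document_type):
        # Decorator: record a concrete class under a string key
        def decorator(document_cls):
            cls._registry[document_type] = document_cls
            return document_cls
        return decorator

    @classmethod
    def create_document(cls, document_type):
        try:
            return cls._registry[document_type]()
        except KeyError:
            raise ValueError(f"Unknown document type: {document_type}")

@DocumentFactory.register("pdf")
class PDFDocument(Document):
    def print(self):
        print("Printing PDF document...")

@DocumentFactory.register("word")
class WordDocument(Document):
    def print(self):
        print("Printing Word document...")

DocumentFactory.create_document("pdf").print()  # Printing PDF document...
```

With this design, registering an ExcelDocument later is a purely additive change: the factory class itself is never touched.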

Let's delve deeper into how the refactored code with the
Factory design pattern addresses the issues of the original
code.

Original Issues:
1. Modification Required for New Document Types: In the original code, if we wanted to
introduce a new document type, we had to modify the print_document function. This is
problematic because it violates the Open/Closed Principle.

2. Mixed Responsibilities: The print_document function in the original code was responsible
for both creating the document object and printing it. This mixing of responsibilities makes
the function less cohesive and harder to maintain.

How the Factory Design Pattern Addresses These Issues:


📌 Separation of Concerns: The Factory design pattern introduces a clear separation of
concerns. The creation of document objects is now handled by the DocumentFactory class, while
the print_document function only deals with printing. This separation ensures that each
component of the system has a single responsibility, making the code more modular and easier to
maintain.

📌 Adherence to the Open/Closed Principle: With the Factory pattern in place, if we want to
introduce a new document type, we only need to make changes to the DocumentFactory class.
The print_document function remains untouched. This means our system is now more
extensible, as it's open to extension (adding new document types) but closed for modification (no
need to modify existing functions).

📌 Consistent Interface for Document Types: The introduction of the Document class (Product
Interface) ensures that all document types (Concrete Products) implement the print method.
This provides a consistent interface for the client code, ensuring that any document type returned
by the factory can be printed without issues. This reduces the risk of runtime errors and makes
the system more robust.
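That guarantee is enforced by Python itself: a Concrete Product that forgets to implement print cannot even be instantiated. A quick demonstration ( BrokenDocument is a deliberately incomplete class of my own):

```python
from abc import ABC, abstractmethod

class Document(ABC):
    @abstractmethod
    def print(self):
        pass

class BrokenDocument(Document):
    # print() is deliberately missing
    pass

try:
    BrokenDocument()
except TypeError as e:
    print(f"Refused to instantiate: {e}")
# Python raises TypeError because an abstract method is unimplemented
```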

📌 Abstraction of Creation Logic: The DocumentFactory class abstracts away the creation logic
of document objects. This abstraction means that the client code (in this case, the
print_document function) doesn't need to know the specifics of how each document type is
instantiated. This encapsulation of creation logic makes the system more flexible. For instance, if
the instantiation process for a particular document type changes in the future, we only need to
update the DocumentFactory class without affecting the client code.

📌 Centralized Creation Logic: By centralizing the creation logic within the DocumentFactory
class, we ensure that there's a single point of truth for object creation. This centralized approach
reduces the risk of inconsistencies and errors in the system. If there's a change in how a
document type should be instantiated, we only need to update it in one place.

In summary, the Factory design pattern provides a structured way to handle object creation. By
abstracting and centralizing this process, the pattern ensures that our code remains modular,
maintainable, and extensible. The refactored code with the Factory pattern effectively addresses
the issues present in the original code, making it more robust and future-proof.

Alright, let's dive deep into the factory method in Python!


📌 The factory method is a creational design pattern that provides an interface for creating
objects in a super class, but allows subclasses to alter the type of objects that will be created. In
simpler terms, it's a way to create objects without specifying the exact class of object that will be
created.

📌 The primary advantage of the factory method is abstraction. It abstracts the process of object
creation and allows the client code to be decoupled from the specific classes that are instantiated.
This means that if you want to change the object being created, you only need to modify the
factory method, not all the places in your code where the object is used.

📌 Use Cases: 1. When the exact type of the object isn't known until runtime. For instance, a GUI
library might have a button factory. Depending on the operating system, it might create a
WindowsButton, MacButton, or LinuxButton. 2. When the creation process is more complex than
just "newing" up an object. For example, if there's a need to pull from a pool of objects instead of
creating a new one (object pooling). 3. When you want to keep track of the number of objects
created, or when you want to limit the number of instances of a particular class.
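As a sketch of use case 3 above, a factory can count its creations and enforce a cap. All names here are illustrative, not part of the original example:

```python
class Connection:
    """A trivial stand-in for an expensive-to-create object."""
    pass

class CountingFactory:
    """Factory that counts creations and enforces an upper limit."""
    def __init__(self, max_instances):
        self.max_instances = max_instances
        self.created = 0

    def create(self):
        if self.created >= self.max_instances:
            raise RuntimeError("Instance limit reached")
        self.created += 1
        return Connection()

factory = CountingFactory(max_instances=2)
a = factory.create()
b = factory.create()
print(factory.created)  # 2
try:
    factory.create()
except RuntimeError as e:
    print(e)  # Instance limit reached
```

Because all creation flows through one method, the count (or a limit, or a pool lookup) lives in exactly one place.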

📌 Now, let's look at a real-life use-case code:


Imagine you're building a system for a zoo, and you need to create different types of animals.
However, the exact type of animal might depend on some runtime data (e.g., data from a
database or user input).

class Animal:
    def speak(self):
        pass

class Dog(Animal):
    def speak(self):
        return "Woof!"

class Cat(Animal):
    def speak(self):
        return "Meow!"

class Fish(Animal):
    def speak(self):
        return "..."

def animal_factory(animal_type):
    if animal_type == "Dog":
        return Dog()
    elif animal_type == "Cat":
        return Cat()
    elif animal_type == "Fish":
        return Fish()
    else:
        raise ValueError(f"Unknown animal type: {animal_type}")

# Usage
animal = animal_factory("Dog")
print(animal.speak())  # Outputs: Woof!

📌 In the above code:


- We have an abstract base class Animal with a method speak . This method is overridden by the subclasses Dog , Cat , and Fish .

- The animal_factory function is our factory method. It takes an animal_type as a parameter and returns an instance of the corresponding animal class.

- The client code (the usage part) doesn't need to know about the specific animal classes. It just calls the factory method and gets an animal object. This decouples the object creation from the client code.

📌 Under the hood: - The factory method pattern leverages polymorphism. The client code
interacts with the base class ( Animal in our case), but the actual object returned is one of its
subclasses. - This pattern promotes the open/closed principle. If you want to add a new animal
type in the future, you can just add a new subclass and modify the factory method. The existing
client code doesn't need to change.
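The open/closed story can be pushed even further with a registry-based factory: each new animal class registers itself via a decorator, so even the factory function stays untouched when new types are added. This is a sketch, not part of the original example:

```python
_registry = {}

def register(name):
    """Class decorator that records a class in the factory registry."""
    def wrapper(cls):
        _registry[name] = cls
        return cls
    return wrapper

@register("Dog")
class Dog:
    def speak(self):
        return "Woof!"

@register("Cat")
class Cat:
    def speak(self):
        return "Meow!"

def animal_factory(animal_type):
    # The factory body never changes; new types only add a decorated class.
    try:
        return _registry[animal_type]()
    except KeyError:
        raise ValueError(f"Unknown animal type: {animal_type}")

print(animal_factory("Dog").speak())  # Woof!
```

With this variant, adding a Bird type is a single decorated class definition, with no edits to existing code at all.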

📌 In conclusion, the factory method pattern is a powerful tool for abstracting object creation. It
promotes code reusability, decoupling, and scalability. By understanding and leveraging this
pattern, you can write more maintainable and flexible code.

Example - 2 for factory method


First, let's define a base class for all of our shapes.

import abc

class Shape(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def calculate_area(self):
        pass

    @abc.abstractmethod
    def calculate_perimeter(self):
        pass

Now, WITHOUT a factory function we need to create several concrete, more specific shapes:

class Rectangle(Shape):
    def __init__(self, height, width):
        self.height = height
        self.width = width

    def calculate_area(self):
        return self.height * self.width

    def calculate_perimeter(self):
        return 2 * (self.height + self.width)

class Square(Shape):
    def __init__(self, width):
        self.width = width

    def calculate_area(self):
        return self.width ** 2

    def calculate_perimeter(self):
        return 4 * self.width

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return 3.14 * self.radius * self.radius

    def calculate_perimeter(self):
        return 2 * 3.14 * self.radius

So far, we have created an abstract class and extended it to suit different shapes that will be
available in our library.

BUT there is an issue now: in order to create the different shape objects, clients have to know the names and details of our shapes and perform the creation themselves.

This is where the Factory Method comes into play.

The Factory Method design pattern will help us abstract the available shapes from the client,
i.e. the client does not have to know all the shapes available, but rather only create what they
need during runtime. It will also allow us to centralize and encapsulate the object creation.

Let us achieve this by creating a ShapeFactory that will be used to create the specific shape classes
based on the client's input:

class ShapeFactory:
    def create_shape(self, name):
        if name == 'circle':
            radius = input("Enter the radius of the circle: ")
            return Circle(float(radius))

        elif name == 'rectangle':
            height = input("Enter the height of the rectangle: ")
            width = input("Enter the width of the rectangle: ")
            return Rectangle(int(height), int(width))

        elif name == 'square':
            width = input("Enter the width of the square: ")
            return Square(int(width))

This is our interface for creation. We don't call the constructors of concrete classes, we call the
Factory and ask it to create a shape.

Our ShapeFactory works by receiving information about a shape such as a name and the required
dimensions. Our factory method create_shape() will then be used to create and return ready
objects of the desired shapes.

The client doesn't have to know anything about the object creation or specifics. Using the factory
object, they can create objects with minimal knowledge of how they work:

def shapes_client():
    shape_factory = ShapeFactory()
    shape_name = input("Enter the name of the shape: ")

    shape = shape_factory.create_shape(shape_name)

    print(f"The type of object created: {type(shape)}")
    print(f"The area of the {shape_name} is: {shape.calculate_area()}")
    print(f"The perimeter of the {shape_name} is: {shape.calculate_perimeter()}")

The above example is a classic demonstration of the Factory Method pattern in action. Let's delve
into the details of how the Factory Method aids in this scenario:

📌 Abstraction of Object Creation: The Factory Method pattern abstracts the process of object
creation from the client. In the example, the client doesn't directly instantiate the Circle ,
Rectangle , or Square classes. Instead, the client interacts with the ShapeFactory to request a
shape. The factory then takes care of the creation details.

📌 Centralization of Object Creation: All the logic related to object creation is centralized in the
ShapeFactory . This means that if there's a change in how a shape is created or if a new shape is
added, only the factory needs to be updated. The client code remains unaffected. This
centralization promotes maintainability.

📌 Encapsulation: The Factory Method pattern encapsulates the creation logic. In the example,
the client doesn't need to know the constructors of the concrete shape classes or their specific
parameters. The factory encapsulates these details, asking the client only for the necessary
information through user input.

📌 Flexibility: The Factory Method pattern provides flexibility in terms of object creation. If, in the
future, a new shape like Triangle is introduced, the ShapeFactory can be easily extended to
support it without affecting existing client code.

📌 Consistent Interface: The client interacts with a consistent interface, i.e., the create_shape
method of the ShapeFactory . This method provides a unified way to create any shape. The client
doesn't need to remember different constructors or initialization parameters for different shapes.

📌 Decoupling: The Factory Method pattern decouples the client from the concrete classes. In the
example, the shapes_client function doesn't have any direct dependencies on Circle ,
Rectangle , or Square . It only depends on the abstract Shape class and the ShapeFactory . This
decoupling means that the concrete shape classes can be modified, replaced, or extended without
affecting the client code.

📌 Dynamic Runtime Creation: The Factory Method pattern allows for dynamic object creation
at runtime based on user input or other conditions. In the example, the shape to be created is
determined by the user's input during runtime. The factory then dynamically creates the
appropriate shape object.
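One practical wrinkle with the factory above: because it calls input() directly, it is hard to unit-test. Here is a sketch of a variant that takes the dimensions as keyword arguments instead, with the shape classes trimmed to the essentials for self-containment:

```python
class Rectangle:
    def __init__(self, height, width):
        self.height = height
        self.width = width

    def calculate_area(self):
        return self.height * self.width

class Square:
    def __init__(self, width):
        self.width = width

    def calculate_area(self):
        return self.width ** 2

class ShapeFactory:
    def create_shape(self, name, **dims):
        # Dimensions arrive as keyword arguments instead of input() calls,
        # so the factory can be exercised from automated tests.
        shapes = {'rectangle': Rectangle, 'square': Square}
        if name not in shapes:
            raise ValueError(f"Unknown shape: {name}")
        return shapes[name](**dims)

factory = ShapeFactory()
print(factory.create_shape('rectangle', height=2, width=3).calculate_area())  # 6
```

The interactive version can then become a thin wrapper that gathers user input and delegates to this testable core.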

Example - 3
📌 Revisiting the main concept of Factory Design Pattern which is to create a pattern that provides
an interface for creating objects in a super class, but allows subclasses to alter the type of objects
that will be created. In simpler terms, it's a way to create objects without specifying the exact class
of object that will be created. The main goal of the Factory Pattern is to decouple the creation of
objects from the client that needs them.

Now check out the code below.

Consider that we're developing a ticketing system where we want our users to generate various
ticket types. As we can't predict the exact ticket types a user might want, we need a flexible
solution. The factory method provides us with a standardized interface for ticket creation. At this
initial stage, we're only supporting two ticket types: incident and problem. However, we plan to
introduce more types later. The beauty of the factory method is that it lets us add new specific
classes swiftly without altering the user's existing code.

from abc import ABC, abstractmethod

class Ticket(ABC):
    @abstractmethod
    def ticket_type():
        pass

class IncidentTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class ProblemTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class ServiceRequest(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }

        assert t_type in tickets, f'Ticket type "{t_type}" is not supported'

        return tickets[t_type]

def client_code(ticket_type):
    factory = TicketFactory()
    ticket = factory.create_ticket(ticket_type)
    print(ticket.ticket_type())

if __name__ == '__main__':
    client_code('incident')
    client_code('problem')
    client_code('servicerequest')

IncidentTicket has been created
ProblemTicket has been created
ServiceRequest has been created

📌 The abstract base class Ticket is defined using Python's ABC (Abstract Base Class) module.
This class has an abstract method ticket_type() . The use of the @abstractmethod decorator
indicates that any subclass of Ticket must provide an implementation for this method. This
ensures that all ticket types will have a consistent interface.

from abc import ABC, abstractmethod

class Ticket(ABC):
    @abstractmethod
    def ticket_type():
        pass

Let's delve deeper into the concept of Abstract Base Classes (ABCs) in
Python and how they work.

📌 Abstract Base Classes (ABCs): ABCs are a mechanism in Python for defining abstract classes
where you can't create an instance of the class itself, but you can create instances of its
subclasses. The primary purpose of ABCs is to define a set of common methods that must be
implemented by any of its subclasses. This ensures a consistent interface across all subclasses.

📌 The ABC Module: Python provides the abc module to facilitate the creation of abstract base
classes. The key components of this module are the ABC class and the abstractmethod
decorator.

📌 Defining an Abstract Base Class: To define an abstract base class, you subclass from ABC . In
the provided code, Ticket is defined as an abstract base class by inheriting from ABC :

from abc import ABC, abstractmethod

class Ticket(ABC):
    ...

📌 Abstract Methods: An abstract method is a method that is declared in the abstract base class
but doesn't have any implementation. It's a way of saying, "Any class that inherits from this ABC
must provide an implementation for this method." In Python, you declare an abstract method
using the @abstractmethod decorator.

In the provided code, ticket_type is defined as an abstract method within the Ticket class:

@abstractmethod
def ticket_type():
    pass

The pass statement is a placeholder, indicating that there's no implementation for this method in
the Ticket class.

📌 Subclassing an ABC: When you create a subclass of an ABC, you are contractually obligated to
provide implementations for all of its abstract methods. If you don't, Python will raise a
TypeError when you try to create an instance of the subclass.

In the provided code, IncidentTicket , ProblemTicket , and ServiceRequest are subclasses of the Ticket ABC. Each of these subclasses provides its own implementation of the ticket_type method:

class IncidentTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

📌 Instantiation: You cannot create an instance of an abstract base class. If you try to do so,
Python will raise a TypeError . However, you can create instances of its subclasses, provided they
implement all the abstract methods.

For example, in the provided code, you can't create an instance of Ticket directly, but you can
create instances of IncidentTicket , ProblemTicket , or ServiceRequest .
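This behavior is easy to verify directly. A minimal, self-contained sketch (note that here ticket_type is given a self parameter so it can also be called on instances):

```python
from abc import ABC, abstractmethod

class Ticket(ABC):
    @abstractmethod
    def ticket_type(self):
        pass

class IncidentTicket(Ticket):
    def ticket_type(self):
        return 'IncidentTicket has been created'

try:
    Ticket()  # abstract class: instantiation is forbidden
except TypeError as e:
    print("TypeError:", e)

# A subclass that implements every abstract method can be instantiated
print(IncidentTicket().ticket_type())  # IncidentTicket has been created
```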

📌 Why Use ABCs?: ABCs are a powerful tool for ensuring that a set of related classes adhere to a
particular interface. By defining an ABC, you're setting a clear contract: "Any subclass of this ABC
must implement these methods." This can make your code more robust and maintainable, as it
ensures consistency across related classes.

In the context of the provided code, using an ABC ensures that any new ticket type added in the
future will have the ticket_type method, maintaining consistency across all ticket types.

📌 Three concrete classes ( IncidentTicket , ProblemTicket , and ServiceRequest ) are defined, each inheriting from the Ticket abstract base class. These classes provide concrete implementations of the ticket_type() method. The use of f'{__class__.__name__} has been created' in the return statement is a way to dynamically fetch the name of the class and include it in the returned string.

📌 The TicketFactory class is the heart of the Factory Design Pattern in this code. It has a static
method create_ticket(t_type) . Static methods, denoted by the @staticmethod decorator,
belong to the class and not any specific instance. This means they can be called on the class itself,
without creating an instance.

📌 Inside the create_ticket method, a dictionary named tickets is defined. This dictionary
maps string keys (representing ticket types) to their corresponding classes. This dictionary acts as
a registry of supported ticket types.

📌 The assert statement checks if the provided t_type exists in the tickets dictionary. If the
ticket type is not supported, it raises an AssertionError with a custom message. This is a simple
way to validate input and ensure that only supported ticket types are processed. However, in a
more robust implementation, one might use exception handling with try and except blocks.

📌 If the ticket type is valid, the method returns the corresponding class from the tickets
dictionary. Note that it returns the class itself, not an instance of the class. This is because the
client code might want to further customize the object or use class methods before instantiation.

📌 The client_code function demonstrates how to use the factory. It creates an instance of the TicketFactory (though, technically, since create_ticket is a static method, this instantiation is not necessary). It then calls the create_ticket method with a ticket type string, gets back the corresponding class, and calls the ticket_type() method directly on that class. Note that no instance is created here: because ticket_type is defined without a self parameter, it can be called on the class object itself.
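A subtlety worth pausing on: since create_ticket returns the class itself, ticket.ticket_type() is actually invoked on the class, not an instance. That only works because ticket_type is defined without a self parameter; calling it on an instance would raise a TypeError. A stripped-down sketch (the ABC is omitted for brevity):

```python
class IncidentTicket:
    def ticket_type():  # note: no self parameter
        return f'{__class__.__name__} has been created'

# Called on the class: the plain function runs with zero arguments
print(IncidentTicket.ticket_type())  # IncidentTicket has been created

try:
    # Called on an instance: the bound method passes self implicitly
    IncidentTicket().ticket_type()
except TypeError as e:
    print("TypeError:", e)
```

The `__class__` reference still resolves because any function defined in a class body that mentions `__class__` receives an implicit closure cell pointing at the enclosing class.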

📌 The if __name__ == '__main__': block is a common Python idiom to ensure that the code is
only executed when the script is run directly, and not when it's imported as a module. In this
block, the client_code function is called three times with different ticket types to demonstrate
the functionality of the factory.

In summary, this code provides a clean and scalable implementation of the Factory Design Pattern
in Python. It ensures that the creation of ticket objects is decoupled from the client code, allowing
for easy addition of new ticket types in the future without affecting existing code.

Now let's look at the benefits of the above Factory implementation in this code:
from abc import ABC, abstractmethod

class Ticket(ABC):
    @abstractmethod
    def ticket_type():
        pass

class IncidentTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class ProblemTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class ServiceRequest(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }

        assert t_type in tickets, f'Ticket type "{t_type}" is not supported'

        return tickets[t_type]

def client_code(ticket_type):
    factory = TicketFactory()
    ticket = factory.create_ticket(ticket_type)
    print(ticket.ticket_type())

if __name__ == '__main__':
    client_code('incident')
    client_code('problem')
    client_code('servicerequest')

IncidentTicket has been created
ProblemTicket has been created
ServiceRequest has been created

📌 Decoupling Object Creation from its Use: The Factory pattern decouples the creation of
objects from the parts of the code that use these objects. This means that the client_code
function doesn't need to know about the specific classes ( IncidentTicket , ProblemTicket ,
ServiceRequest ). It only interacts with the TicketFactory .

Example: In the future, if a new ticket type, say FeedbackTicket , is introduced, you only need to
modify the TicketFactory by adding an entry in the tickets dictionary. The client_code
remains unchanged, demonstrating the decoupling.

📌 Centralized Object Creation: All the logic related to creating ticket objects is centralized in the
TicketFactory . This makes the codebase easier to maintain and debug. If there's an issue with
object creation or if enhancements are needed, you only have to look in one place.

Example: Suppose you decide to log every ticket creation for auditing purposes. Instead of adding
logging code in each ticket class, you can simply add it once in the TicketFactory .

📌 Flexibility in Object Creation: The Factory provides flexibility in terms of how objects are
created. This is especially beneficial when object creation is complex or involves multiple steps.

Example: Imagine a scenario where creating a ServiceRequest ticket requires additional steps,
like fetching some data from a database or an API. You can easily implement these steps in the
TicketFactory without affecting other ticket types or the client code.

📌 Consistent Error Handling: By centralizing object creation, you can also centralize error
handling. In the provided code, the assert statement checks if a given ticket type is supported.
This ensures that errors related to unsupported ticket types are handled consistently.

Example: If a developer mistakenly tries to create a ticket type called 'urgentissue', the Factory will
raise an error with the message "Ticket type 'urgentissue' is not supported". This consistent error
handling can be especially useful for debugging and user feedback.
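A side note on the assert used above: assertions are stripped entirely when Python runs with the -O (optimize) flag, so the check would silently disappear in optimized builds. Here is a sketch of a variant that raises ValueError instead, with the ticket classes reduced to stubs for brevity:

```python
class IncidentTicket:
    pass

class ProblemTicket:
    pass

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {'incident': IncidentTicket, 'problem': ProblemTicket}
        if t_type not in tickets:
            # Unlike assert, this check survives `python -O`
            raise ValueError(f'Ticket type "{t_type}" is not supported')
        return tickets[t_type]

try:
    TicketFactory.create_ticket('urgentissue')
except ValueError as e:
    print(e)  # Ticket type "urgentissue" is not supported
```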

📌 Scalability: The Factory pattern makes the system more scalable. As the system grows and
more ticket types are introduced, the Factory can easily accommodate these changes.

Example: In the future, if the system needs to support dozens of ticket types, the Factory can be
extended to fetch the supported ticket types from a configuration file or a database. This dynamic
approach would allow adding new ticket types without even touching the code.

In summary, the Factory implementation in the provided code offers benefits like decoupling,
centralized object creation and error handling, flexibility, and scalability. These benefits make the
system robust, maintainable, and ready for future enhancements.

So let's implement one of the above cases of extending the TicketFactory class
Take one of the above scenarios - Imagine a scenario where creating a ServiceRequest ticket
requires additional steps, like fetching some data from a database or an API. You can easily
implement these steps in the TicketFactory without affecting other ticket types or the client
code.

For simplicity, I'll simulate the database/API fetch with a function. Here's how you can modify the
TicketFactory to accommodate this:

# ... [Other code remains unchanged]

# Simulating a database or API fetch
def fetch_data_for_service_request():
    # In a real-world scenario, this function might connect to a database
    # or make an API call.
    # For this example, let's assume it returns some additional data.
    return "Additional data fetched for ServiceRequest"

class ServiceRequest(Ticket):
    def __init__(self, data):
        self.data = data

    def ticket_type(self):
        return f'{__class__.__name__} with {self.data} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }

        assert t_type in tickets, f'Ticket type "{t_type}" is not supported'

        # If the ticket type is 'servicerequest', fetch additional data.
        # Note: this factory now returns instances, so the other ticket
        # classes' ticket_type methods must also accept self.
        if t_type == 'servicerequest':
            data = fetch_data_for_service_request()
            return tickets[t_type](data)
        else:
            return tickets[t_type]()

# ... [Rest of the code remains unchanged]

Here's a breakdown of the changes:

1. The fetch_data_for_service_request function simulates fetching data for a ServiceRequest ticket.
2. The ServiceRequest class now has an __init__ method that accepts data as an
argument. This data is used when returning the ticket type.

3. Inside the TicketFactory , we check if the t_type is 'servicerequest'. If it is, we fetch the
additional data and pass it when creating the ServiceRequest object.

4. For other ticket types, the creation remains unchanged.

With these modifications, the Factory handles the special requirements of creating a
ServiceRequest ticket without affecting other ticket types or the client code.
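Putting the pieces together, here is a self-contained sketch of just the new ServiceRequest path, with the fetch simulated exactly as above, showing the resulting message:

```python
def fetch_data_for_service_request():
    # Simulates a database or API call (as in the example above)
    return "Additional data fetched for ServiceRequest"

class ServiceRequest:
    def __init__(self, data):
        self.data = data

    def ticket_type(self):
        return f'{__class__.__name__} with {self.data} has been created'

ticket = ServiceRequest(fetch_data_for_service_request())
print(ticket.ticket_type())
# ServiceRequest with Additional data fetched for ServiceRequest has been created
```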

Let's do another scenario of extending TicketFactory class


Suppose you decide to log every ticket creation for auditing purposes. Instead of adding
logging code in each ticket class, you can simply add it once in the TicketFactory .

Here's how you can modify the TicketFactory to add logging:

import logging
from abc import ABC, abstractmethod

# Setting up the logging configuration
logging.basicConfig(level=logging.INFO)

# ... [Other code remains unchanged]

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }

        assert t_type in tickets, f'Ticket type "{t_type}" is not supported'

        # Logging the ticket creation
        logging.info(f"Creating a ticket of type: {t_type}")

        return tickets[t_type]()

# ... [Rest of the code remains unchanged]

Here's a breakdown of the changes:

1. We import the logging module and set up a basic configuration using logging.basicConfig . This will display log messages of level INFO and above to the console.

2. Inside the TicketFactory , right before we return the ticket object, we add a logging statement using logging.info() . This logs the creation of a ticket of a specific type.

3. Note that this version returns an instance ( tickets[t_type]() ) rather than the class itself. For the instance call ticket.ticket_type() to work, the ticket classes must define ticket_type with a self parameter.

With this modification, every time a ticket is created using the TicketFactory , a log message will
be generated, providing an audit trail of ticket creations. This centralized logging approach
ensures that you don't have to add logging statements in each individual ticket class, making the
code cleaner and more maintainable.

Let's do yet another scenario of extending the TicketFactory class
Example: In the future, if the system needs to support dozens of ticket types, the Factory can be
extended to fetch the supported ticket types from a configuration file or a database. This dynamic
approach would allow adding new ticket types without even touching the code.

Let's demonstrate how the TicketFactory can be extended to fetch supported ticket types from
a configuration file. For simplicity, I'll use a JSON file as the configuration file, but in real-world
scenarios, this could be a database, an XML file, or any other data source.

1. Configuration File (tickets_config.json):

{
    "incident": "IncidentTicket",
    "problem": "ProblemTicket",
    "servicerequest": "ServiceRequest",
    "feedback": "FeedbackTicket"
}

In this JSON file, we've added a new ticket type "feedback" mapped to a class "FeedbackTicket".

2. Python Code:

import json
from abc import ABC, abstractmethod

# ... [Other classes like Ticket, IncidentTicket, etc. remain unchanged]

class FeedbackTicket(Ticket):
    def ticket_type(self):
        return f'{__class__.__name__} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        # Load ticket types from the configuration file
        with open('tickets_config.json', 'r') as file:
            tickets_config = json.load(file)

        # Dynamically map ticket type strings to actual class objects
        tickets = {
            ticket_type: globals()[class_name]
            for ticket_type, class_name in tickets_config.items()
        }

        assert t_type in tickets, f'Ticket type "{t_type}" is not supported'

        return tickets[t_type]()

# ... [Rest of the code remains unchanged]

Here's a breakdown of the changes:

1. We added a new class FeedbackTicket to represent the new ticket type.

2. Inside the TicketFactory , we load the ticket types from the tickets_config.json file
using the json module.

3. We then dynamically map the ticket type strings from the configuration file to the actual class
objects using Python's globals() function. This function returns a dictionary of the current
global symbol table, allowing us to fetch class references by their string names.

4. The rest of the TicketFactory remains unchanged, as it uses the dynamically constructed
tickets dictionary to create the desired ticket object.

With this approach, adding a new ticket type is as simple as updating the tickets_config.json
file and adding the corresponding class in the Python code. The TicketFactory will automatically
support the new ticket type without any modifications.
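One caveat with the globals() approach: it only finds classes defined or imported in the factory's own module, and it will look up whatever name the configuration supplies. A sketch of a slightly safer variant using an explicit allow-list registry (names here are illustrative):

```python
import json

class IncidentTicket:
    def ticket_type(self):
        return f'{__class__.__name__} has been created'

class FeedbackTicket:
    def ticket_type(self):
        return f'{__class__.__name__} has been created'

# Explicit registry: avoids reaching into globals() and makes the set of
# instantiable classes an auditable allow-list.
CLASS_REGISTRY = {
    'IncidentTicket': IncidentTicket,
    'FeedbackTicket': FeedbackTicket,
}

def build_tickets(config_json):
    """Map ticket-type keys to classes, validating against the registry."""
    config = json.loads(config_json)
    unknown = [name for name in config.values() if name not in CLASS_REGISTRY]
    if unknown:
        raise ValueError(f"Unknown classes in config: {unknown}")
    return {t: CLASS_REGISTRY[name] for t, name in config.items()}

tickets = build_tickets('{"incident": "IncidentTicket", "feedback": "FeedbackTicket"}')
print(tickets['feedback']().ticket_type())  # FeedbackTicket has been created
```

This keeps the dynamic, config-driven behavior while rejecting any class name that was never meant to be instantiable.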

New Example using factory methods for database connections in batch data pipeline applications.
Let's imagine you're building a data pipeline for an e-commerce company that processes user
orders. Depending on the environment (development, staging, production), you want to connect
to different databases.

from abc import ABC, abstractmethod
import sqlite3
import psycopg2  # Assuming you're using PostgreSQL for production

class DBConnectionFactory(ABC):

    @abstractmethod
    def create_connection(self):
        pass

class SQLiteConnectionFactory(DBConnectionFactory):

    def create_connection(self):
        return sqlite3.connect('development.db')

class PostgreSQLConnectionFactory(DBConnectionFactory):

    def create_connection(self):
        return psycopg2.connect(database="production", user="user",
                                password="password", host="127.0.0.1", port="5432")

def get_factory(environment):
    if environment == "development":
        return SQLiteConnectionFactory()
    elif environment == "production":
        return PostgreSQLConnectionFactory()
    else:
        raise ValueError(f"Unknown environment: {environment}")

# Usage:
environment = "development"  # Could be set from configuration or an environment variable
factory = get_factory(environment)
connection = factory.create_connection()

# Now, you can use this connection to query the database, etc.

📌 Code Explanation:
1. We define an abstract base class DBConnectionFactory with an abstract method
create_connection . This sets the contract that any concrete factory we create must provide
a method to create a DB connection.

2. We then define two concrete factories: SQLiteConnectionFactory for a development environment and PostgreSQLConnectionFactory for a production environment. Each of these factories knows how to create a connection for its specific database.

3. The get_factory function is a simple utility that returns the appropriate factory based on
the environment. This function can be expanded as you add more environments or
databases.

4. In the usage section, we determine the environment (this could be from an environment
variable, configuration file, etc.), get the appropriate factory, and then create the database
connection.

📌 Benefits:
1. The main application code doesn't need to know the specifics of how to connect to each
database. It just asks the factory for a connection.

2. If you ever need to change how the connection is made for a specific environment or if you
want to introduce a new type of database, you only need to modify or add a new factory. The
main application code remains untouched.

📌 Isolation of Concerns: This approach ensures that the logic for creating a database
connection is isolated from the rest of the application. This makes the codebase more
maintainable and reduces the risk of introducing bugs when making changes related to database
connections.

📌 Scalability: In the future, if you decide to introduce connection pooling, caching, or any other
enhancements, you can do so within the respective factory. For instance, if you decide to use a
connection pool for the PostgreSQL connections in production, you can integrate that logic within
the PostgreSQLConnectionFactory without affecting the SQLite connections or any other part of
the application.

📌 Under-the-hood: When you request a connection from a database, there's a lot happening
behind the scenes. The system needs to establish a TCP connection, authenticate, and set up the
session. This can be resource-intensive. By managing connections efficiently (like reusing them
from a pool), you can significantly improve the performance of your application. The factory method pattern doesn't directly deal with these concerns, but by isolating the creation logic, it provides a centralized place to manage them.

📌 Testing and Mocking: Another advantage of this approach is that it makes testing easier.
When writing unit tests, you can create a mock factory that produces mock database connections,
allowing you to test your data processing logic without actually hitting a real database.
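To make that concrete, here is a sketch of a test double: a fake factory exposing the same create_connection interface, backed by an in-memory SQLite database, so pipeline logic can be exercised without a real server. The table and function names are illustrative:

```python
import sqlite3

class FakeConnectionFactory:
    """Test double: same interface as the real factories, but in-memory."""
    def create_connection(self):
        conn = sqlite3.connect(':memory:')
        conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        conn.execute("INSERT INTO orders VALUES (1, 10.0), (2, 20.0)")
        return conn

def total_revenue(factory):
    # The pipeline logic depends only on the factory interface, so the
    # fake can be swapped in transparently during tests.
    conn = factory.create_connection()
    (total,) = conn.execute("SELECT SUM(total) FROM orders").fetchone()
    return total

print(total_revenue(FakeConnectionFactory()))  # 30.0
```

In a real test suite, the production code would receive the factory as a parameter (as total_revenue does here), which is exactly what makes this substitution possible.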

Advanced Use-case:

Let's say you want to introduce connection pooling for the PostgreSQL database in the production
environment. Here's how you can modify the PostgreSQLConnectionFactory to use a connection
pool:

from psycopg2 import pool

class PostgreSQLConnectionFactory(DBConnectionFactory):

    def __init__(self):
        self.minconn = 5
        self.maxconn = 20
        self.connection_pool = None

    def create_connection(self):
        if not self.connection_pool:
            self.connection_pool = pool.SimpleConnectionPool(
                self.minconn, self.maxconn,
                database="production", user="user", password="password",
                host="127.0.0.1", port="5432")
        return self.connection_pool.getconn()

    def release_connection(self, conn):
        if self.connection_pool:
            self.connection_pool.putconn(conn)

📌 Code Explanation:
1. We've added a connection pool to the PostgreSQLConnectionFactory . The
create_connection method now fetches a connection from the pool instead of creating a
new one every time.

2. The release_connection method is used to return a connection back to the pool once
you're done with it.

3. The connection pool is initialized lazily, i.e., it's created the first time you request a
connection. This ensures that if your application never needs a PostgreSQL connection, the
pool is never created.

📌 Usage Consideration: With connection pooling, it's crucial to remember to return the
connection to the pool once you're done with it. Otherwise, you'll exhaust the pool over time. This
is a responsibility that the main application code must bear, but the benefit is a more efficient use
of resources, especially under high load.
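One common way to discharge that responsibility automatically is to hand out connections through a context manager, so the connection is returned to the pool even if the caller raises an exception. The sketch below uses a tiny in-memory stand-in for a real pool (psycopg2's pool classes expose the same getconn/putconn pair); InMemoryPool and PooledFactory are illustrative names, not part of the example above.

```python
from contextlib import contextmanager

class InMemoryPool:
    """Toy stand-in for a real connection pool."""
    def __init__(self):
        self._free = ["conn-1", "conn-2"]

    def getconn(self):
        return self._free.pop()

    def putconn(self, conn):
        self._free.append(conn)

class PooledFactory:
    def __init__(self):
        self._pool = InMemoryPool()

    @contextmanager
    def connection(self):
        conn = self._pool.getconn()
        try:
            yield conn                 # hand the connection to the caller
        finally:
            self._pool.putconn(conn)   # always returned, even on error

factory = PooledFactory()
with factory.connection() as conn:
    print(conn)                        # conn-2
# On leaving the with-block the connection is back in the pool automatically.
```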

In conclusion, using a factory method for database connections in batch data pipeline applications
provides flexibility, maintainability, and efficiency. It abstracts away the specifics of connecting to
different databases, allowing you to focus on the core logic of your application.

🐍🚀 Proxy Design Pattern in Python 🐍🚀

Proxy Design Pattern in Python is a structural design pattern that lets you provide a substitute or
placeholder for another object. A proxy controls access to the original object, allowing you to
perform something either before or after the request gets through to the original object.

📌 This means that instead of directly interacting with the original object, you interact with the
proxy, which then decides how and when to forward the request to the original object.

📌 Use Cases:
1. Lazy Initialization: When an object is heavy and consumes a lot of resources, you might not
want to create it unless it's really needed. The proxy can delay the instantiation of the original
object until it's absolutely necessary.

2. Access Control: If you want to restrict access to the original object based on certain
conditions, a proxy can be used. For instance, checking if a user has the necessary
permissions before allowing a certain operation.

3. Logging and Monitoring: Before or after forwarding a request to the original object, the
proxy can log details about the request, which can be useful for debugging or monitoring
purposes.

4. Performance Measurement: The proxy can record the time it takes to execute operations,
giving insights into performance bottlenecks.
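The first use case, lazy initialization, can be sketched in a few lines. HeavyReport below is a hypothetical expensive object; its proxy defers construction until the first real request and then reuses the built object.

```python
class HeavyReport:
    def __init__(self):
        print("HeavyReport built (expensive!)")   # simulate a costly setup
        self.pages = 500

    def summary(self):
        return f"Report with {self.pages} pages"

class HeavyReportProxy:
    def __init__(self):
        self._report = None        # nothing expensive happens yet

    def summary(self):
        if self._report is None:   # build only on first real use
            self._report = HeavyReport()
        return self._report.summary()

proxy = HeavyReportProxy()   # cheap: nothing printed yet
print(proxy.summary())       # builds the report, then: Report with 500 pages
print(proxy.summary())       # reuses the already-built object
```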

Let's see an example WITH and then WITHOUT the "Proxy Design Pattern in Python"

1. Code without Proxy Design Pattern:


Imagine a scenario where we have a Database class that allows us to perform CRUD (Create,
Read, Update, Delete) operations. For simplicity, let's consider only the read operation.

class Database:
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key, "Data not found")

    def insert(self, key, value):
        self.data[key] = value

Now, let's say we have a client code that interacts with this database:

db = Database()
db.insert("key1", "value1")
print(db.read("key1"))

📌 The above code works fine, but there are some issues:
📌 There's no control over who can access the database. Any part of the code can read or write to
the database directly.

📌 If we want to add some logging mechanism to log every read operation, we'd have to modify
the Database class, which violates the Open/Closed principle.

📌 If we want to add a caching mechanism or any other pre/post-processing, we'd again have to
modify the Database class.

2. Refactoring with Proxy Design Pattern:


To address the above issues, we can introduce a DatabaseProxy class that will act as a proxy for
our Database class.

class DatabaseProxy:
    def __init__(self, database):
        self.database = database
        self.access_count = 0

    def read(self, key):
        self.access_count += 1
        print(f"Access count: {self.access_count}")
        return self.database.read(key)

    def insert(self, key, value):
        self.database.insert(key, value)

Now, the client code will interact with the DatabaseProxy instead of the Database directly:

db = Database()
proxy = DatabaseProxy(db)
proxy.insert("key1", "value1")
print(proxy.read("key1"))
print(proxy.read("key2"))

📌 With this approach, we've added a layer of control. The client code interacts with the proxy,
and the proxy decides how and when to forward the request to the original Database object.

📌 We've added a logging mechanism (the access count) without modifying the original Database
class.

📌 In the future, if we want to add more features like caching, we can easily do that in the
DatabaseProxy class without touching the Database class.

📌 The Database class remains unchanged, and we've adhered to the Open/Closed principle.

In conclusion, the Proxy Design Pattern provides a way to control access to an object by acting as
an intermediary. This pattern is especially useful when we want to add additional functionalities to
an object without modifying its structure.

Let's break down the refactored code with the Proxy Design Pattern and see how it addresses the issues of the original code.

Original Issues:
📌 No control over who can access the database.
📌 No logging mechanism without modifying the Database class.

📌 Difficulty in adding other pre/post-processing mechanisms without altering the Database class.

Refactored Code with Proxy Design Pattern:

class DatabaseProxy:
    def __init__(self, database):
        self.database = database
        self.access_count = 0

    def read(self, key):
        self.access_count += 1
        print(f"Access count: {self.access_count}")
        return self.database.read(key)

    def insert(self, key, value):
        self.database.insert(key, value)

Detailed Explanation:
📌 Control Over Access: In the refactored code, the DatabaseProxy acts as an intermediary
between the client and the actual Database object. This means that any client wanting to interact
with the database will have to go through the proxy. By doing this, we can control, restrict, or
modify the access as needed. For instance, if we wanted to limit the number of reads to the
database, we could easily implement that logic within the proxy.

📌 Logging Mechanism: One of the issues with the original code was the inability to add a
logging mechanism without modifying the Database class. With the proxy in place, we've
introduced an access_count attribute that keeps track of the number of times the read method
is called. Every time a client tries to read from the database, the proxy increments this count and
prints it. This is a simple form of logging, and it's implemented without touching the original
Database class. If we wanted more advanced logging, such as timestamped logs or logs for
different types of operations, we could easily expand upon this within the proxy.

📌 Ease of Adding Pre/Post-Processing: The proxy pattern shines when we think about adding
additional functionalities around the main operation. For instance, if we wanted to introduce a
caching mechanism, we could implement it within the proxy. Before forwarding a read request to
the actual database, the proxy could check if the data is already in the cache. If it is, return the
cached data; if not, fetch from the database, store it in the cache, and then return it. This caching
logic can be added to the proxy without altering the Database class. Similarly, any other pre/post-
processing can be introduced in the proxy, ensuring the original class remains untouched.

In essence, the Proxy Design Pattern has provided a flexible and scalable structure. It allows for
the addition of functionalities and controls without modifying the core object, adhering to the
Open/Closed principle of software design.

📌 Let's consider a real-life use-case: Imagine you're building a system for a library. Books in the
library can be either physical or digital. Access to digital books requires a special membership. We
can use the Proxy Design Pattern to control access to digital books.

class Book:
    def __init__(self, title, content):
        self.title = title
        self.content = content

    def display(self):
        return self.content

class DigitalBookProxy:
    def __init__(self, book):
        self._book = book
        self._authenticated = False

    def authenticate(self, password):
        if password == "SpecialAccess":
            self._authenticated = True
        else:
            print("Authentication failed!")

    def display(self):
        if self._authenticated:
            return self._book.display()
        else:
            return "Access Denied! Authenticate first."

# Usage
book = Book("Digital Python", "This is the content of the digital book.")
proxy = DigitalBookProxy(book)

print(proxy.display())  # Access Denied! Authenticate first.

proxy.authenticate("wrong_password")  # Authentication failed!
print(proxy.display())  # Access Denied! Authenticate first.
proxy.authenticate("SpecialAccess")
print(proxy.display())  # This is the content of the digital book.

📌 What the code does:


1. We have a simple Book class that represents both physical and digital books. It has a
display method to show the content.

2. The DigitalBookProxy acts as a proxy for the Book class. It has an authenticate method
to check if the user has access to the digital book.

3. In the usage section, we create a digital book and a proxy for it. Without authentication, the
proxy denies access. Once authenticated with the correct password, the proxy grants access
to the book's content.

📌 Under the hood:
- The proxy pattern here decouples the authentication logic from the Book class. This means the Book class remains focused on its primary responsibility: representing a book.
- The proxy acts as an intermediary and adds an additional layer of control, in this case, authentication. This separation of concerns ensures that each class adheres to the Single Responsibility Principle, a key principle in object-oriented design.

In essence, the Proxy Design Pattern provides a way to add additional behaviors or controls to
object access without modifying the object's actual implementation. This makes the system more
modular and easier to maintain.

Example 2 - Real life use case of Proxy Design Pattern in Python
Let's consider a scenario: a video streaming platform, similar to YouTube or Netflix. Streaming
videos can consume a lot of bandwidth, and not all videos are available for all regions due to
licensing restrictions. We can use the Proxy Design Pattern to handle these complexities.

📌 Scenario: A video streaming platform where:
1. Videos might not be available in all regions.
2. Users need to have an active subscription to view premium content.
3. We want to lazily load the video only when it's actually requested to save bandwidth.

class Video:
    def __init__(self, title, content):
        self.title = title
        self.content = content

    def play(self):
        return f"Playing {self.title}: {self.content}"

class VideoProxy:
    def __init__(self, title):
        self.title = title
        self._video = None
        self._region = "US"
        self._premium_content = ["Exclusive Show", "VIP Movie"]
        self._subscribed = False

    def set_region(self, region):
        self._region = region

    def subscribe(self):
        self._subscribed = True

    def play(self):
        if self.title in self._premium_content and not self._subscribed:
            return "This is premium content. Please subscribe to view."

        if self._region == "EU" and self.title == "Restricted Show":
            return "This video is not available in your region."

        if not self._video:
            # Simulating lazy loading. In a real-world scenario, this might
            # involve fetching the video from a server.
            content = f"Content of {self.title}"
            self._video = Video(self.title, content)

        return self._video.play()

# Usage
proxy = VideoProxy("Exclusive Show")
print(proxy.play())  # This is premium content. Please subscribe to view.

proxy.subscribe()
print(proxy.play())  # Playing Exclusive Show: Content of Exclusive Show

proxy2 = VideoProxy("Restricted Show")
proxy2.set_region("EU")
print(proxy2.play())  # This video is not available in your region.

📌 What the code does:
1. The Video class represents a video with a title and content. It has a play method to simulate playing the video.
2. The VideoProxy class acts as a proxy for the Video class. It handles region restrictions, subscription checks, and lazy loading of the video.
3. In the usage section, we demonstrate the proxy's behavior for premium content and region-restricted content.

📌 Under the hood:
- The proxy pattern allows us to separate concerns. The Video class remains simple and focused on representing a video. The complexities of region restrictions, subscription checks, and lazy loading are handled by the proxy.
- Lazy loading is implemented by only creating an instance of the Video class when the play method is called. This simulates the behavior of only fetching/loading the video when it's actually requested.
- The proxy pattern provides flexibility. If in the future, more rules or behaviors need to be added (e.g., age restrictions, content warnings), they can be added to the proxy without altering the Video class.
This example showcases how the Proxy Design Pattern can be used to manage complexities in a
system, ensuring that each component remains focused on its primary responsibility.

Example 3 - Real life use case of Proxy Design Pattern in Python
Let's delve into a scenario involving a cloud storage system, similar to Google Drive or Dropbox. In
such systems, users upload and download files. However, downloading large files can be resource-
intensive. Additionally, there might be access controls in place, rate limiting, and logging
requirements. We can use the Proxy Design Pattern to manage these complexities.

📌 Scenario: A cloud storage system where:
1. Large files are lazily loaded to save bandwidth and memory.
2. Users need authentication to download files.
3. There's a rate limit on how often a user can download files.
4. All download requests are logged for audit purposes.

import time

class File:
    def __init__(self, name, content):
        self.name = name
        self.content = content

    def download(self):
        return f"Downloading {self.name}: {self.content}"

class FileProxy:
    def __init__(self, name):
        self.name = name
        self._file = None
        self._last_access_time = None
        self._authenticated = False
        self._rate_limit_seconds = 10

    def authenticate(self, token):
        # Simple authentication simulation
        if token == "SecureToken":
            self._authenticated = True
            return "Authenticated successfully!"
        return "Authentication failed!"

    def download(self):
        current_time = time.time()

        if not self._authenticated:
            return "Authentication required to download the file."

        if self._last_access_time and (current_time - self._last_access_time) < self._rate_limit_seconds:
            return "Rate limit exceeded. Please wait before downloading again."

        if not self._file:
            # Simulating lazy loading. In a real-world scenario, this might
            # involve fetching the file from a server.
            content = f"Content of {self.name}"
            self._file = File(self.name, content)

        self._last_access_time = current_time
        self._log_request()
        return self._file.download()

    def _log_request(self):
        # Simulating logging. In a real-world scenario, this might involve
        # writing to a database or logging service.
        print(f"File {self.name} was downloaded at {time.ctime(self._last_access_time)}")

# Usage
proxy = FileProxy("BigDataFile.txt")
print(proxy.download())  # Authentication required to download the file.

proxy.authenticate("WrongToken")  # Authentication failed!
print(proxy.download())  # Authentication required to download the file.

proxy.authenticate("SecureToken")  # Authenticated successfully!
print(proxy.download())  # File BigDataFile.txt was downloaded at [current time].
                         # Downloading BigDataFile.txt: Content of BigDataFile.txt

time.sleep(5)
print(proxy.download())  # Rate limit exceeded. Please wait before downloading again.

📌 What the code does:
1. The File class represents a file with a name and content. It has a download method to simulate downloading the file.
2. The FileProxy class acts as a proxy for the File class. It handles authentication, rate limiting, lazy loading, and logging.
3. In the usage section, we demonstrate the proxy's behavior for authentication, rate limiting, and logging.

📌 Under the hood:
- The proxy pattern allows us to separate concerns. The File class remains simple and focused on representing a file. The complexities of authentication, rate limiting, lazy loading, and logging are handled by the proxy.
- Lazy loading is implemented by only creating an instance of the File class when the download method is called. This simulates the behavior of only fetching/loading the file when it's actually requested.
- The proxy pattern provides flexibility. If in the future, more rules or behaviors need to be added (e.g., file sharing, encryption), they can be added to the proxy without altering the File class.

This example demonstrates how the Proxy Design Pattern can be effectively used to manage
complexities in a cloud storage system, ensuring modularity and maintainability.

Example 4 - Real life use case of Proxy Design Pattern in Python
Let's explore a scenario involving a smart home system, where devices can be controlled remotely.
In such systems, there are concerns about security, device state caching, and event logging. The
Proxy Design Pattern can be employed to address these challenges.

📌 Scenario: A smart home system where:
1. Devices need to be accessed securely.
2. The state of devices (e.g., on/off, temperature) is cached to reduce unnecessary communication and save energy.
3. All device control actions are logged for security and debugging purposes.

import time  # needed for time.ctime() in the logging helper below

class SmartDevice:
    def __init__(self, device_name):
        self.device_name = device_name
        self.state = "off"

    def toggle(self):
        self.state = "on" if self.state == "off" else "off"
        return f"{self.device_name} turned {self.state}"

class SmartDeviceProxy:
    def __init__(self, device_name):
        self.device_name = device_name
        self._device = None
        self._state_cache = "off"
        self._authenticated = False

    def authenticate(self, password):
        if password == "SmartHome2023":
            self._authenticated = True
            return "Authentication successful!"
        return "Authentication failed!"

    def toggle(self):
        if not self._authenticated:
            return "Authentication required to control the device."

        if not self._device:
            self._device = SmartDevice(self.device_name)

        self._state_cache = "on" if self._state_cache == "off" else "off"
        self._log_action()
        return self._device.toggle()

    def _log_action(self):
        # Simulating logging. In a real-world scenario, this might involve
        # writing to a database or logging service.
        print(f"{self.device_name} was toggled to {self._state_cache} at {time.ctime()}")

# Usage
proxy = SmartDeviceProxy("LivingRoomLight")
print(proxy.toggle())  # Authentication required to control the device.

proxy.authenticate("WrongPassword")  # Authentication failed!
print(proxy.toggle())  # Authentication required to control the device.

proxy.authenticate("SmartHome2023")  # Authentication successful!
print(proxy.toggle())  # LivingRoomLight was toggled to on at [current time].
                       # LivingRoomLight turned on
print(proxy.toggle())  # LivingRoomLight was toggled to off at [current time].
                       # LivingRoomLight turned off
📌 What the code does:
1. The SmartDevice class represents a smart device with a name and state. It has a toggle method to simulate turning the device on or off.
2. The SmartDeviceProxy class acts as a proxy for the SmartDevice class. It handles authentication, state caching, and logging.
3. In the usage section, we demonstrate the proxy's behavior for authentication, state caching, and logging.

📌 Under the hood:
- The proxy pattern allows us to separate concerns. The SmartDevice class remains simple and focused on representing a device. The complexities of authentication, state caching, and logging are handled by the proxy.
- State caching is implemented by maintaining a _state_cache variable in the proxy. This simulates the behavior of reducing unnecessary communication with the actual device if we already know its state.
- The proxy pattern provides flexibility. If in the future, more rules or behaviors need to be added (e.g., device scheduling, energy-saving modes), they can be added to the proxy without altering the SmartDevice class.

This example illustrates how the Proxy Design Pattern can be effectively used in a smart home
context, ensuring security, efficiency, and maintainability.

Example 5 - Real life use case of Proxy Design Pattern in Python for Logging and Monitoring
Before or after forwarding a request to the original object, the proxy can log details about the
request, which can be useful for debugging or monitoring purposes.

Let's delve into a scenario involving an API server that processes requests for user data. In such
systems, it's crucial to monitor and log requests for performance analysis, debugging, and security
audits. The Proxy Design Pattern can be employed to seamlessly integrate this logging and
monitoring functionality.

📌 Scenario: An API server where:
1. User data is fetched based on user IDs.
2. Every request is logged with its timestamp, user ID, and response time.
3. The system monitors and logs any suspiciously frequent requests to prevent potential abuse.

import time

class APIServer:
    def __init__(self):
        # Simulating a small database of user data
        self._database = {
            "123": "Data for user 123",
            "456": "Data for user 456",
            "789": "Data for user 789"
        }

    def fetch_data(self, user_id):
        return self._database.get(user_id, "User not found")

class APIServerProxy:
    def __init__(self):
        self._server = APIServer()
        self._request_timestamps = {}

    def fetch_data(self, user_id):
        start_time = time.time()

        # Check for suspiciously frequent requests
        if (user_id in self._request_timestamps
                and start_time - self._request_timestamps[user_id] < 1):  # less than 1 second since last request
            self._log_request(user_id, "Denied due to suspicious activity")
            return "Request denied"

        data = self._server.fetch_data(user_id)
        end_time = time.time()

        self._request_timestamps[user_id] = end_time
        self._log_request(user_id, f"Fetched in {end_time - start_time:.4f} seconds")

        return data

    def _log_request(self, user_id, message):
        # Simulating logging. In a real-world scenario, this might involve
        # writing to a database, file, or logging service.
        print(f"[{time.ctime()}] User ID: {user_id} - {message}")

# Usage
proxy = APIServerProxy()
print(proxy.fetch_data("123"))  # [current time] User ID: 123 - Fetched in 0.0001 seconds. Data for user 123
print(proxy.fetch_data("999"))  # [current time] User ID: 999 - Fetched in 0.0001 seconds. User not found
print(proxy.fetch_data("123"))  # [current time] User ID: 123 - Denied due to suspicious activity. Request denied

📌 What the code does:
1. The APIServer class simulates a simple API server with a small database of user data. It has a fetch_data method to retrieve user data based on user IDs.
2. The APIServerProxy class acts as a proxy for the APIServer class. It handles logging and monitoring of requests.
3. In the usage section, we demonstrate the proxy's behavior for logging, monitoring, and detecting suspiciously frequent requests.

📌 Under the hood:
- The proxy pattern allows us to separate concerns. The APIServer class remains simple and focused on serving user data. The complexities of logging and monitoring are handled by the proxy.
- The monitoring is implemented by maintaining a _request_timestamps dictionary in the proxy. This dictionary tracks the last request time for each user ID, allowing the system to detect and prevent potential abuse.
- The proxy pattern provides flexibility. If in the future, more rules or behaviors need to be added (e.g., IP-based rate limiting, error logging), they can be added to the proxy without altering the APIServer class.

This example showcases how the Proxy Design Pattern can be effectively used in an API server
context, ensuring robust logging and monitoring capabilities.

Example 6 - Real life use case of Proxy Design Pattern in Python for Performance Measurement
The proxy can record the time it takes to execute operations, giving insights into performance
bottlenecks.

Let's explore a scenario involving a complex mathematical computation system, such as one used
for scientific simulations or financial modeling. In such systems, understanding the performance
of various computations is crucial for optimization and resource allocation. The Proxy Design
Pattern can be employed to seamlessly integrate performance measurement.

📌 Scenario: A computation system where:
1. Complex mathematical operations are performed.
2. The time taken for each operation is measured and logged.
3. If an operation takes too long, a warning is issued.

import time
import math

class ComputationEngine:
    def heavy_calculation(self, x):
        # Simulating a heavy computation
        time.sleep(2)
        return math.exp(x)

    def moderate_calculation(self, x):
        # Simulating a moderate computation
        time.sleep(1)
        return math.sin(x)

class ComputationEngineProxy:
    def __init__(self):
        self._engine = ComputationEngine()

    def heavy_calculation(self, x):
        start_time = time.time()
        result = self._engine.heavy_calculation(x)
        end_time = time.time()

        self._log_performance("heavy_calculation", end_time - start_time)
        return result

    def moderate_calculation(self, x):
        start_time = time.time()
        result = self._engine.moderate_calculation(x)
        end_time = time.time()

        self._log_performance("moderate_calculation", end_time - start_time)
        return result

    def _log_performance(self, operation, duration):
        # Simulating logging. In a real-world scenario, this might involve
        # writing to a database, file, or logging service.
        print(f"{operation} took {duration:.2f} seconds to complete.")
        if duration > 1.5:
            print(f"Warning: {operation} is taking longer than expected!")

# Usage
proxy = ComputationEngineProxy()
print(proxy.heavy_calculation(5))     # heavy_calculation took 2.00 seconds to complete.
                                      # Warning: heavy_calculation is taking longer than expected!
print(proxy.moderate_calculation(3))  # moderate_calculation took 1.00 seconds to complete.

📌 What the code does: 1. The ComputationEngine class simulates a system that performs
complex mathematical operations. It has methods like heavy_calculation and
moderate_calculation to simulate computations of varying intensities. 2. The
ComputationEngineProxy class acts as a proxy for the ComputationEngine class. It measures
and logs the time taken for each computation. 3. In the usage section, we demonstrate the proxy's
behavior for performance measurement and warnings.

📌 Under the hood: - The proxy pattern allows us to separate concerns. The ComputationEngine
class remains focused on performing mathematical operations. The complexities of performance
measurement are handled by the proxy. - The performance measurement is implemented using
Python's built-in time module. The start and end times of each operation are recorded, and the
difference gives the duration. - The proxy pattern provides flexibility. If in the future, more rules or
behaviors need to be added (e.g., memory usage tracking, parallel computation), they can be
added to the proxy without altering the ComputationEngine class.

This example illustrates how the Proxy Design Pattern can be effectively used in a computation-
intensive context, ensuring robust performance measurement capabilities.

🐍🚀 The Singleton Design Pattern in Python 🐍🚀

The singleton pattern offers a way to implement a class from which you can only create one
object, hence the name singleton.

It is useful when we need to create one and only one object, for example, to store and maintain a global state for our program. In Python, this pattern can be implemented using some special built-in features. The singleton pattern restricts the instantiation of a class to one object, which is useful when you need a single object to coordinate actions for the system. To make this work, we need mechanisms that prevent the class from being instantiated more than once and that also prevent cloning.

📌 The Singleton pattern is often used for logging, driver objects, caching, thread pools, and
database connections. For instance, if you have a configuration manager in a system, you might
want to ensure that there's only one instance of this manager so that you don't end up with
conflicting configurations.

📌 One of the reasons some consider Singleton as an anti-pattern is because it can introduce
global state into an application. Global state is often seen as undesirable because it can make the
system harder to reason about, and it can introduce subtle bugs if not managed carefully.

### Example-1 of Singleton Pattern use-case

Now, let's discuss the implementation details:

📌 In Python, the Singleton pattern can be implemented in several ways due to its dynamic
nature. Some of the common methods include:

- Using a module
- Using a class variable
- Using decorators
- Using metaclasses

For our discussion, let's focus on the metaclass approach, as it's one of the most Pythonic ways
to implement the Singleton pattern.

📌 Metaclasses in Python are a deep topic, but in essence, they are classes of classes. They allow one to customize class creation in various ways. By using metaclasses, we can ensure that a class is instantiated only once.
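A minimal sketch of that metaclass approach might look like this: SingletonMeta intercepts the call that creates instances and caches the first one per class. AppState is a hypothetical class used only to demonstrate the mechanism.

```python
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # type.__call__ normally runs __new__ and __init__;
        # here we let it run only once per class and cache the result.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class AppState(metaclass=SingletonMeta):
    def __init__(self):
        self.settings = {}

a = AppState()
b = AppState()
a.settings["mode"] = "dark"
print(a is b)              # True -- one shared instance
print(b.settings["mode"])  # dark
```

Because the interception happens in the metaclass, any class that declares metaclass=SingletonMeta becomes a singleton without repeating the boilerplate in its own __new__.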
Let's see an example WITH and then WITHOUT the "Singleton Design Pattern in Python"
📌 Code Without Singleton Pattern
Consider a hypothetical scenario where we have a configuration manager for an application. This
manager is responsible for holding and managing configuration values. Without the Singleton
pattern, the code might look like this:

class ConfigurationManager:
    def __init__(self):
        self._config_values = {}

    def set(self, key, value):
        self._config_values[key] = value

    def get(self, key):
        return self._config_values.get(key)

# Usage
config1 = ConfigurationManager()
config1.set("api_key", "123456")

config2 = ConfigurationManager()
print(config2.get("api_key"))  # This will print None, not "123456"

📌 Issues with the Above Code


1. Multiple instances of ConfigurationManager can be created.

2. Each instance has its own state, so changes made in one instance are not reflected in others.

3. This can lead to inconsistent states across the application, especially if different parts of the
code are using different instances of the configuration manager.

📌 Refactoring with Singleton Pattern


To ensure that only one instance of ConfigurationManager exists, we can implement the
Singleton pattern. Here's one way to do it using a class variable and overriding the __new__
method:

class ConfigurationManager:
    _instance = None

    def __new__(cls):
        if not cls._instance:
            cls._instance = super(ConfigurationManager, cls).__new__(cls)
            cls._instance._config_values = {}
        return cls._instance

    def set(self, key, value):
        self._config_values[key] = value

    def get(self, key):
        return self._config_values.get(key)

# Usage
config1 = ConfigurationManager()
config1.set("api_key", "123456")

config2 = ConfigurationManager()
print(config2.get("api_key"))  # This will now print "123456"

📌 How the Singleton Pattern Resolves the Issues


1. The __new__ method ensures that only one instance of ConfigurationManager is created. If
an instance already exists, it returns that instance instead of creating a new one.

2. All parts of the code that create an instance of ConfigurationManager will get the same
instance, ensuring that the state is consistent across the application.
3. This prevents the possibility of having different configurations in different parts of the code,
leading to more predictable behavior.

📌 Conclusion
The Singleton pattern is a powerful tool for ensuring that a class has only one instance and
provides a global point of access to that instance. This is especially useful in scenarios like
configuration management, logging, or any other task where it's crucial to maintain a consistent
state across the application.

Let's dive deep into the refactored code and understand how the Singleton Design Pattern addresses the issues of the original code.
📌 Singleton Mechanism
In the refactored code, the Singleton pattern is implemented using the __new__ method. The
__new__ method is responsible for creating and returning a new instance of a class. By overriding
this method, we can control the instantiation process of the class.

def __new__(cls):
    if not cls._instance:
        cls._instance = super(ConfigurationManager, cls).__new__(cls)
        cls._instance._config_values = {}
    return cls._instance

Here's a breakdown of how this mechanism works:

1. We first check if an instance ( _instance ) of the class ( cls ) already exists.

2. If it doesn't exist, we create a new instance using the super() function and store it in the
_instance class variable.

3. We then initialize the _config_values dictionary for this instance.

4. Finally, we return the _instance , whether it was just created or already existed.

📌 Addressing the Issues


1. Multiple Instances Problem: In the original code, every time we called
ConfigurationManager() , a new instance of the class was created. This led to multiple
instances with different states. With the Singleton pattern, the __new__ method ensures that
only one instance of the class is ever created. All subsequent calls to
ConfigurationManager() will return this single instance.

2. Inconsistent State Across Instances: In the original code, since multiple instances could be
created, setting a value in one instance wouldn't reflect in another. With the Singleton
pattern, since there's only one instance, any change made to this instance is reflected
everywhere. This ensures a consistent state across the application. For example, setting the
"api_key" in config1 and then retrieving it from config2 gives the expected result because
both config1 and config2 are actually references to the same instance.

3. Preventing Unintended Behavior: Without the Singleton pattern, different parts of the code
could unintentionally work with different instances of the configuration manager, leading to
unpredictable behavior. With the Singleton pattern, all parts of the code that use
ConfigurationManager are guaranteed to work with the same instance, ensuring
predictable and consistent behavior.
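One caveat worth noting: the `__new__`-based implementation above is not thread-safe, because two threads can both see `_instance` as `None` and each create an instance. Here's a hypothetical lock-protected variant; the class name and the double-checked locking idiom are illustrative assumptions, not part of the original example:

```python
import threading

class ThreadSafeConfigurationManager:
    _instance = None
    _lock = threading.Lock()  # guards instance creation

    def __new__(cls):
        # First check avoids taking the lock on every call
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have created the
                # instance while we were waiting for the lock
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance._config_values = {}
        return cls._instance

    def set(self, key, value):
        self._config_values[key] = value

    def get(self, key):
        return self._config_values.get(key)

# Usage: both names refer to the same instance
a = ThreadSafeConfigurationManager()
a.set("api_key", "123456")
b = ThreadSafeConfigurationManager()
print(b.get("api_key"))  # 123456
print(a is b)            # True
```

The check before the lock keeps the common path cheap; the check inside the lock keeps creation safe.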

📌 Additional Benefits
1. Memory Efficiency: Since only one instance of the class is created, memory usage is
optimized. This can be especially beneficial in larger applications where many parts of the
code might need access to the configuration manager.

2. Global Access Point: The Singleton instance acts as a global access point, ensuring that all
parts of the application can access the configuration without the need to pass the instance
around.

📌 Conclusion
The Singleton Design Pattern in the refactored code ensures that only one instance of the
ConfigurationManager class exists and provides a global point of access to this instance. This
design choice directly addresses the issues present in the original code, leading to a more
consistent, predictable, and efficient application behavior.

Here's a real-life use-case code:

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class DatabaseConnection(metaclass=SingletonMeta):
    def __init__(self, connection_string):
        self.connection_string = connection_string
        self.connection = self._create_connection()

    def _create_connection(self):
        # Here, you'd typically establish a connection to the database.
        # For the sake of this example, we'll just simulate it.
        return f"Connected to {self.connection_string}"

    def query(self, sql_query):
        # Simulating a query execution
        return f"Executing '{sql_query}' on {self.connection_string}"

# Usage
db1 = DatabaseConnection("Server1")
db2 = DatabaseConnection("Server2")

print(db1 is db2)  # True

print(db1.query("SELECT * FROM users"))

📌 In the above code, SingletonMeta is our metaclass that ensures only one instance of any
class that uses it as a metaclass is created. The DatabaseConnection class is a hypothetical class
representing a connection to a database. We've used the Singleton pattern here to ensure that we
have only one connection to the database, no matter how many times we try to instantiate the
DatabaseConnection class.

📌 The __call__ method in the metaclass is a special method that gets called when an object is
instantiated. Here, we're overriding it to check if an instance of the class already exists. If it does,
we return that instance; otherwise, we create a new one.

📌 The DatabaseConnection class has a method query that simulates executing a SQL query on
the database. When we create two objects, db1 and db2 , with different connection strings and
check if they're the same object using the is operator, it returns True . This confirms that our
Singleton implementation is working as expected.

In summary, the Singleton pattern can be a powerful tool when used judiciously. It's essential to
understand the implications of introducing global state and to weigh the pros and cons in the
context of the specific problem you're trying to solve.

Let's see how exactly the SingletonMeta metaclass ensures only one instance of any class that uses it as a metaclass is created.
Simply put, by overriding the __call__ method, the SingletonMeta metaclass ensures that only
one instance of each class is created and stored. Any subsequent attempts to create an instance
of the class will return the existing instance from the _instances dictionary.

def __call__(cls, *args, **kwargs):
    if cls not in cls._instances:
        instance = super().__call__(*args, **kwargs)
        cls._instances[cls] = instance
    return cls._instances[cls]

Now let's go step by step through the SingletonMeta metaclass
📌 SingletonMeta Definition:

class SingletonMeta(type):
    _instances = {}

Here, SingletonMeta inherits from the built-in type class, making it a metaclass. The
_instances dictionary is a class-level attribute that will store instances of classes that use this
metaclass.

📌 The __call__ Method:

def __call__(cls, *args, **kwargs):

The __call__ method is a special method in Python. For classes, it's responsible for creating and
returning a new instance of the class. By overriding this method in our metaclass, we can control
the instantiation of classes that use SingletonMeta as their metaclass.

📌 Checking for Existing Instance:

if cls not in cls._instances:

Here, we're checking if the class ( cls ) already has an instance stored in the _instances
dictionary. If it doesn't, it means this is the first time we're trying to create an instance of this class.

📌 Creating a New Instance:

instance = super().__call__(*args, **kwargs)

If the class doesn't have an existing instance, we create a new one. We use the super() function
to call the original __call__ method of the base type class, which will create and return a new
instance of the class.

📌 Storing the New Instance:

cls._instances[cls] = instance

After creating the new instance, we store it in the _instances dictionary using the class ( cls ) as
the key. This ensures that the next time we try to create an instance of this class, we'll find it in the
_instances dictionary and return the existing instance instead of creating a new one.

📌 Returning the Instance:

return cls._instances[cls]

Finally, we return the instance of the class from the _instances dictionary. If this is the first time
we're creating an instance, it'll be the new instance we just created. If an instance already exists,
it'll be the existing instance.

To summarize, the SingletonMeta metaclass uses the _instances dictionary to keep track of
instances of classes that use it as their metaclass. By overriding the __call__ method, it ensures
that only one instance of each class is created and stored. Any subsequent attempts to create an
instance of the class will return the existing instance from the _instances dictionary.
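Because `_instances` is keyed by `cls`, the singletons are tracked per class: every class that uses the metaclass gets exactly one instance of its own, rather than all classes sharing one global object. A small self-contained sketch (the `Logger` and `Cache` class names are made up for illustration):

```python
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            # Keyed by cls, so each class gets its own entry
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Logger(metaclass=SingletonMeta):
    pass

class Cache(metaclass=SingletonMeta):
    pass

print(Logger() is Logger())  # True  - same class, same instance
print(Cache() is Cache())    # True
print(Logger() is Cache())   # False - different classes, different singletons
```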

In the previous section, I said, "SingletonMeta inherits from the built-in type class, making it a metaclass" - WHY is that?
Let's delve into the nature of classes, objects, and metaclasses in Python.

The mechanism that produces these class objects is called a metaclass. The default metaclass in
Python is type .

A metaclass is a "class of a class" that defines how a class behaves. A metaclass allows you to
define properties or methods that are common to a group of classes.
type: This is the built-in metaclass in Python. It's responsible for taking the class body and turning it into a class object. When you use the class keyword, type is working behind the scenes to create the class. When you define a simple class like MyClass below, Python is using type to create the class object.

Here's a simple example:

class MyClass:
    pass

# This will print "<class 'type'>", because the type of MyClass is 'type'
print(type(MyClass))

You can also use type directly to create new class objects. The type function can be called with
three arguments: the name of the new class, a tuple containing the base classes (for inheritance),
and a dictionary containing attributes and methods for the class.

For example:

# Using type to create a new class
Foo = type('Foo', (), {'bar': True})

# This creates a new class named 'Foo' with a single attribute 'bar'
instance = Foo()
print(instance.bar)  # This will print "True"

📌 Since everything in Python is an object, classes themselves are also objects. They are instances
of a higher-order class called a metaclass . This might sound a bit recursive, but think of
metaclasses as "classes of classes."

📌 The built-in type is the most commonly used metaclass. It's responsible for taking a class
definition and turning it into a class object. When you define a simple class like MyClass above,
behind the scenes, Python is using type to create the class object.

For example, these two pieces of code are equivalent:

1. Using the class statement (the usual way):

class MyClass:
    pass

2. Using type directly:

MyClass = type('MyClass', (), {})

In the second example, we're using type directly to create a new class object. The first argument
is the name of the class, the second is a tuple containing base classes (for inheritance), and the
third is a dictionary containing class attributes.

📌 Custom Metaclasses:

If type is the default metaclass that creates class objects, then why would we ever need a custom
metaclass like SingletonMeta ? The answer is customization. By creating a custom metaclass, we
can customize class creation, modify class attributes, or, as in our Singleton example, control the
instantiation of the class.

When we say:

class SingletonMeta(type):
    pass

We're defining a new metaclass SingletonMeta that inherits from type . By inheriting from
type , SingletonMeta gets all the basic mechanisms to be a metaclass but can also introduce
custom behavior, like ensuring a single instance.

In essence, metaclasses allow us to tap into the class creation process, providing a layer of metaprogramming in Python. They're a powerful tool, but they also introduce complexity, so they should be used judiciously.
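As another hypothetical illustration of "tapping into class creation", here is a sketch of a metaclass that auto-registers every class created with it. The registry idea and class names are assumptions made for this example, not part of the Singleton code above:

```python
class RegistryMeta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        # Create the class object as usual...
        cls = super().__new__(mcls, name, bases, namespace)
        # ...then hook into creation to record it by name
        mcls.registry[name] = cls
        return cls

class PluginA(metaclass=RegistryMeta):
    pass

class PluginB(metaclass=RegistryMeta):
    pass

print(sorted(RegistryMeta.registry))            # ['PluginA', 'PluginB']
print(RegistryMeta.registry["PluginA"] is PluginA)  # True
```

Here the customization happens in `__new__` (class creation) rather than `__call__` (instance creation), which is why the registration runs once per class definition, not per instantiation.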

Example-2 of Singleton Pattern use-case: a configuration manager for a large application.
In large applications, you often need to fetch and use configuration settings from various sources
(e.g., environment variables, configuration files, command-line arguments). A Singleton-based
configuration manager can ensure that:

1. Configuration is loaded only once, even if the manager is accessed from various parts of the
application.

2. There's a single source of truth for configuration values.

Here's a hypothetical ConfigManager using the Singleton pattern:

import os
import json

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class ConfigManager(metaclass=SingletonMeta):
    def __init__(self, config_path):
        self.config_path = config_path
        self._config = {}
        self._load_config()

    def _load_config(self):
        # Load configuration from a file
        if os.path.exists(self.config_path):
            with open(self.config_path, 'r') as file:
                self._config = json.load(file)
        # Override with any environment variables
        for key, value in os.environ.items():
            self._config[key] = value

    def get(self, key, default=None):
        return self._config.get(key, default)

    def set(self, key, value):
        self._config[key] = value
        # For this example, we'll also update the config file whenever a value is set.
        # In a real-world scenario, you might handle this differently.
        self._save_config()

    def _save_config(self):
        with open(self.config_path, 'w') as file:
            json.dump(self._config, file)

# Usage
config1 = ConfigManager("app_config.json")
config2 = ConfigManager("another_config.json")

print(config1 is config2)  # True

# Set and get configuration values
config1.set("database_url", "postgres://localhost:5432/mydb")
print(config1.get("database_url"))  # postgres://localhost:5432/mydb
print(config2.get("database_url"))  # postgres://localhost:5432/mydb

📌 In this example, ConfigManager is responsible for managing application configuration. It loads configuration from a JSON file and can also override these settings with environment variables.

📌 The _load_config method reads the configuration from a file and then checks for
environment variables that might override these settings.

📌 The get and set methods allow you to retrieve and update configuration values. For
simplicity, every time a value is set, it's also written back to the configuration file.

📌 Even though we tried to instantiate ConfigManager with two different configuration paths,
both config1 and config2 refer to the same object due to the Singleton pattern.

This example demonstrates how a Singleton can be useful in managing global state, like
configuration, in a consistent and controlled manner across a large application.

Example-3 of Singleton Pattern use-case: Resource Pool Manager.
In many applications, especially those that deal with databases, network sockets, or threads,
resource pooling is a common strategy to manage and reuse expensive resources. For instance,
establishing a new database connection can be time-consuming. Instead of creating and
destroying a connection every time you need one, you can maintain a pool of connections. When a
part of the application needs a connection, it can borrow one from the pool and return it when
done.

A Singleton-based resource pool ensures that:

1. The entire application uses the same pool, preventing over-allocation of resources.

2. Resources are efficiently managed and reused.

Here's a hypothetical DatabaseConnectionPool using the Singleton pattern:

import queue
import sqlite3
from contextlib import contextmanager

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class DatabaseConnectionPool(metaclass=SingletonMeta):
    def __init__(self, db_name, max_size=5):
        self._db_name = db_name
        self._pool = queue.Queue(max_size)
        for _ in range(max_size):
            self._pool.put(sqlite3.connect(db_name))

    @contextmanager
    def get_connection(self):
        conn = self._pool.get()
        try:
            yield conn
        finally:
            self._pool.put(conn)

# Usage
pool1 = DatabaseConnectionPool("my_database.db")
pool2 = DatabaseConnectionPool("another_database.db")

print(pool1 is pool2)  # True

# Using a connection from the pool
with pool1.get_connection() as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users")
    results = cursor.fetchall()
    print(results)

📌 In this example, DatabaseConnectionPool manages a pool of SQLite database connections. The pool is initialized with a fixed number of connections to a specified database.

📌 The get_connection method is a context manager (thanks to the contextmanager decorator). This allows you to use the with statement to borrow a connection from the pool and automatically return it once you're done.

📌 Even though we tried to instantiate DatabaseConnectionPool with two different database
names, both pool1 and pool2 refer to the same object due to the Singleton pattern.

This example showcases how a Singleton can be instrumental in managing and reusing expensive
resources across an application, ensuring efficient utilization and consistent behavior.

Example-4 of Singleton Pattern use-case: a System-wide Event Manager.
In many applications, especially those with a modular architecture, different components might
need to communicate with each other without being tightly coupled. An event-driven architecture
can be a solution. Components can emit (publish) events, and other components can listen
(subscribe) to these events and react accordingly.

A Singleton-based event manager ensures that:

1. The entire application uses the same event manager, facilitating communication between all
modules.

2. Events are dispatched to all interested listeners without any module knowing about the
others.

Here's a hypothetical EventManager using the Singleton pattern:

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class EventManager(metaclass=SingletonMeta):
    def __init__(self):
        self._listeners = {}

    def subscribe(self, event_name, callback):
        if event_name not in self._listeners:
            self._listeners[event_name] = []
        self._listeners[event_name].append(callback)

    def emit(self, event_name, *args, **kwargs):
        if event_name in self._listeners:
            for callback in self._listeners[event_name]:
                callback(*args, **kwargs)

# Usage
event_mgr1 = EventManager()
event_mgr2 = EventManager()

print(event_mgr1 is event_mgr2)  # True

# Define some listeners
def on_user_created(user):
    print(f"User {user['name']} was created with ID {user['id']}!")

def notify_admin(user):
    print(f"Admin notified about the creation of user {user['name']}.")

# Subscribe listeners to an event
event_mgr1.subscribe("user_created", on_user_created)
event_mgr1.subscribe("user_created", notify_admin)

# Emitting the event from another part of the application
new_user = {"id": 1, "name": "Alice"}
event_mgr2.emit("user_created", new_user)

📌 In this example, EventManager manages a list of listeners (callbacks) for various events.

📌 The subscribe method allows different parts of the application to express interest in specific
events by registering callback functions.

📌 The emit method allows any part of the application to broadcast (emit) an event. When this
happens, all registered listeners for that event get called.

📌 Even though we tried to instantiate EventManager twice, both event_mgr1 and event_mgr2
refer to the same object due to the Singleton pattern.

This example illustrates how a Singleton can be pivotal in creating a decoupled, event-driven
architecture, allowing different parts of an application to interact seamlessly without direct
dependencies.

Example-5 of Singleton Pattern use-case: a centralized logging system
Logging is one of the most common use cases for the Singleton pattern. Let's consider a scenario
where you want a centralized logging system that:

1. Writes logs to both the console and a file.

2. Can be accessed from any part of your application to log messages.

3. Maintains a consistent log format and behavior throughout the application.

Here's a hypothetical CentralizedLogger using the Singleton pattern:

import logging

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class CentralizedLogger(metaclass=SingletonMeta):
    def __init__(self, log_file="app.log"):
        self._logger = logging.getLogger("CentralizedLogger")
        self._logger.setLevel(logging.DEBUG)  # Log all levels

        # Create console handler and set level to debug
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)

        # Create file handler and set level to debug
        fh = logging.FileHandler(log_file)
        fh.setLevel(logging.DEBUG)

        # Create formatter
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

        # Add formatter to ch and fh
        ch.setFormatter(formatter)
        fh.setFormatter(formatter)

        # Add ch and fh to logger
        self._logger.addHandler(ch)
        self._logger.addHandler(fh)

    def log(self, message, level=logging.INFO):
        if level == logging.DEBUG:
            self._logger.debug(message)
        elif level == logging.INFO:
            self._logger.info(message)
        elif level == logging.WARNING:
            self._logger.warning(message)
        elif level == logging.ERROR:
            self._logger.error(message)
        elif level == logging.CRITICAL:
            self._logger.critical(message)

# Usage
logger1 = CentralizedLogger()
logger2 = CentralizedLogger()

print(logger1 is logger2)  # True

logger1.log("This is an info message.")
logger2.log("This is a warning message.", level=logging.WARNING)

📌 In this example, CentralizedLogger sets up a logger with two handlers: one for the console
and one for a file. The log format, log file, and other configurations are centralized in this class.

📌 The log method provides a simplified interface to log messages at different levels. It internally
uses the appropriate logging method based on the provided log level.

📌 Even though we tried to instantiate CentralizedLogger twice, both logger1 and logger2
refer to the same object due to the Singleton pattern.

This example demonstrates how a Singleton can be instrumental in maintaining a consistent logging mechanism across an application, ensuring that logs from different parts of the application are handled uniformly.
The Singleton pattern, when applied to logging as in the provided example, addresses several
challenges and offers benefits:

📌 Consistent Configuration Across the Application: In larger applications, different modules or components might attempt to configure logging separately. This can lead to inconsistent log formats, levels, or destinations. By using a Singleton logger, you ensure that the entire application has a single, consistent logging configuration.

📌 Avoiding Multiple Handlers: Without a Singleton logger, different parts of an application might inadvertently add multiple handlers to the logger. This could result in duplicate log messages. For instance, if two parts of the application both add a console handler, you might see each log message printed twice. The Singleton pattern ensures that the logger setup, including handler configuration, is done only once.

📌 Centralized Log Management: If different components of your application initialize their own
loggers, managing where logs are written, rotating log files, or changing log levels can become
cumbersome. A Singleton logger centralizes these concerns, making management easier.

📌 Performance: Establishing handlers, especially file handlers, can be a relatively slow operation. If different parts of an application repeatedly set up and tear down handlers, it can introduce unnecessary overhead. With a Singleton logger, this setup is done once, potentially improving performance.

📌 Stateful Logging Operations: In some advanced scenarios, you might want your logger to
maintain state, such as counting the number of error messages logged. A Singleton logger ensures
that this state is maintained consistently across the application.

📌 Ease of Modification: If you decide to change the logging behavior, format, or add additional
handlers (e.g., sending critical errors to an alerting system), you only need to make changes in one
place. This centralized approach simplifies maintenance and ensures changes are consistently
applied.

📌 Resource Management: Logging resources, especially file handlers, consume system resources. By ensuring there's only one instance of the logger, you can better manage these resources, ensuring files are written to and closed properly.

In summary, the Singleton pattern, when applied to logging, provides a solution to the challenges of consistency, resource management, performance, and maintainability. It ensures that the entire application uses a unified logging approach, leading to cleaner, more predictable, and easier-to-manage log output.

Example-6 of Singleton Pattern use-case: ThreadPoolManager
Thread pools are a great example of where the Singleton pattern can be beneficial. Thread pools
manage a set of threads that can be reused to execute tasks, which can significantly improve
performance for applications that perform many small tasks concurrently.

Here's a hypothetical ThreadPoolManager using the Singleton pattern:

import threading
import queue

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class ThreadPoolManager(metaclass=SingletonMeta):
    def __init__(self, num_threads):
        self.tasks = queue.Queue()
        for _ in range(num_threads):
            thread = threading.Thread(target=self._worker)
            thread.daemon = True  # to let main thread exit even if workers are blocking
            thread.start()

    def _worker(self):
        while True:
            func, args, kwargs = self.tasks.get()
            try:
                func(*args, **kwargs)
            except Exception as e:
                print(f"Thread error: {e}")
            finally:
                self.tasks.task_done()

    def submit(self, func, *args, **kwargs):
        self.tasks.put((func, args, kwargs))

# Usage
pool1 = ThreadPoolManager(5)
pool2 = ThreadPoolManager(10)

print(pool1 is pool2)  # True

# Define some tasks
def print_numbers(start, end):
    for i in range(start, end):
        print(i)

def greet(name):
    print(f"Hello, {name}!")

# Submit tasks to the thread pool
pool1.submit(print_numbers, 1, 5)
pool2.submit(greet, "Alice")

📌 In this example, ThreadPoolManager manages a pool of worker threads. When initialized, it starts a specified number of threads, each running the _worker method.

📌 The _worker method continuously fetches tasks from the tasks queue and executes them. If
there's an exception while executing a task, it's caught and printed.

📌 The submit method allows you to submit new tasks to the thread pool. These tasks are added
to the tasks queue and will be picked up by one of the worker threads.

📌 Even though we tried to instantiate ThreadPoolManager with different numbers of threads, both pool1 and pool2 refer to the same object due to the Singleton pattern.

This example demonstrates how a Singleton can be instrumental in managing a shared resource,
like a thread pool, across an application. It ensures that threads are efficiently utilized, and tasks
are executed concurrently without the overhead of constantly creating and destroying threads.

Example-7 of Singleton Pattern use-case: Caching


Caching is a powerful mechanism to enhance the performance of applications by storing and
reusing previously fetched or computed results. A Singleton-based cache ensures that:

1. Cached data is consistent and shared across the entire application.

2. Memory or resources are not wasted by having multiple caches.

3. Cache policies (like eviction strategies) are uniformly applied.

Let's consider a scenario where you want a centralized cache system that:

Caches results of function calls.

Uses a simple eviction strategy to ensure the cache doesn't grow indefinitely.

Here's a hypothetical FunctionCache using the Singleton pattern:

from functools import wraps

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class FunctionCache(metaclass=SingletonMeta):
    def __init__(self, max_size=100):
        self.cache = {}
        self.max_size = max_size

    def _evict(self):
        # Simple eviction strategy: remove a random item
        # In a real-world scenario, you'd likely use a more sophisticated strategy
        self.cache.pop(next(iter(self.cache)))

    def memoize(self, func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (func, args, frozenset(kwargs.items()))
            if key in self.cache:
                return self.cache[key]
            result = func(*args, **kwargs)
            self.cache[key] = result
            if len(self.cache) > self.max_size:
                self._evict()
            return result
        return wrapper

# Usage
cache1 = FunctionCache(max_size=50)
cache2 = FunctionCache(max_size=200)

print(cache1 is cache2)  # True

# Define some functions and apply caching
@cache1.memoize
def expensive_operation(x, y):
    return x * y  # Imagine this operation is computationally expensive

@cache2.memoize
def another_expensive_operation(x):
    return x ** 2  # Another expensive operation

print(expensive_operation(5, 6))  # Computed and cached
print(expensive_operation(5, 6))  # Retrieved from cache
print(another_expensive_operation(10))  # Computed and cached

📌 In this example, FunctionCache provides a memoize decorator that caches the results of
function calls. The cache key is derived from the function, its arguments, and keyword arguments.

📌 The _evict method provides a simple eviction strategy. When the cache exceeds its
max_size , it removes a random item. In a real-world scenario, you might implement a Least
Recently Used (LRU) or another eviction strategy.
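For reference, an LRU eviction can be sketched with collections.OrderedDict. The LRUCache class below is a standalone illustration (the names and sizes are made up for this example), not a drop-in replacement for the _evict method above:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, max_size=3):
        self._data = OrderedDict()  # insertion order doubles as recency order
        self._max_size = max_size

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]
        return default

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self._max_size:
            self._data.popitem(last=False)  # evict the least recently used item

cache = LRUCache(max_size=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # "a" is now most recently used
cache.put("c", 3)         # evicts "b", the least recently used
print(list(cache._data))  # ['a', 'c']
```

For simple function memoization, the standard library's functools.lru_cache decorator already implements this policy.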

📌 Even though we tried to instantiate FunctionCache with different max sizes, both cache1 and
cache2 refer to the same object due to the Singleton pattern.

This example showcases how a Singleton can be instrumental in providing a centralized caching
mechanism, ensuring consistent cache behavior and efficient resource utilization across an
application.

Example-8 of Singleton Pattern use-case: Driver Objects


Driver objects, especially in the context of web automation or database connections, are often
expensive to initialize and manage. Let's consider a scenario involving web automation using
Selenium.

Imagine you're building an application that automates various tasks on a website. Initializing a
new browser driver for every task can be time-consuming and resource-intensive. Instead, you can
use a Singleton pattern to ensure that the entire application uses a single instance of the browser
driver.

Here's a hypothetical BrowserDriver using the Singleton pattern:

from selenium import webdriver

class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            instance = super().__call__(*args, **kwargs)
            cls._instances[cls] = instance
        return cls._instances[cls]

class BrowserDriver(metaclass=SingletonMeta):
    def __init__(self):
        self.driver = webdriver.Chrome()  # Initialize the Chrome browser driver

    def get_page(self, url):
        self.driver.get(url)
        return self.driver.page_source

    def close(self):
        self.driver.quit()

# Usage
driver1 = BrowserDriver()
driver2 = BrowserDriver()

print(driver1 is driver2)  # True

# Use the driver to navigate and fetch page content
page_content = driver1.get_page("https://www.example.com")
print(len(page_content))

# Close the browser when done
driver2.close()

📌 In this example, BrowserDriver wraps around the Selenium's Chrome browser driver. It
provides a method get_page to navigate to a URL and fetch the page's content.

📌 Even though we tried to instantiate BrowserDriver twice, both driver1 and driver2 refer
to the same object due to the Singleton pattern.

📌 The close method ensures that the browser is closed properly when the application is done
using it.

This example demonstrates how a Singleton can be instrumental in managing a shared resource,
like a browser driver, across an application. It ensures efficient utilization of resources and
consistent behavior, as all parts of the application interact with the same browser instance.
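The same SingletonMeta works for any expensive resource. Below is a minimal, self-contained sketch that swaps in a hypothetical FakeDriver stand-in (not part of the example above) so it runs without Selenium or a real browser installed, making the shared-instance behavior easy to verify:

```python
# A minimal, runnable sketch of the same SingletonMeta, using a
# hypothetical FakeDriver stand-in so it works without Selenium
# or a real browser installed.
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Create the instance only on the first call; reuse it afterwards.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class FakeDriver(metaclass=SingletonMeta):
    def __init__(self):
        self.visited = []  # records URLs instead of driving a browser

    def get_page(self, url):
        self.visited.append(url)
        return f"<html>content of {url}</html>"

d1 = FakeDriver()
d2 = FakeDriver()
print(d1 is d2)    # True
d1.get_page("https://www.example.com")
print(d2.visited)  # ['https://www.example.com'] - shared state
```

Note that on the second instantiation, __init__ is never re-run, because the metaclass's __call__ returns the cached instance before construction happens, so the shared state is preserved.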

🐍🚀 State Design Pattern in Python 🐍🚀

📌 The State Design Pattern is a behavioral design pattern that allows an object to change its
behavior when its internal state changes. This pattern involves encapsulating varying behavior for
the same routine in different state classes. The primary objective is to make a system more
maintainable and organized by separating concerns.

You can use the State pattern to implement state-specific behavior, letting objects change their
functionality at runtime, and to avoid conditional statements when changing an object's behavior
based on its state. In the State pattern, you encapsulate the different states in separate State
classes. The original class keeps a reference to a state object based on its current state rather
than using conditional statements to implement state-dependent functionality.

1) Context - the original class of our application. It maintains a reference to one of the
concrete states, on which its behavior depends. It also has a method to modify the internal state.

2) State interface - all supported states share the same state interface, and the Context can
communicate with state objects only through this interface.

3) Concrete states - For each state, these objects implement the 'State' interface. These are the
main objects which contain the state-specific methods.

📌 Use Cases: - Text editors can have states like "Insert Mode" or "Command Mode", each with
different behavior for the same keypress. - A TCP connection can have states like "Established",
"Listen", "Closed", and actions on the connection (like send or close ) behave differently
depending on the state.

📌 The State Design Pattern can be seen as a strategy pattern but for situations where the
strategy can change dynamically during the lifetime of the object.

📌 Let's consider a real-world scenario: an online video player.
The player can be in various states such as Playing , Paused , Buffering , and Stopped .
Depending on its state, the player's behavior for actions like play , pause , stop , and buffer will
differ.

1. Code Without State Design Pattern:


Later, we will see how exactly the code with the State Design Pattern solves the issues in this version.

class VideoPlayer:
    def __init__(self):
        self.state = "Stopped"

    def play(self):
        if self.state == "Stopped":
            print("Starting video from the beginning.")
            self.state = "Playing"
        elif self.state == "Playing":
            print("Video is already playing.")
        elif self.state == "Paused":
            print("Resuming video.")
            self.state = "Playing"
        elif self.state == "Buffering":
            print("Wait until buffering completes.")

    def pause(self):
        if self.state == "Playing":
            print("Pausing video.")
            self.state = "Paused"
        elif self.state == "Paused":
            print("Video is already paused.")
        elif self.state == "Stopped":
            print("Video is stopped. Play it first.")
        elif self.state == "Buffering":
            print("Wait until buffering completes.")

    def stop(self):
        if self.state in ["Playing", "Paused", "Buffering"]:
            print("Stopping video.")
            self.state = "Stopped"
        else:
            print("Video is already stopped.")

    def buffer(self):
        if self.state == "Playing":
            print("Buffering video. Please wait.")
            self.state = "Buffering"
        else:
            print("Buffering only happens during playback.")

📌 Issues with the above code:


The VideoPlayer class is cluttered with conditional statements, making it hard to read and
maintain.

Adding a new state or modifying an existing state's behavior requires changes to multiple
methods, violating the Open/Closed Principle.

The code isn't modular, and the behaviors for each state are intertwined, making it prone to
errors.

2. Code With State Design Pattern:

class PlayerState:
    def play(self):
        pass

    def pause(self):
        pass

    def stop(self):
        pass

    def buffer(self):
        pass

class PlayingState(PlayerState):
    def play(self):
        print("Video is already playing.")

    def pause(self):
        print("Pausing video.")

    def buffer(self):
        print("Buffering video. Please wait.")

class PausedState(PlayerState):
    def play(self):
        print("Resuming video.")

    def pause(self):
        print("Video is already paused.")

class StoppedState(PlayerState):
    def play(self):
        print("Starting video from the beginning.")

    def stop(self):
        print("Video is already stopped.")

class BufferingState(PlayerState):
    def play(self):
        print("Wait until buffering completes.")

    def pause(self):
        print("Wait until buffering completes.")

class VideoPlayer:
    def __init__(self):
        self.state = StoppedState()

    def set_state(self, state):
        self.state = state

    def play(self):
        self.state.play()

    def pause(self):
        self.state.pause()

    def stop(self):
        self.state.stop()

    def buffer(self):
        self.state.buffer()
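A condensed, runnable walk-through of the pattern above - a trimmed-down sketch of the same classes, returning strings instead of printing so the results are easy to check:

```python
# A trimmed-down sketch of the classes above. The player delegates every
# call to whichever state object it currently holds, so behaviour changes
# whenever set_state() swaps that object.
class PlayerState:
    def play(self):
        pass

    def pause(self):
        pass

class StoppedState(PlayerState):
    def play(self):
        return "Starting video from the beginning."

class PlayingState(PlayerState):
    def play(self):
        return "Video is already playing."

    def pause(self):
        return "Pausing video."

class VideoPlayer:
    def __init__(self):
        self.state = StoppedState()

    def set_state(self, state):
        self.state = state

    def play(self):
        return self.state.play()

    def pause(self):
        return self.state.pause()

player = VideoPlayer()
print(player.play())              # Starting video from the beginning.
player.set_state(PlayingState())
print(player.play())              # Video is already playing.
print(player.pause())             # Pausing video.
```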

📌 Advantages of the State Design Pattern:


The code is more organized, with each state's behavior encapsulated in its respective class.

The VideoPlayer class is cleaner and delegates state-specific behaviors to the state objects.

Adding a new state or modifying an existing one is easier and doesn't require changes to the
VideoPlayer class, adhering to the Open/Closed Principle.

The design is modular, making it less error-prone and more maintainable.

In conclusion, the State Design Pattern offers a structured approach to handle objects that have
different behaviors based on their internal states. By encapsulating each state's behavior in
separate classes and delegating state-specific behaviors to these classes, the design becomes
more modular, maintainable, and adheres to good software design principles.

Let's see how exactly the code with the State Design Pattern solved the above issues
📌 Issue 1: Cluttered Code with Conditional Statements: In the code without the State Design
Pattern, the VideoPlayer class was filled with conditional statements to check the current state
before deciding on the behavior. This made the code hard to read and maintain.

Solution with State Design Pattern: The State Design Pattern encapsulates each state's behavior
in its own class. This means that the behavior for each state is defined within that state's class,
eliminating the need for conditional checks in the main VideoPlayer class. The VideoPlayer
simply delegates the action to the current state object, which inherently knows its behavior. This
results in a cleaner and more organized code structure.

📌 Issue 2: Violation of the Open/Closed Principle: In the original code, introducing a new state
or modifying an existing state's behavior required changes to multiple methods within the
VideoPlayer class.

Solution with State Design Pattern: With the State Design Pattern, each state's behavior is
defined within its own class. If a new state needs to be introduced, a new class for that state is
created without modifying the existing classes. Similarly, if the behavior of an existing state needs
to be changed, only the specific state class needs to be modified. The VideoPlayer class remains
untouched, adhering to the Open/Closed Principle.

📌 Issue 3: Lack of Modularity: The original code intertwined the behaviors of all states, making
it prone to errors and harder to debug or extend.

Solution with State Design Pattern: The State Design Pattern promotes modularity by
separating the behavior of each state into its own class. This separation ensures that the logic for
each state is isolated from the others, reducing the risk of errors when modifying one state's
behavior. It also makes the system more maintainable, as developers can focus on individual state
classes without affecting the others.

📌 Under-the-hood Theory: The State Design Pattern leverages the power of polymorphism. By
having each state class implement a common interface (or inherit from a common base class), the
main context class ( VideoPlayer in our case) can interact with any state object interchangeably.
This dynamic dispatch capability, where the method that gets executed is determined at runtime
based on the object's class, is a cornerstone of object-oriented programming and is efficiently
handled by the Python interpreter.

In summary, the State Design Pattern addresses the issues of the original code by providing a
structured and modular approach. It encapsulates each state's behavior in separate classes,
promotes the Open/Closed Principle, and leverages polymorphism to delegate state-specific
behaviors, resulting in a more maintainable and less error-prone design.

📌 Example - 1 : More Use-case Code:


Let's consider a simple music player. The player can be in one of three states: Playing , Paused ,
or Stopped . Each state has different behaviors for the play , pause , and stop commands.

class State:
    def play(self):
        pass

    def pause(self):
        pass

    def stop(self):
        pass

class PlayingState(State):
    def play(self):
        print("Already playing. No action taken.")

    def pause(self):
        print("Pausing music.")

    def stop(self):
        print("Stopping music.")

class PausedState(State):
    def play(self):
        print("Resuming music.")

    def pause(self):
        print("Already paused. No action taken.")

    def stop(self):
        print("Stopping music.")

class StoppedState(State):
    def play(self):
        print("Starting music from the beginning.")

    def pause(self):
        print("Can't pause. Music is already stopped.")

    def stop(self):
        print("Already stopped. No action taken.")

class MusicPlayer:
    def __init__(self):
        self.state = StoppedState()

    def play(self):
        self.state.play()

    def pause(self):
        self.state.pause()

    def stop(self):
        self.state.stop()

    def set_state(self, state):
        self.state = state

📌 Description of the code:


We have an abstract State class that defines the methods play , pause , and stop .

Three concrete state classes ( PlayingState , PausedState , and StoppedState ) inherit from
the State class and provide their own implementations for the methods.

The MusicPlayer class has an attribute state that holds its current state. It delegates the
play , pause , and stop commands to the current state object.

The set_state method allows the MusicPlayer to change its current state.

📌 The beauty of this design is that if we want to add a new state or change the behavior of an
existing state, we can do so without modifying the MusicPlayer class. This adheres to the
Open/Closed Principle, which states that software entities should be open for extension but closed
for modification.

📌 Under-the-hood: When you call a method on the MusicPlayer object, it delegates the call to
the corresponding method of the current state object. This is a form of runtime polymorphism.
The actual method that gets executed depends on the type (class) of the current state object. This
dynamic dispatch is achieved through Python's dynamic typing and method overriding
capabilities.

Let's see how the above code example adheres to the principles and requirements of the State
Design Pattern in Python
📌 Encapsulation of States: The State Design Pattern requires that each state be encapsulated in
its own class. In the provided code, this is evident with the three distinct state classes:
PlayingState , PausedState , and StoppedState . Each of these classes encapsulates the
behavior specific to that state.

📌 Context Class: The pattern requires a context class that maintains an instance of a state
subclass to define its current state. In our example, the MusicPlayer class serves as the context.
It has an attribute state that holds its current state, which is an instance of one of the state
subclasses.

📌 Delegation: The State Design Pattern dictates that the context class should delegate state-
specific requests to the current state object. In the MusicPlayer class, methods like play() ,
pause() , and stop() don't directly implement the behavior. Instead, they delegate these calls to
the corresponding methods of the current state object ( self.state.play() ,
self.state.pause() , and self.state.stop() ).

📌 State Transitions: While the provided code doesn't automatically transition between states, it
provides a mechanism to do so with the set_state method in the MusicPlayer class. This
method allows the context ( MusicPlayer ) to change its current state, which is a fundamental
aspect of the State Design Pattern.

📌 Flexibility and Open/Closed Principle: The State Design Pattern promotes flexibility. If we
need to introduce a new state or modify an existing one, we can do so without altering the context
class ( MusicPlayer ). This is in line with the Open/Closed Principle, which suggests that classes
should be open for extension but closed for modification. In our code, adding a new state would
involve creating a new state class without needing to modify the MusicPlayer class.
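As a sketch of that point, here is a hypothetical FastForwardState (not from the book's example) added to a trimmed-down version of the player - only the new class is written, and the MusicPlayer context stays exactly as it was:

```python
# Sketch: adding a hypothetical FastForwardState. Only the new state class
# is written - the MusicPlayer context below is untouched, which is the
# Open/Closed Principle in action.
class State:
    def play(self):
        pass

    def stop(self):
        pass

class StoppedState(State):
    def play(self):
        return "Starting music from the beginning."

class FastForwardState(State):  # the new state; nothing else changes
    def play(self):
        return "Resuming at normal speed."

    def stop(self):
        return "Stopping fast-forward."

class MusicPlayer:
    def __init__(self):
        self.state = StoppedState()

    def set_state(self, state):
        self.state = state

    def play(self):
        return self.state.play()

    def stop(self):
        return self.state.stop()

player = MusicPlayer()
player.set_state(FastForwardState())
print(player.play())  # Resuming at normal speed.
```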

📌 Consistency in Interface: All state classes in the State Design Pattern should have a
consistent interface so that they can be interchangeably used by the context class. In our example,
all state classes ( PlayingState , PausedState , StoppedState ) inherit from the abstract State
class, ensuring they all have the methods play() , pause() , and stop() . This consistency allows
the MusicPlayer class to delegate calls without worrying about the method's availability in the
current state object.

In conclusion, the provided code example adheres to the principles and requirements of the State
Design Pattern by encapsulating each state's behavior in separate classes, maintaining a context
class that holds the current state, delegating state-specific behaviors to the current state object,
providing a mechanism for state transitions, ensuring flexibility, and maintaining a consistent
interface across all state classes.

Example - 2 : State Design Pattern in Python


Let's consider a more complex example: a document editor that supports different user
roles and their respective permissions. Each role (state) will have different behaviors for actions
like edit , view , and delete .

📌 State Classes (User Roles):

class UserRole:
    def view(self):
        pass

    def edit(self):
        pass

    def delete(self):
        pass

class AdminRole(UserRole):
    def view(self):
        print("Admin: Viewing document.")

    def edit(self):
        print("Admin: Editing document.")

    def delete(self):
        print("Admin: Deleting document.")

class EditorRole(UserRole):
    def view(self):
        print("Editor: Viewing document.")

    def edit(self):
        print("Editor: Editing document.")

    def delete(self):
        print("Editor: Sorry, you cannot delete the document.")

class ViewerRole(UserRole):
    def view(self):
        print("Viewer: Viewing document.")

    def edit(self):
        print("Viewer: Sorry, you cannot edit the document.")

    def delete(self):
        print("Viewer: Sorry, you cannot delete the document.")

📌 Context Class (Document Editor):

class DocumentEditor:
    def __init__(self):
        self.role = ViewerRole()  # default role

    def set_role(self, role):
        self.role = role

    def view_document(self):
        self.role.view()

    def edit_document(self):
        self.role.edit()

    def delete_document(self):
        self.role.delete()

📌 Usage:

editor = DocumentEditor()

editor.view_document()    # Viewer: Viewing document.
editor.edit_document()    # Viewer: Sorry, you cannot edit the document.

editor.set_role(EditorRole())
editor.edit_document()    # Editor: Editing document.
editor.delete_document()  # Editor: Sorry, you cannot delete the document.

editor.set_role(AdminRole())
editor.delete_document()  # Admin: Deleting document.

📌 Description:
We have an abstract UserRole class that defines methods view , edit , and delete .

Three concrete role classes ( AdminRole , EditorRole , and ViewerRole ) inherit from
UserRole and provide their own implementations for the methods based on the
permissions associated with each role.

The DocumentEditor class (context) has an attribute role that holds its current user role. It
delegates the view_document , edit_document , and delete_document commands to the
current role object.

The set_role method allows the DocumentEditor to change its current role.

📌 Complexity:

This example is more complex than the previous one because it models a real-world scenario
where different user roles have varying permissions in a document editor. The State Design
Pattern elegantly handles the varying behaviors without cluttering the main DocumentEditor
class with conditional statements. Instead, each role's behavior is encapsulated in its respective
class, making the system more modular and maintainable.

📌 Encapsulation of States: In the provided code, each user role (state) is encapsulated within its
own class: AdminRole , EditorRole , and ViewerRole . This encapsulation ensures that the
behavior specific to each role is contained within its respective class, adhering to the principle of
the State Design Pattern that requires each state to be represented as a separate entity.

📌 Context Class: The DocumentEditor class serves as the context in this example. It maintains
an instance of a user role (state subclass) in its role attribute, which defines its current state. The
State Design Pattern mandates the presence of such a context class that holds a reference to one
of the state objects to define its current state.

📌 Delegation: The State Design Pattern emphasizes that the context class should delegate state-
specific requests to the associated state object. This principle is evident in the DocumentEditor
class. When methods like view_document() , edit_document() , and delete_document() are
invoked, the DocumentEditor doesn't directly execute the behavior. Instead, it delegates these
requests to the corresponding methods ( view() , edit() , and delete() ) of the current role
object ( self.role ).

📌 State Transitions: The set_role method in the DocumentEditor class facilitates transitions
between states. By invoking this method, the context ( DocumentEditor ) can change its current
role (state). This mechanism is central to the State Design Pattern, allowing objects to dynamically
alter their behavior by transitioning between states.

📌 Flexibility and Open/Closed Principle: The design of the provided code promotes
adaptability. If a new user role needs to be introduced or if the behavior of an existing role needs
modification, it can be achieved without altering the DocumentEditor class. This approach aligns
with the Open/Closed Principle, suggesting that entities should be open for extension but closed
for modification. For instance, if a new ContributorRole needs to be added, one would simply
create a new state class without modifying the existing DocumentEditor class.

📌 Consistency in Interface: For the State Design Pattern to function seamlessly, all state classes
should present a consistent interface. This ensures that they can be interchangeably used by the
context class. In the provided code, all role classes ( AdminRole , EditorRole , ViewerRole ) inherit
from the abstract UserRole class. This inheritance guarantees that they all possess the methods
view() , edit() , and delete() . Such consistency allows the DocumentEditor to delegate
method calls without concerns about the method's presence in the current role object.

In summary, the given code example adheres to the principles and requirements of the State
Design Pattern by encapsulating the behavior of each state within separate classes, maintaining a
context class that holds the current state, delegating state-specific behaviors to the state object,
offering a mechanism for state transitions, ensuring system flexibility, and preserving a consistent
interface across all state classes.

🐍🚀 Strategy Pattern in Python 🐍🚀

Let's dive deep into the Strategy Pattern in Python.

📌 Strategy Pattern: The Strategy Pattern is a behavioral design pattern that defines a family of
algorithms, encapsulates each one, and makes them interchangeable. It lets the algorithm vary
independently from the clients that use it. In simpler terms, it allows you to switch between
different methods or strategies at runtime without altering the code that uses these methods.

📌 Why use the Strategy Pattern?: 1. It promotes the Open/Closed Principle, which states that
classes should be open for extension but closed for modification. This means you can introduce
new strategies without changing the existing code. 2. It helps to avoid large conditional statements
or switch cases when deciding which algorithm to use. 3. It provides a clear separation between
the classes that use a strategy and the strategies themselves.

📌 Use Cases: 1. Sorting algorithms: Depending on the type and size of data, you might want to
switch between quicksort, mergesort, or bubblesort. 2. Payment methods: In an e-commerce
application, you might have multiple payment methods like credit card, PayPal, or bank transfer. 3.
Compression algorithms: Depending on the requirements, you might want to switch between
different compression methods like ZIP, RAR, or TAR.
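The sorting use-case above can be sketched in a few lines. This is an illustrative sketch, not one of the book's main examples - in Python, where functions are first-class objects, the "algorithm" can be swapped at runtime simply by passing a different callable:

```python
# Illustrative sketch: each sorting strategy is a callable with the same
# signature, and the caller picks one at runtime.
def ascending(data):
    return sorted(data)

def descending(data):
    return sorted(data, reverse=True)

def run_sort(data, strategy):
    # The caller decides which algorithm runs; run_sort never changes.
    return strategy(data)

print(run_sort([3, 1, 2], ascending))   # [1, 2, 3]
print(run_sort([3, 1, 2], descending))  # [3, 2, 1]
```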

Let's see an example WITHOUT and then WITH the "Strategy Pattern in Python"

Initial Code Without Strategy Pattern

Consider an e-commerce application that calculates shipping costs based on different shipping
methods. Here's a naive implementation:

class Order:
    def __init__(self, total, shipping_method):
        self.total = total
        self.shipping_method = shipping_method

    def calculate_shipping_cost(self):
        if self.shipping_method == "standard":
            return self.total * 0.05
        elif self.shipping_method == "express":
            return self.total * 0.10
        elif self.shipping_method == "overnight":
            return self.total * 0.20
        else:
            raise ValueError("Invalid shipping method")

order = Order(100, "express")
print(order.calculate_shipping_cost())  # Outputs: 10.0

📌 The above code has a few issues:


📌 Tight Coupling: The Order class is tightly coupled with the shipping calculation logic. If we
want to add a new shipping method, we have to modify the Order class.

📌 Violation of Open/Closed Principle: The code is not open for extension (adding new shipping
methods) without modifying the existing code.

📌 Lack of Reusability: The shipping calculation logic is embedded within the Order class and
cannot be reused elsewhere.

Refactored Code With Strategy Pattern

To implement the Strategy Pattern, we'll define a family of algorithms (shipping methods) and
encapsulate each one. Then, we'll make them interchangeable within the Order class.

from abc import ABC, abstractmethod

# Define the ShippingStrategy interface
class ShippingStrategy(ABC):
    @abstractmethod
    def calculate(self, total):
        pass

# Implement concrete strategies
class StandardShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.05

class ExpressShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.10

class OvernightShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.20

# Refactor the Order class to use the Strategy Pattern
class Order:
    def __init__(self, total, shipping_strategy: ShippingStrategy):
        self.total = total
        self.shipping_strategy = shipping_strategy

    def calculate_shipping_cost(self):
        return self.shipping_strategy.calculate(self.total)

# Client code
order1 = Order(100, ExpressShipping())
print(order1.calculate_shipping_cost())  # Outputs: 10.0

order2 = Order(100, StandardShipping())
print(order2.calculate_shipping_cost())  # Outputs: 5.0

📌 Benefits of the Refactored Code:


📌 Separation of Concerns: The Order class is now only concerned with order-related logic.
Shipping calculation logic is separated into individual strategy classes.

📌 Easily Extensible: To add a new shipping method, simply create a new class that implements
the ShippingStrategy interface. No need to modify the existing Order class.

📌 Increased Reusability: The shipping calculation logic can now be reused in other parts of the
application if needed.

📌 Flexibility: The client can now easily switch between different shipping methods at runtime
without altering the code that uses these methods.

I hope this provides a clear understanding of the Strategy Pattern in Python and its benefits.

Let's delve deeper into how the refactored code using the
Strategy Pattern addresses the issues present in the original
code.
📌 Issue: Tight Coupling
In the original code, the Order class was responsible for both managing order details and
calculating shipping costs based on the shipping method. This means that the Order class was
tightly coupled with the shipping calculation logic.

Solution with Strategy Pattern: The refactored code decouples the Order class from the
shipping calculation logic by introducing a family of algorithms (shipping strategies) encapsulated
within their own classes ( StandardShipping , ExpressShipping , OvernightShipping ). The
Order class now only needs to interact with the ShippingStrategy interface, making it loosely
coupled. This separation ensures that changes to one part (e.g., adding a new shipping method)
don't necessitate changes to the other parts.

📌 Issue: Violation of Open/Closed Principle


The original code was not open for extension without modification. Every time a new shipping
method was introduced, the Order class needed to be modified, violating the Open/Closed
Principle.

Solution with Strategy Pattern: The Strategy Pattern promotes the Open/Closed Principle. The
refactored code is open for extension (adding new shipping methods) without needing to modify
existing classes. If a new shipping method needs to be added, one can simply create a new class
implementing the ShippingStrategy interface. The existing Order class remains untouched,
ensuring that existing functionality is not jeopardized by new extensions.

📌 Issue: Lack of Reusability


In the original code, the shipping calculation logic was embedded within the Order class, making
it hard to reuse this logic elsewhere in the application without duplicating code.

Solution with Strategy Pattern: By encapsulating the shipping calculation logic within separate
strategy classes, the refactored code promotes reusability. Each shipping strategy
( StandardShipping , ExpressShipping , OvernightShipping ) can now be reused in other parts
of the application if needed. For instance, if there's a need to provide a shipping cost estimator
tool elsewhere in the application, these strategy classes can be leveraged without duplicating the
calculation logic.
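A sketch of that estimator idea, reusing the same strategy classes in a standalone helper with no Order object involved ( estimate_all is a hypothetical function, not part of the book's code):

```python
# Sketch of the "shipping cost estimator" idea: the same strategy classes
# are reused by a standalone helper, with no Order object involved.
# (estimate_all is a hypothetical function, not part of the book's code.)
from abc import ABC, abstractmethod

class ShippingStrategy(ABC):
    @abstractmethod
    def calculate(self, total):
        pass

class StandardShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.05

class ExpressShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.10

def estimate_all(total, strategies):
    """Return a quote from every available strategy."""
    return {type(s).__name__: s.calculate(total) for s in strategies}

quotes = estimate_all(100, [StandardShipping(), ExpressShipping()])
print(quotes)  # e.g. {'StandardShipping': 5.0, 'ExpressShipping': 10.0}
```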

📌 Flexibility in Switching Strategies


While not a direct issue in the original code, the Strategy Pattern inherently provides the flexibility
to switch between strategies at runtime.

Solution with Strategy Pattern: In the refactored code, the client can instantiate an Order
object with any shipping strategy without altering the code that uses these methods. This dynamic
behavior is a hallmark of the Strategy Pattern, allowing for easy interchangeability of algorithms or
methods at runtime.

In summary, the Strategy Pattern in the refactored code effectively addresses the issues of tight
coupling, violation of the Open/Closed Principle, and lack of reusability present in the original
code. It also introduces the added benefit of flexibility in switching between strategies.

📌 Real-life Use-case Code:


Let's consider an e-commerce application where users can choose different payment methods.

from abc import ABC, abstractmethod

# Define the PaymentStrategy interface
class PaymentStrategy(ABC):

    @abstractmethod
    def pay(self, amount: float) -> str:
        pass

# Implement concrete strategies
class CreditCardPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using Credit Card."

class PayPalPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using PayPal."

class BankTransferPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using Bank Transfer."

# Context class that uses a strategy
class ShoppingCart:
    def __init__(self, payment_strategy: PaymentStrategy):
        self._payment_strategy = payment_strategy

    def checkout(self, amount: float) -> None:
        print(self._payment_strategy.pay(amount))

# Client code
cart = ShoppingCart(CreditCardPayment())
cart.checkout(100.0)

cart = ShoppingCart(PayPalPayment())
cart.checkout(100.0)

📌 Explanation of the Code: 1. We start by defining an interface PaymentStrategy with an abstract method pay . This interface will be implemented by all concrete payment strategies. 2.
We then define three concrete strategies: CreditCardPayment , PayPalPayment , and
BankTransferPayment . Each of these implements the pay method in its own way. 3. The
ShoppingCart class, which acts as the context, uses a payment strategy to process the payment.
It doesn't need to know the specifics of the payment method; it just calls the pay method. 4. In
the client code, we can easily switch between payment methods by changing the strategy passed
to the ShoppingCart .

📌 Under-the-hood: When you use the Strategy Pattern, you're essentially leveraging
polymorphism. The context class ( ShoppingCart in our example) doesn't know the specifics of the
concrete strategy it's using. It only knows about the strategy interface. This decoupling is what
allows us to switch strategies on-the-fly. The actual method that gets called is determined at
runtime based on the object's type, a concept known as dynamic dispatch.

📌 Benefits: 1. The Strategy Pattern provides a clear separation of concerns. Each strategy is in its
own class, making it easy to add, remove, or modify strategies without affecting other parts of the
code. 2. It promotes code reusability. The same strategy can be used in different parts of the
application or even in different applications. 3. It simplifies unit testing. Each strategy can be
tested independently of the context class.
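Point 3 can be demonstrated directly - a minimal sketch that exercises one strategy class in isolation, with no ShoppingCart constructed at all:

```python
# Sketch: unit-testing a strategy in isolation. Only the strategy class is
# needed - no ShoppingCart (context) is involved.
from abc import ABC, abstractmethod

class PaymentStrategy(ABC):
    @abstractmethod
    def pay(self, amount: float) -> str:
        pass

class PayPalPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using PayPal."

# The assertion exercises the strategy directly:
assert PayPalPayment().pay(50.0) == "Paying $50.0 using PayPal."
print("PayPalPayment tested in isolation")
```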

I hope this gives you a comprehensive understanding of the Strategy Pattern in Python!

Note on the @abstractmethod decorator - It indicates that a method is abstract and must be overridden by any non-abstract derived class - Let's see how
📌 Decorator: In Python, a decorator is a design pattern that allows you to add new functionality
to an existing object without modifying its structure. Decorators are very powerful and useful tools
in Python since they allow programmers to modify the behavior of functions or classes. In our
context, abstractmethod is a decorator provided by the abc module.

📌 abstractmethod: This specific decorator, when applied to a method within a class, designates
that method as being abstract. An abstract method is a method that is declared but does not have
an implementation within the class it's declared in.

📌 Must be overridden: If a class has an abstract method, it means that any subclass (or derived
class) that is intended to be instantiated (i.e., you want to create objects of that subclass) must
provide an implementation for this abstract method. If it doesn't, Python will raise a TypeError
when you try to create an instance of that subclass.

📌 Non-abstract derived class: A derived class (or subclass) that provides implementations for
all the abstract methods of its base class is termed as non-abstract. If a derived class does not
provide implementations for all the abstract methods, it remains abstract, and you can't create
instances of it.

Example for Clarity:

from abc import ABC, abstractmethod

class AbstractClass(ABC):

    @abstractmethod
    def abstract_method(self):
        pass

class DerivedClass(AbstractClass):
    # Notice we are not providing an implementation for abstract_method
    pass

class AnotherDerivedClass(AbstractClass):
    # Here, we provide an implementation for abstract_method
    def abstract_method(self):
        print("Implemented abstract_method in AnotherDerivedClass")

In the above code:

📌 AbstractClass is an abstract base class with an abstract method abstract_method .

📌 DerivedClass is a subclass of AbstractClass , but it doesn't provide an implementation for
abstract_method . Hence, DerivedClass is also abstract, and you can't create instances of it.

📌 AnotherDerivedClass is another subclass of AbstractClass , and it provides an
implementation for abstract_method . This makes AnotherDerivedClass non-abstract, and you
can create instances of it.

If you try:

obj = DerivedClass() # This will raise a TypeError

But this will work:

obj = AnotherDerivedClass()
obj.abstract_method()  # This will print: "Implemented abstract_method in AnotherDerivedClass"

In essence, the abstractmethod decorator is a way to enforce a contract on subclasses. It ensures
that any non-abstract subclass provides concrete implementations for certain methods deemed
essential by the abstract base class.

Example-2 for Strategy Pattern in Python: Image Processing

Imagine you're building an image processing application where users can apply different filters to
their images. The Strategy Pattern can be used to switch between different image processing
algorithms at runtime.

from abc import ABC, abstractmethod


from typing import List

# Define the ImageFilterStrategy interface


class ImageFilterStrategy(ABC):

@abstractmethod
    def apply_filter(self, image: List[List[List[int]]]) -> List[List[List[int]]]:
pass

# Implement concrete strategies


class BlackAndWhiteFilter(ImageFilterStrategy):
    def apply_filter(self, image: List[List[List[int]]]) -> List[List[List[int]]]:
# Simplified logic for converting to black and white
for i in range(len(image)):
for j in range(len(image[i])):
pixel_value = image[i][j]
avg = sum(pixel_value) // 3
image[i][j] = [avg, avg, avg]
return image

class SepiaFilter(ImageFilterStrategy):
    def apply_filter(self, image: List[List[List[int]]]) -> List[List[List[int]]]:
# Simplified logic for applying sepia filter
for i in range(len(image)):
for j in range(len(image[i])):
r, g, b = image[i][j]
tr = int(0.393 * r + 0.769 * g + 0.189 * b)
tg = int(0.349 * r + 0.686 * g + 0.168 * b)
tb = int(0.272 * r + 0.534 * g + 0.131 * b)
image[i][j] = [tr, tg, tb]
return image

# Context class that uses a strategy


class ImageProcessor:
def __init__(self, filter_strategy: ImageFilterStrategy):
self._filter_strategy = filter_strategy

    def process(self, image: List[List[List[int]]]) -> List[List[List[int]]]:
        return self._filter_strategy.apply_filter(image)

# Client code
image = [
[[255, 0, 0], [0, 255, 0], [0, 0, 255]],
[[128, 128, 128], [64, 64, 64], [32, 32, 32]]
]

processor = ImageProcessor(BlackAndWhiteFilter())
bw_image = processor.process(image)
print(bw_image)

processor = ImageProcessor(SepiaFilter())
sepia_image = processor.process(image)
print(sepia_image)

📌 Explanation of the Code:

1. We start with the ImageFilterStrategy interface that has an abstract method apply_filter . This interface will be implemented by all concrete filter strategies.
2. We then define two concrete strategies: BlackAndWhiteFilter and SepiaFilter . Each implements the apply_filter method with its own logic.
3. The ImageProcessor class, which acts as the context, uses an image filter strategy to process the image. It's unaware of the specifics of the filter method; it just invokes the apply_filter method.
4. In the client code, we can easily switch between image filters by changing the strategy passed to the ImageProcessor .

📌 Under-the-hood: The Strategy Pattern, in this context, allows for a dynamic selection of image
processing algorithms. The ImageProcessor class doesn't need to be aware of the specifics of
each filter. Instead, it relies on the strategy interface, which abstracts the details. This makes it
easy to introduce new filters or modify existing ones without changing the ImageProcessor class.

📌 Benefits:

1. Scalability: As the application grows, adding new filters becomes straightforward. Just implement a new strategy and integrate it with the client code.
2. Maintenance: Each filter logic is encapsulated in its own class, making it easier to pinpoint issues or make updates.
3. Flexibility: Users can dynamically choose the filter they want to apply, providing a versatile user experience.

This example showcases how the Strategy Pattern can be applied to a real-world scenario in image
processing, making the application more modular and extensible.
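One subtlety in the client code above: both filters mutate the nested pixel lists in place, so by the time SepiaFilter runs it receives the already-grayscaled pixels rather than the original image. A minimal sketch of the usual fix, processing a deep copy (the grayscale helper below is a stand-in for BlackAndWhiteFilter's logic, not the book's code):

```python
import copy

# Sketch: the strategies above modify the nested pixel lists in place,
# so processing a deep copy keeps the original image intact for the
# next filter. grayscale() mirrors BlackAndWhiteFilter.apply_filter.
def grayscale(image):
    for row in image:
        for j, (r, g, b) in enumerate(row):
            avg = (r + g + b) // 3
            row[j] = [avg, avg, avg]
    return image

original = [[[255, 0, 0], [0, 255, 0]]]
bw = grayscale(copy.deepcopy(original))  # work on a copy, not the original

print(original[0][0])  # [255, 0, 0] -- original pixels untouched
print(bw[0][0])        # [85, 85, 85]
```

With the classes above, passing copy.deepcopy(image) into processor.process() achieves the same protection.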

Example-3 for Strategy Pattern in Python: Dynamic Pricing for an E-commerce Platform

Imagine you're developing an e-commerce platform. Depending on various factors like holidays,
stock availability, user's purchase history, etc., you want to offer dynamic pricing to your users.
The Strategy Pattern can be employed to switch between different pricing algorithms at runtime.

from abc import ABC, abstractmethod

# Define the PricingStrategy interface


class PricingStrategy(ABC):

@abstractmethod
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
pass

# Implement concrete strategies


class HolidayDiscount(PricingStrategy):
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
return base_price * 0.9 # 10% discount

class StockClearanceDiscount(PricingStrategy):
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
# Assuming a simplistic logic where certain products need clearance
if product in ["old_model_shoe", "last_season_dress"]:
return base_price * 0.7 # 30% discount
return base_price

class LoyalCustomerDiscount(PricingStrategy):
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
# Assuming a simplistic logic where certain users are considered loyal
if user in ["user123", "user456"]:
return base_price * 0.85 # 15% discount
return base_price

# Context class that uses a strategy


class ECommercePlatform:
def __init__(self, pricing_strategy: PricingStrategy):
self._pricing_strategy = pricing_strategy

def checkout(self, base_price: float, product: str, user: str) -> float:
return self._pricing_strategy.calculate_price(base_price, product, user)

# Client code
platform = ECommercePlatform(HolidayDiscount())
print(platform.checkout(100.0, "new_model_shoe", "user789"))

platform = ECommercePlatform(StockClearanceDiscount())
print(platform.checkout(100.0, "old_model_shoe", "user789"))

platform = ECommercePlatform(LoyalCustomerDiscount())
print(platform.checkout(100.0, "new_model_shoe", "user123"))

📌 Explanation of the Code:

1. We initiate with the PricingStrategy interface that has an abstract method calculate_price . This interface will be implemented by all concrete pricing strategies.
2. We then define three concrete strategies: HolidayDiscount , StockClearanceDiscount , and LoyalCustomerDiscount . Each implements the calculate_price method based on its own criteria.
3. The ECommercePlatform class, acting as the context, uses a pricing strategy to determine the final price. It's agnostic of the specifics of the pricing method; it simply calls the calculate_price method.
4. In the client code, we can effortlessly switch between pricing strategies by altering the strategy passed to the ECommercePlatform .

📌 Under-the-hood: The Strategy Pattern here allows for a dynamic selection of pricing
algorithms. The ECommercePlatform class doesn't need to be aware of the specifics of each
pricing strategy. It relies on the strategy interface, which abstracts the details. This makes it easy to
introduce new pricing strategies or modify existing ones without changing the
ECommercePlatform class.

📌 Benefits:

1. Adaptability: As market conditions change, new pricing strategies can be added without disrupting existing code.
2. Separation of Concerns: Each pricing logic is encapsulated in its own class, ensuring that changes in one strategy don't affect others.
3. User Experience: By offering dynamic pricing, users can benefit from various discounts, enhancing their shopping experience.

This example illustrates how the Strategy Pattern can be applied to a real-world scenario in e-
commerce, making the platform more adaptable and user-friendly.
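In a real platform the strategy is usually picked from runtime conditions rather than hard-coded at each call site. A minimal sketch of one common approach, a lookup table from condition to strategy instance (the TenPercentOff / FullPrice classes and the condition names are made up for illustration; the book's PricingStrategy subclasses would slot in the same way):

```python
# Sketch: choosing a strategy at runtime from a lookup table.
# TenPercentOff / FullPrice are illustrative stand-ins for the
# PricingStrategy subclasses defined above.
class TenPercentOff:
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
        return base_price * 0.9

class FullPrice:
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
        return base_price

# Hypothetical mapping from a runtime condition to a strategy instance
STRATEGIES = {"holiday": TenPercentOff(), "default": FullPrice()}

def price_for(condition: str, base_price: float, product: str, user: str) -> float:
    # Unknown conditions fall back to the default strategy
    strategy = STRATEGIES.get(condition, STRATEGIES["default"])
    return strategy.calculate_price(base_price, product, user)

print(price_for("holiday", 100.0, "shoe", "user789"))
print(price_for("weekday", 100.0, "shoe", "user789"))
```

The table keeps the selection logic in one place, so adding a new condition means adding one entry rather than touching the checkout path.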

Example-4 for Strategy Pattern in Python: Route Planning for a Navigation System

Imagine you're developing a navigation system. Depending on user preferences or external
factors like traffic, weather, or time of day, you want to offer different route planning strategies.
The Strategy Pattern can be employed to switch between different routing algorithms at runtime.

📌 Real-life Use-case Code:

from abc import ABC, abstractmethod


from typing import List, Tuple

# Define the RouteStrategy interface


class RouteStrategy(ABC):

@abstractmethod
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
pass

# Implement concrete strategies


class ShortestRoute(RouteStrategy):
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
# Simplified logic for shortest route
return [start, end]

class ScenicRoute(RouteStrategy):
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
# Simplified logic for a more scenic route
mid_point = ((start[0] + end[0]) // 2, (start[1] + end[1]) // 2)
return [start, mid_point, end]

class AvoidTrafficRoute(RouteStrategy):
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
# Simplified logic to avoid traffic
detour = (start[0], end[1])
return [start, detour, end]

# Context class that uses a strategy


class NavigationSystem:

def __init__(self, route_strategy: RouteStrategy):
self._route_strategy = route_strategy

    def plan_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
return self._route_strategy.find_route(start, end)

# Client code
nav_system = NavigationSystem(ShortestRoute())
print(nav_system.plan_route((0, 0), (10, 10)))

nav_system = NavigationSystem(ScenicRoute())
print(nav_system.plan_route((0, 0), (10, 10)))

nav_system = NavigationSystem(AvoidTrafficRoute())
print(nav_system.plan_route((0, 0), (10, 10)))

📌 Explanation of the Code:

1. We begin with the RouteStrategy interface that has an abstract method find_route . This interface will be implemented by all concrete routing strategies.
2. We then define three concrete strategies: ShortestRoute , ScenicRoute , and AvoidTrafficRoute . Each implements the find_route method based on its own criteria.
3. The NavigationSystem class, acting as the context, uses a routing strategy to determine the best route. It's unaware of the specifics of the routing method; it simply calls the find_route method.
4. In the client code, we can easily switch between routing strategies by changing the strategy passed to the NavigationSystem .

📌 Under-the-hood: The Strategy Pattern here allows for a dynamic selection of routing
algorithms. The NavigationSystem class doesn't need to be aware of the specifics of each routing
strategy. It relies on the strategy interface, which abstracts the details. This makes it easy to
introduce new routing strategies or modify existing ones without changing the NavigationSystem
class.

📌 Benefits:

1. Flexibility: Users can choose the route type they prefer, whether it's the fastest, the most scenic, or one that avoids traffic.
2. Maintainability: Each routing logic is encapsulated in its own class, ensuring that changes in one strategy don't affect others.
3. Expandability: As new routing criteria emerge (e.g., routes that avoid tolls or routes optimized for cycling), new strategies can be added seamlessly.

This example showcases how the Strategy Pattern can be applied to a real-world scenario in
navigation systems, making the platform more flexible and user-centric.
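Because Python functions are first-class objects, stateless strategies don't strictly need classes: any callable with the right signature can act as the strategy. A sketch of this lighter-weight variant of the same navigation example (a stylistic alternative, not the book's class-based code):

```python
from typing import Callable, List, Tuple

Point = Tuple[int, int]
RouteFn = Callable[[Point, Point], List[Point]]

# Strategies as plain functions instead of classes
def shortest_route(start: Point, end: Point) -> List[Point]:
    return [start, end]

def scenic_route(start: Point, end: Point) -> List[Point]:
    mid = ((start[0] + end[0]) // 2, (start[1] + end[1]) // 2)
    return [start, mid, end]

class NavigationSystem:
    # Same context role as above, but accepting any callable as the strategy
    def __init__(self, route_strategy: RouteFn):
        self._route_strategy = route_strategy

    def plan_route(self, start: Point, end: Point) -> List[Point]:
        return self._route_strategy(start, end)

nav = NavigationSystem(scenic_route)
print(nav.plan_route((0, 0), (10, 10)))  # [(0, 0), (5, 5), (10, 10)]
```

The class-based form still earns its keep when a strategy carries configuration or state; for simple stateless algorithms, functions keep the code shorter.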

🐍🚀 Template Design Pattern 🐍🚀

📌 The Template Design Pattern is a behavioral design pattern that defines the program
skeleton of an algorithm in a method, but delays some steps to subclasses, i.e. leave the details to
be implemented by the child classes.

It allows subclasses to override certain steps of an algorithm without changing the algorithm's
structure.

This behavioral design pattern is one of the easiest to understand and implement, and it is widely used in framework development. It also helps to avoid code duplication.

AbstractClass contains the templateMethod(), which should be treated as final so that it is not overridden (Python has no final keyword, so this is enforced by convention). This template method makes use of the other operations in order to run the algorithm, but is decoupled from the actual implementation of these methods. All operations used by the template method are made abstract, so their implementation is deferred to subclasses.

ConcreteClass implements all the operations required by the templateMethod that were defined
as abstract in the parent class. There can be many different ConcreteClasses.

📌 Use Cases:

1. When you want to let clients extend only particular steps of an algorithm, but not the whole algorithm or its structure.
2. When you have several classes that contain almost identical algorithms with some minor differences. As a result, you might need to modify all classes when the algorithm changes.

📌 The main idea behind this pattern is to define a method (often termed the "template method")
in an abstract base class. This method contains a series of method calls that every subclass will
execute in the same order, but the exact implementation of some of these methods is deferred to
the concrete subclasses.

Let's see an example WITHOUT and then WITH the "Template Design Pattern"

Without the Template Design Pattern

Let's consider a scenario where we have a system that processes different types of documents.
Each document type has a similar processing flow: load the document, parse the document, and
save the document. However, the way each document type is parsed varies.

class XMLDocument:
def load(self, file_name):
print(f"Loading XML document from {file_name}")

def parse(self):
print("Parsing XML document")

def save(self):
print("Saving XML document")

class JSONDocument:
def load(self, file_name):
print(f"Loading JSON document from {file_name}")

def parse(self):
print("Parsing JSON document")

def save(self):
print("Saving JSON document")

📌 The above code has a lot of repetition. The load and save methods are almost identical for
both XMLDocument and JSONDocument .

📌 If we need to add another document type or change the processing flow, we'd have to modify
multiple classes.

With the Template Design Pattern

Let's refactor the code using the Template Design Pattern:

from abc import ABC, abstractmethod

class Document(ABC):
def load(self, file_name):
print(f"Loading {self.get_document_type()} document from {file_name}")

@abstractmethod
def parse(self):
pass

def save(self):
print(f"Saving {self.get_document_type()} document")

@abstractmethod
def get_document_type(self):
pass

    def process_document(self, file_name):
        self.load(file_name)
        self.parse()
        self.save()

class XMLDocument(Document):
def parse(self):
print("Parsing XML document")

def get_document_type(self):
return "XML"

class JSONDocument(Document):
def parse(self):
print("Parsing JSON document")

def get_document_type(self):
return "JSON"

📌 We've introduced an abstract class Document which acts as the template. It has the common
methods load and save , and an abstract method parse which will be implemented by concrete
subclasses.

📌 The process_document method in the Document class defines the sequence of steps to
process a document. This is the template method.

📌 Concrete classes like XMLDocument and JSONDocument provide the specific implementation for
the parse method.

📌 By using the Template Design Pattern, we've reduced code duplication and made it easier to
add new document types or modify the processing flow.

In conclusion, the Template Design Pattern provides a clear structure that promotes code reuse
and flexibility. It allows us to define a series of steps in an algorithm and let subclasses implement
specific parts of the algorithm without changing its structure.
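The refactored version above has no client code, so here is a short usage sketch. The Document and XMLDocument classes are repeated so the snippet runs on its own, and report.xml is just an illustrative file name (load() only prints; it opens nothing):

```python
from abc import ABC, abstractmethod

# Repeated from the refactored example so this snippet is self-contained
class Document(ABC):
    def load(self, file_name):
        print(f"Loading {self.get_document_type()} document from {file_name}")

    @abstractmethod
    def parse(self):
        pass

    def save(self):
        print(f"Saving {self.get_document_type()} document")

    @abstractmethod
    def get_document_type(self):
        pass

    def process_document(self, file_name):
        # The template method: fixed sequence, customizable steps
        self.load(file_name)
        self.parse()
        self.save()

class XMLDocument(Document):
    def parse(self):
        print("Parsing XML document")

    def get_document_type(self):
        return "XML"

# "report.xml" is a made-up name purely for the demonstration
XMLDocument().process_document("report.xml")
# Loading XML document from report.xml
# Parsing XML document
# Saving XML document
```

Note that trying Document() directly raises a TypeError, since the base class still has unimplemented abstract methods.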

Let's delve into the details of how the refactored code, which
implements the "Template Design Pattern", addresses the
issues present in the original code.

📌 Issue of Repetition: In the original code, the methods load and save were repeated for both
XMLDocument and JSONDocument . This repetition is not just about duplicating lines of code; it's
about duplicating logic. If we had to change the way documents are loaded or saved, we would
need to make changes in multiple places.

Solution with Template Design Pattern: The refactored code encapsulates the common logic
within the Document abstract class. The load and save methods are defined once in this class,
and all concrete document classes inherit these methods. This means there's a single place to
modify the loading or saving logic, ensuring consistency and reducing maintenance effort.

📌 Flexibility in Algorithm Structure: In the original code, if we wanted to introduce a new step
in the document processing flow or change the order of steps, we would have to modify each
concrete document class.

Solution with Template Design Pattern: The refactored code introduces the process_document
method in the Document class, which defines the sequence of steps (or the algorithm's structure).
If we need to introduce a new step or change the order, we only have to modify this method in
one place. This centralizes the control over the algorithm's structure.

📌 Ease of Extensibility: If we wanted to introduce a new document type in the original code, we
would have to define the entire processing flow for that document, leading to more repetition.

Solution with Template Design Pattern: With the refactored code, adding support for a new
document type is as simple as creating a new subclass of Document and providing an
implementation for the abstract methods. The common steps are already defined in the parent
class, so there's no need to redefine them.

📌 Decoupling of High-Level Algorithm from Low-Level Implementation: In the original code,
the high-level algorithm (the sequence of processing steps) and the low-level implementations
(how each step is carried out) were mixed together in each concrete class.

Solution with Template Design Pattern: The refactored code decouples these concerns. The
Document class focuses on the high-level algorithm, defining the sequence of steps in
process_document . The concrete classes, like XMLDocument and JSONDocument , focus on the low-
level implementations, defining how specific steps like parse are carried out. This separation of
concerns makes the code more modular and easier to understand.

In essence, the Template Design Pattern in the refactored code promotes code reusability,
centralizes control over the algorithm's structure, enhances flexibility, and provides a clear
separation of concerns. All these benefits address the issues present in the original code, making
the system more maintainable and scalable.

📌 Real-life Use-Case: Let's consider a real-world scenario of a data processing system where raw
data needs to be loaded, processed, and then saved. The steps for loading and saving data might
be the same for different types of data, but the processing step might differ.

from abc import ABC, abstractmethod

class DataProcessor(ABC):

def load_data(self):
# This method contains the generic way to load data
print("Loading data...")

@abstractmethod
def process_data(self):
pass

def save_data(self):
# This method contains the generic way to save data
print("Saving data...")

def execute(self):
# This is the template method
self.load_data()
self.process_data()
self.save_data()

class ImageProcessor(DataProcessor):

def process_data(self):
print("Processing image data...")

class TextProcessor(DataProcessor):
def process_data(self):
print("Processing text data...")

# Client code
image_processor = ImageProcessor()
image_processor.execute()

text_processor = TextProcessor()
text_processor.execute()

📌 Description of the Example Code:


We have an abstract base class DataProcessor which has the template method execute() .
This method defines the order in which the methods load_data() , process_data() , and
save_data() are called.

The methods load_data() and save_data() have a default implementation, but the
process_data() method is abstract, meaning that every concrete subclass must provide its
own implementation for this method.

We then have two concrete subclasses: ImageProcessor and TextProcessor . Each of these
provides its own implementation of the process_data() method.

In the client code, we create instances of ImageProcessor and TextProcessor and call their
execute() methods. This demonstrates how the template method ensures the steps are
executed in the same order, but the processing step can vary based on the concrete class.

📌 The beauty of the Template Design Pattern is that it provides a clear separation between the
generic algorithm and the specific steps that can be customized by subclasses. This promotes
code reuse and flexibility.

Let's see how the above code example adheres to the principles and requirements of the
Template Design Pattern in Python
📌 Algorithm Skeleton in Base Class: The DataProcessor class acts as the base class that
defines the skeleton of the data processing algorithm. This is evident in the execute() method,
which outlines the sequence of steps to be followed.

def execute(self):
self.load_data()
self.process_data()
self.save_data()

In the above code, the execute() method is the template method that dictates the order of
operations. It ensures that data is first loaded, then processed, and finally saved.

📌 Deferring Specific Steps to Subclasses: The process_data() method in the DataProcessor
class is marked as an abstract method. This means that while the base class provides the overall
structure of the algorithm, it intentionally leaves out the implementation of this specific step,
expecting the concrete subclasses to provide their own versions.

@abstractmethod
def process_data(self):
pass

In the provided code, both ImageProcessor and TextProcessor subclasses provide their own
implementations of the process_data() method:

def process_data(self):
print("Processing image data...")

and

def process_data(self):
print("Processing text data...")

This showcases the principle of allowing subclasses to redefine certain steps of the algorithm
without altering its overall structure.

📌 Maintaining the Algorithm's Structure: Even though the process_data() method's
implementation varies between subclasses, the overall structure of the algorithm remains
unchanged. When the execute() method is called on any subclass instance, the sequence of
operations remains: load data, process data, and save data. This is a core tenet of the Template
Design Pattern, ensuring that while specific steps can be customized, the overarching structure
remains consistent.

📌 Encapsulation of Invariant Steps: The methods load_data() and save_data() in the
DataProcessor class encapsulate steps of the algorithm that are invariant (i.e., they don't change
across different types of data processing). By providing a default implementation for these
methods in the base class, the pattern ensures that these steps are consistent across all
subclasses and don't need to be redefined unless there's a specific need.

def load_data(self):
print("Loading data...")

and

def save_data(self):
print("Saving data...")

In summary, the provided code example adheres to the principles of the Template Design Pattern
by defining a clear algorithm structure in the base class, allowing subclasses to customize specific
steps, and ensuring that the overall sequence of operations remains consistent across all
subclasses.
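A common extension of this pattern adds optional "hook" methods: steps with a do-nothing default in the base class that subclasses may override but are never forced to. A sketch built on the same DataProcessor idea (the validate() hook and CsvProcessor are my additions for illustration, re-declared here so the snippet runs standalone):

```python
from abc import ABC, abstractmethod

class DataProcessor(ABC):

    def execute(self):
        # Template method: fixed order, with validate() as an optional hook
        self.load_data()
        self.process_data()
        self.validate()
        self.save_data()

    def load_data(self):
        print("Loading data...")

    @abstractmethod
    def process_data(self):
        pass

    def validate(self):
        # Hook: empty default, so subclasses override it only when needed
        pass

    def save_data(self):
        print("Saving data...")

class CsvProcessor(DataProcessor):

    def process_data(self):
        print("Processing CSV rows...")

    def validate(self):
        print("Validating CSV schema...")

CsvProcessor().execute()
# Loading data...
# Processing CSV rows...
# Validating CSV schema...
# Saving data...
```

Unlike an abstract method, a hook is not part of the mandatory contract; it simply reserves a slot in the algorithm where subclasses can opt in.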

Example-2 of the Template Design Pattern in Python

📌 Real-life Use-Case: Let's consider a scenario of building different types of computer systems.
The process of building a computer generally involves selecting components, assembling them,
installing software, and then performing tests. However, the specifics of these steps might vary
based on the type of computer being built (e.g., a gaming computer vs. a server).

from abc import ABC, abstractmethod

class ComputerBuilder(ABC):

def select_components(self):
# Generic method to select components
print("Selecting basic components...")

@abstractmethod
def assemble_components(self):
pass

def install_software(self):
# Generic method to install software
print("Installing basic software...")

@abstractmethod
def perform_tests(self):
pass

def build(self):
# This is the template method
self.select_components()
self.assemble_components()
self.install_software()
self.perform_tests()

class GamingComputerBuilder(ComputerBuilder):

def assemble_components(self):
print("Assembling components optimized for gaming...")

def perform_tests(self):
print("Performing high-end graphics and performance tests...")

class ServerComputerBuilder(ComputerBuilder):

def assemble_components(self):
print("Assembling components optimized for server operations...")

def perform_tests(self):
print("Performing load and stress tests...")

# Client code
gaming_computer = GamingComputerBuilder()
gaming_computer.build()

server_computer = ServerComputerBuilder()
server_computer.build()

📌 Description of the Example Code:

The ComputerBuilder class is the abstract base class that defines the skeleton of the
computer-building process. The build() method is the template method that dictates the
order of operations: selecting components, assembling them, installing software, and
performing tests.

The methods select_components() and install_software() have default
implementations, representing steps that are common across all types of computer builds.

The methods assemble_components() and perform_tests() are abstract, meaning that
concrete subclasses must provide their own implementations. These methods represent
steps that can vary based on the type of computer being built.

The GamingComputerBuilder subclass provides implementations optimized for building
gaming computers. It assembles gaming-specific components and performs high-end
graphics and performance tests.

The ServerComputerBuilder subclass provides implementations optimized for building
server computers. It assembles server-specific components and performs load and stress
tests.

In the client code, we create instances of both GamingComputerBuilder and
ServerComputerBuilder and call their build() methods. This demonstrates how the
template method ensures the steps are executed in the same order, but specific steps like
assembly and testing can vary based on the type of computer.

Let's see how the above code example adheres to the principles and requirements of the
Template Design Pattern in Python

📌 Algorithm Skeleton in Base Class: The ComputerBuilder class serves as the base class that
outlines the skeleton of the computer-building algorithm. This is evident in the build() method,
which lays out the sequence of steps to be followed.

def build(self):
self.select_components()
self.assemble_components()
self.install_software()
self.perform_tests()

In this code segment, the build() method (our template method) dictates the order of
operations. It ensures that components are first selected, then assembled, followed by software
installation, and finally, tests are performed.

📌 Deferring Specific Steps to Subclasses: The methods assemble_components() and
perform_tests() in the ComputerBuilder class are marked with the @abstractmethod
decorator. This indicates that the base class provides the overarching structure of the algorithm
but intentionally omits the implementation of these specific steps, expecting the concrete
subclasses to furnish their own versions.

@abstractmethod
def assemble_components(self):
pass

@abstractmethod
def perform_tests(self):
pass

In the provided code, the GamingComputerBuilder and ServerComputerBuilder subclasses each
offer their distinct implementations of the assemble_components() and perform_tests()
methods:

def assemble_components(self):
print("Assembling components optimized for gaming...")

and

def assemble_components(self):
print("Assembling components optimized for server operations...")

This exemplifies the principle of enabling subclasses to redefine specific steps of the algorithm
without altering its overarching structure.

📌 Maintaining the Algorithm's Structure: Even though the implementations of
assemble_components() and perform_tests() vary between subclasses, the overall structure of
the algorithm remains consistent. When the build() method is invoked on any subclass instance,
the sequence of operations remains: select components, assemble them, install software, and
perform tests. This consistency in the overarching structure is a hallmark of the Template Design
Pattern.

📌 Encapsulation of Invariant Steps: The methods select_components() and
install_software() in the ComputerBuilder class encapsulate steps of the algorithm that are
invariant (i.e., they remain consistent across different types of computer builds). By offering a
default implementation for these methods in the base class, the pattern ensures that these steps
remain consistent across all subclasses and don't need to be redefined unless there's a specific
requirement.

def select_components(self):
print("Selecting basic components...")

and

def install_software(self):
print("Installing basic software...")

In conclusion, the provided code example adheres to the principles of the Template Design
Pattern by establishing a clear algorithm structure in the base class, allowing subclasses to
customize specific steps, and ensuring that the overall sequence of operations remains consistent
across all subclasses.

Why did I use the @abstractmethod in the above examples
📌 The @abstractmethod is a decorator provided by the abc (Abstract Base Class) module in
Python. It's used to declare that a method is abstract, which means:

1. The method does not have a concrete implementation in the base class.

2. Any concrete (non-abstract) subclass must provide an implementation for this method.

📌 In the context of the Template Design Pattern:


The abstract methods represent the "hooks" or "placeholders" for the parts of the algorithm
that can vary or are meant to be customized by the subclasses.

By marking a method as abstract, we're signaling to the developer that this particular step of
the algorithm is intended to be overridden by subclasses to provide specific behavior.

📌 In the computer-building example I provided:


The methods assemble_components() and perform_tests() were marked as abstract
because the way you assemble components for a gaming computer might be different from a
server computer. Similarly, the tests you'd run on these two types of computers would differ.

By using @abstractmethod , we ensure that any concrete subclass of ComputerBuilder (like
GamingComputerBuilder or ServerComputerBuilder ) must provide its own implementation
of these methods. If a developer creates a new subclass and forgets to implement any of the
abstract methods, Python will raise a TypeError when they try to instantiate the subclass,
thus catching the oversight early in the development process.

📌 In essence, the @abstractmethod serves a dual purpose:

1. It provides a clear contract for developers, indicating which methods they must implement in concrete subclasses.
2. It ensures the integrity of the Template Design Pattern by mandating that the customizable parts of the algorithm are indeed customized by the subclasses.
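To see that contract enforced on this example, consider a subclass that forgets one of the abstract methods. The sketch below re-declares a minimal ComputerBuilder so it runs standalone; IncompleteBuilder is a made-up name for illustration:

```python
from abc import ABC, abstractmethod

# Minimal re-declaration of the base class for a standalone demo
class ComputerBuilder(ABC):
    @abstractmethod
    def assemble_components(self):
        pass

    @abstractmethod
    def perform_tests(self):
        pass

class IncompleteBuilder(ComputerBuilder):
    # Implements assemble_components but forgets perform_tests,
    # so the class itself remains abstract
    def assemble_components(self):
        print("Assembling components...")

try:
    IncompleteBuilder()
except TypeError as err:
    # Python refuses to instantiate a class with unimplemented abstract methods
    print(f"TypeError: {err}")
```

The error surfaces at instantiation time, not at method-call time, which is what catches the oversight early in development.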

Example-3: Template Design Pattern in Python

📌 Real-life Use-Case: Let's consider the scenario of an online shopping platform. The general
process of placing an order involves adding items to the cart, calculating the total, applying
discounts (if any), and finally processing the payment. However, the specifics of payment
processing might vary based on the payment method chosen (e.g., credit card, PayPal,
cryptocurrency).

from abc import ABC, abstractmethod

class OnlineOrder(ABC):

    def add_to_cart(self, items):
        self.items = items
        print(f"Added {len(items)} items to the cart.")

    def calculate_total(self):
        self.total = sum(self.items.values())
        print(f"Total amount: ${self.total}")

    def apply_discount(self):
        # For simplicity, let's apply a generic 10% discount
        self.total *= 0.9
        print(f"Discount applied. New total: ${self.total}")

    @abstractmethod
    def process_payment(self):
        pass

    def checkout(self, items):
        # This is the template method
        self.add_to_cart(items)
        self.calculate_total()
        self.apply_discount()
        self.process_payment()

class CreditCardOrder(OnlineOrder):

    def process_payment(self):
        print("Processing payment through credit card...")

class PayPalOrder(OnlineOrder):

    def process_payment(self):
        print("Processing payment through PayPal...")

class CryptoOrder(OnlineOrder):

    def process_payment(self):
        print("Processing payment through cryptocurrency...")

# Client code
items = {'book': 20, 'pen': 2, 'laptop': 1000}
credit_card_order = CreditCardOrder()
credit_card_order.checkout(items)

paypal_order = PayPalOrder()
paypal_order.checkout(items)

crypto_order = CryptoOrder()
crypto_order.checkout(items)

📌 Description of the Example Code:


The OnlineOrder class is the abstract base class that defines the skeleton of the order
placement process. The checkout() method is the template method that dictates the order
of operations: adding items to the cart, calculating the total, applying discounts, and
processing the payment.

The methods add_to_cart() , calculate_total() , and apply_discount() have default implementations, representing steps that are common across all types of payment methods.

The process_payment() method is abstract, meaning that concrete subclasses must provide
their own implementations. This method represents the step that can vary based on the
payment method chosen.

The CreditCardOrder , PayPalOrder , and CryptoOrder subclasses each provide their
specific implementations of the process_payment() method, tailored to their respective
payment methods.

In the client code, we create instances of the different order types and call their checkout()
methods. This demonstrates how the template method ensures the steps are executed in the
same order, but the payment processing step can vary based on the chosen payment
method.
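
Template methods often also include optional "hook" steps: methods that ship with a default (often empty) implementation in the base class, which subclasses may override but are not required to. Here is a hedged sketch extending the order idea; the send_receipt hook is an illustrative addition, not part of the example above:

```python
from abc import ABC, abstractmethod

class OnlineOrder(ABC):

    def checkout(self, items):
        # Simplified template method with a required step and an optional hook.
        self.items = items
        self.total = sum(items.values())
        self.process_payment()
        self.send_receipt()  # hook: runs for every order

    @abstractmethod
    def process_payment(self):
        pass  # required step: every concrete subclass must override this

    def send_receipt(self):
        pass  # hook: overriding is optional; the default does nothing

class PayPalOrder(OnlineOrder):
    def process_payment(self):
        print("Processing payment through PayPal...")

class CreditCardOrder(OnlineOrder):
    def process_payment(self):
        print("Processing payment through credit card...")

    def send_receipt(self):  # this subclass opts in to the hook
        print(f"Emailing receipt for ${self.total}...")

PayPalOrder().checkout({'pen': 2})      # payment line only
CreditCardOrder().checkout({'pen': 2})  # payment line plus receipt line
```

The distinction is that @abstractmethod forces customization, while a hook merely permits it.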

🐍🚀 Repository Pattern in Python 🐍🚀

The Repository Pattern is a design pattern used in software development to abstract away the
complexities of accessing data from different data sources, be it databases, APIs, or even in-
memory storage. It provides a clean interface for the rest of the application to access data without
concerning itself with the underlying data access mechanism.

By doing so:

1. Decoupling: The main application logic remains decoupled from the underlying data source.
This means that if we change from one type of database to another, or from a database to an
API, the main application code doesn't need significant changes.

2. Testing: It facilitates easier testing because mock repositories can be used in place of real
ones. This way, unit tests won't have any external dependencies.

3. Consistency: The Repository Pattern enforces consistent access patterns which can lead to
improved maintainability and predictability.

4. Abstraction: The actual operations, like CRUD (Create, Read, Update, Delete) operations, are
abstracted behind a consistent interface. This means that the main application logic doesn't
need to know about SQL queries, API calls, or other data retrieval methods.

📌 Real-life Use Case Code of Repository Pattern:


Imagine we are designing a system for a book store which uses an SQL database initially, but we
want the flexibility to switch to a NoSQL database in the future without many changes.

Here's a simplified example:

from abc import ABC, abstractmethod
from typing import List, Union

# Define the book entity
class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

# Repository interface
class BookRepository(ABC):

    @abstractmethod
    def add(self, book: Book):
        pass

    @abstractmethod
    def get(self, id: int) -> Union[Book, None]:
        pass

    @abstractmethod
    def list(self) -> List[Book]:
        pass

    @abstractmethod
    def update(self, book: Book):
        pass

    @abstractmethod
    def delete(self, id: int):
        pass

# SQL implementation of the repository
class SQLBookRepository(BookRepository):

    def __init__(self):
        # This is a pseudo-database for demonstration purposes.
        self._db = {}

    def add(self, book: Book):
        self._db[book.id] = book

    def get(self, id: int) -> Union[Book, None]:
        return self._db.get(id)

    def list(self) -> List[Book]:
        return list(self._db.values())

    def update(self, book: Book):
        if book.id in self._db:
            self._db[book.id] = book

    def delete(self, id: int):
        if id in self._db:
            del self._db[id]

# Service layer which uses the repository
class BookService:

    def __init__(self, repo: BookRepository):
        self._repo = repo

    def add_book(self, title: str, author: str):
        # In reality, we'd have more logic here like ID generation, validation, etc.
        book = Book(id=123, title=title, author=author)
        self._repo.add(book)

Here, the BookService only interacts with the abstract BookRepository . Even if we change the data source from SQL to NoSQL or an API in the future, we just need a new repository implementation; the service layer remains unchanged. This is a simplified example; in production-grade code there would be more intricate handling, error management, and optimizations.

📌 Abstraction of Data Access Layer


The main essence of the Repository Pattern is to provide an abstraction over the data access layer.
In the above code:

The BookRepository class is an abstract base class (ABC) that provides a contract for the methods a book repository should have. By marking these methods with @abstractmethod , we ensure that any concrete implementation of the repository will provide them.

📌 Decoupling Application Logic from Data Access Logic


In the service layer:

class BookService:

    def __init__(self, repo: BookRepository):
        self._repo = repo

The BookService is initialized with an object that adheres to the BookRepository contract (i.e.,
has the methods defined by the abstract base class). This way, the service layer is decoupled from
the data access logic. It doesn't matter if the underlying implementation is SQL, NoSQL, or another
data source. As long as the data source adheres to the contract, the service layer remains
unchanged. This adheres to the Dependency Inversion Principle, a fundamental SOLID principle.

📌 Concrete Implementation of the Repository


The actual data operations are carried out in the concrete implementations of the repository. For
example:

class SQLBookRepository(BookRepository):
    ...

This SQLBookRepository is a concrete implementation of the BookRepository interface. While in this example, we used a pseudo-database (a simple dictionary), in a real-world application, this class would handle SQL queries to the database.
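
For illustration, here is what a real SQL-backed implementation might look like, using Python's built-in sqlite3 module. The SQLiteBookRepository name and the table schema are assumptions for this sketch; what matters is that it satisfies the same BookRepository contract, so the service layer would work with it unchanged:

```python
import sqlite3
from abc import ABC, abstractmethod
from typing import List, Union

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class BookRepository(ABC):
    @abstractmethod
    def add(self, book: Book): ...
    @abstractmethod
    def get(self, id: int) -> Union[Book, None]: ...
    @abstractmethod
    def list(self) -> List[Book]: ...
    @abstractmethod
    def update(self, book: Book): ...
    @abstractmethod
    def delete(self, id: int): ...

class SQLiteBookRepository(BookRepository):
    """Same contract as the dictionary-based version, backed by a real SQL table."""

    def __init__(self, path: str = ":memory:"):
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)"
        )

    def add(self, book: Book):
        self._conn.execute(
            "INSERT INTO books (id, title, author) VALUES (?, ?, ?)",
            (book.id, book.title, book.author),
        )
        self._conn.commit()

    def get(self, id: int) -> Union[Book, None]:
        row = self._conn.execute(
            "SELECT id, title, author FROM books WHERE id = ?", (id,)
        ).fetchone()
        return Book(*row) if row else None

    def list(self) -> List[Book]:
        rows = self._conn.execute("SELECT id, title, author FROM books").fetchall()
        return [Book(*row) for row in rows]

    def update(self, book: Book):
        self._conn.execute(
            "UPDATE books SET title = ?, author = ? WHERE id = ?",
            (book.title, book.author, book.id),
        )
        self._conn.commit()

    def delete(self, id: int):
        self._conn.execute("DELETE FROM books WHERE id = ?", (id,))
        self._conn.commit()

repo = SQLiteBookRepository()
repo.add(Book(1, "Dune", "Frank Herbert"))
print(repo.get(1).title)
```

Because the class honors the same contract, swapping it in is a one-line change where the repository is constructed.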

📌 Consistent Access Patterns


All data access operations, like adding a book, retrieving a book, listing all books, etc., are funneled
through the methods defined in the BookRepository interface:

add(book: Book)

get(id: int) -> Union[Book, None]

list() -> List[Book]

update(book: Book)

delete(id: int)

This ensures that data is accessed and manipulated in a consistent manner, regardless of the
underlying data source.

📌 Flexibility and Futureproofing


Because the application logic (in BookService ) only interacts with the abstract repository,
switching from one storage mechanism to another is less painful. Let's say we want to migrate
from an SQL database to a NoSQL database. All we would need to do is:

1. Create a new repository implementation (e.g., NoSQLBookRepository ) that adheres to the BookRepository contract.

2. Replace the repository instance in the service layer from SQLBookRepository to NoSQLBookRepository .

The main application logic remains untouched, which reduces bugs, makes the codebase more
maintainable, and speeds up development when migrating or extending data sources.
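
As a concrete (if simplified) illustration of step 1, a document-store-style repository can be sketched with the standard-library shelve module standing in for a real NoSQL database; the ShelveBookRepository name and the use of shelve are assumptions for this sketch, and only a few of the contract methods are shown. Step 2 is then a one-line change in the wiring code:

```python
import os
import shelve
import tempfile

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class ShelveBookRepository:
    """Key-value ("document") storage standing in for a NoSQL backend.

    In real code this class would subclass BookRepository; the method
    names and signatures match that contract.
    """

    def __init__(self, path: str):
        self._path = path

    def add(self, book: Book):
        with shelve.open(self._path) as db:
            # Store a plain dict (a "document") keyed by the book id.
            db[str(book.id)] = {"id": book.id, "title": book.title, "author": book.author}

    def get(self, id: int):
        with shelve.open(self._path) as db:
            doc = db.get(str(id))
            return Book(**doc) if doc else None

    def delete(self, id: int):
        with shelve.open(self._path) as db:
            if str(id) in db:
                del db[str(id)]

# Step 2: the only change in the application wiring.
# repo = SQLBookRepository()             # before
path = os.path.join(tempfile.mkdtemp(), "books_db")
repo = ShelveBookRepository(path)        # after
repo.add(Book(1, "Dune", "Frank Herbert"))
print(repo.get(1).title)
```

Everything that talks to the repository through the contract keeps working, which is precisely the payoff the pattern promises.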

📌 Testing and Mocking


By having an abstract base class for the repository, it becomes simpler to test services that
depend on the data. A mock repository adhering to the same contract (i.e., having the same
methods) can be provided to the service during testing, thus isolating the service logic from actual
data operations. This ensures that tests are fast, reliable, and not dependent on external factors
like database state.
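
A sketch of that idea, using a hand-written test double (the FakeBookRepository name is illustrative, and the BookRepository interface is trimmed to the one method the test needs):

```python
from abc import ABC, abstractmethod

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class BookRepository(ABC):
    @abstractmethod
    def add(self, book: Book): ...

class BookService:
    def __init__(self, repo: BookRepository):
        self._repo = repo

    def add_book(self, title: str, author: str):
        book = Book(id=123, title=title, author=author)
        self._repo.add(book)

class FakeBookRepository(BookRepository):
    """Test double: records what the service asked it to store."""

    def __init__(self):
        self.added = []

    def add(self, book: Book):
        self.added.append(book)

# The unit test exercises BookService logic with no database involved.
fake = FakeBookRepository()
service = BookService(fake)
service.add_book("Dune", "Frank Herbert")

assert len(fake.added) == 1
assert fake.added[0].title == "Dune"
print("service test passed with a fake repository")
```

The fake satisfies the same contract as a real repository, so the service cannot tell the difference, and the test stays fast and deterministic.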

In summary, the provided code embodies the principles of the Repository Pattern by abstracting
data operations, decoupling application logic from data access logic, ensuring consistent data
access patterns, and providing flexibility for future changes and testing.

Let's see an example WITHOUT and then WITH the "Repository Pattern in Python"
📌 Code Example Without Using Repository Pattern:
Imagine we're building a book management system that directly interacts with a database.

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class BookDB:
    def __init__(self):
        self._db = {}

    def add_book(self, book: Book):
        self._db[book.id] = book

    def get_book(self, id: int) -> Book:
        return self._db.get(id)

    def list_books(self) -> list:
        return list(self._db.values())

    def update_book(self, book: Book):
        if book.id in self._db:
            self._db[book.id] = book

    def delete_book(self, id: int):
        if id in self._db:
            del self._db[id]

bookDB = BookDB()
book = Book(1, "Harry Potter", "J.K. Rowling")
bookDB.add_book(book)

📌 Issues With Above Code:


1. Tightly Coupled: The BookDB class directly interacts with the pseudo-database. If we want to
change the data source (e.g., from in-memory dictionary to SQL), we would need to modify
the BookDB class and all places where it's used.

2. Hard to Test: Since data access logic is directly embedded, unit testing becomes complex.
We'd have to mock the data source every time we want to test the application logic.

3. Not Scalable: If we want to introduce new data sources or services, we would have to
expand the BookDB class, leading to bloated classes that violate the Single Responsibility
Principle.

4. Not Abstracted: Business logic and data access logic are mixed together. A better approach
would keep them separate.

📌 Refactored Code Using Repository Pattern:

from abc import ABC, abstractmethod

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

# Repository interface
class IBookRepository(ABC):

    @abstractmethod
    def add(self, book: Book):
        pass

    @abstractmethod
    def get(self, id: int) -> Book:
        pass

    @abstractmethod
    def list(self) -> list:
        pass

    @abstractmethod
    def update(self, book: Book):
        pass

    @abstractmethod
    def delete(self, id: int):
        pass

# In-memory implementation of the repository
class InMemoryBookRepository(IBookRepository):

    def __init__(self):
        self._db = {}

    def add(self, book: Book):
        self._db[book.id] = book

    def get(self, id: int) -> Book:
        return self._db.get(id)

    def list(self) -> list:
        return list(self._db.values())

    def update(self, book: Book):
        if book.id in self._db:
            self._db[book.id] = book

    def delete(self, id: int):
        if id in self._db:
            del self._db[id]

# Service layer which uses the repository
class BookService:

    def __init__(self, repo: IBookRepository):
        self._repo = repo

    def add_book(self, title: str, author: str) -> Book:
        # For simplicity, we're using sequential IDs. In real applications,
        # UUIDs or database auto-increments would be used.
        book_id = len(self._repo.list()) + 1
        book = Book(book_id, title, author)
        self._repo.add(book)
        return book

bookRepo = InMemoryBookRepository()
bookService = BookService(bookRepo)
new_book = bookService.add_book("Harry Potter", "J.K. Rowling")
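
The sequential-ID scheme above is only for demonstration, and it is fragile: len(list()) + 1 can hand out a duplicate ID once a book has been deleted. A hedged sketch of the UUID alternative the comment alludes to (the ID field becomes a str, and DictRepo is a minimal stand-in repository for the demo):

```python
import uuid

class Book:
    def __init__(self, id: str, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class DictRepo:
    """Minimal stand-in repository for the demo."""

    def __init__(self):
        self._db = {}

    def add(self, book):
        self._db[book.id] = book

class BookService:
    def __init__(self, repo):
        self._repo = repo

    def add_book(self, title: str, author: str) -> Book:
        # uuid4 gives a collision-resistant ID with no repository round-trip,
        # so the service doesn't need to ask the store how many books exist.
        book = Book(str(uuid.uuid4()), title, author)
        self._repo.add(book)
        return book

service = BookService(DictRepo())
a = service.add_book("Dune", "Frank Herbert")
b = service.add_book("Dune", "Frank Herbert")
print(a.id != b.id)  # True: identical payloads still get distinct IDs
```
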

📌 Benefits of Refactored Code:


1. Separation of Concerns: The data access logic is abstracted away from the business logic.

2. Flexibility: We can easily replace or add new data sources by implementing the
IBookRepository interface.

3. Testability: Testing becomes easier as we can mock the repository interface to test the
service layer.

4. Maintainability: If we need to change the data source or add functionalities, the codebase
becomes easier to manage due to its modular structure.

Let's actually analyze in detail the benefits of the refactored code after introducing the "Repository Pattern in Python"
📌 Separation of Concerns:
In the refactored code, we distinctly separate the data access logic from the business logic.

Original Code:
- The BookDB class was directly responsible for both handling the data (via a pseudo-database) and the operations associated with the books.

Refactored Code:
- We introduced an interface IBookRepository , which only defines the contract for data operations.
- The InMemoryBookRepository class implements this interface and is responsible purely for data operations.
- The BookService class becomes the primary interface for consumers and handles the business logic, relying on the repository for data access.

This separation means changes to the data layer don't necessarily impact the business logic and
vice versa, ensuring modularity and readability.

📌 Flexibility in Data Source Management:


The original code directly worked with a pseudo-database (dictionary). Any change in data source
would mean altering the BookDB class.

Refactored Code:
- With the introduction of the IBookRepository interface, we can now have multiple implementations of data sources (e.g., SQL, NoSQL, APIs, etc.) without changing our service layer.
- This allows for flexibility in swapping or integrating new data sources.

📌 Improved Testability:

Testing was complex in the original code, as it required mocking the data source every time.

Refactored Code:
- The separation of concerns allows us to mock the IBookRepository interface and test the BookService class in isolation.
- This ensures that our unit tests are fast, consistent, and not reliant on the state of an external database.

📌 Enhanced Scalability:
In the original code, introducing new data sources or expanding functionalities would lead to
bloating of the BookDB class.

Refactored Code:
- Due to the modular structure, adding new functionalities or data sources becomes more manageable.
- If a new data source needs to be added, we only need to create a new class that implements the IBookRepository interface, without touching the existing code.

📌 Adherence to the Single Responsibility Principle:


The original BookDB class had multiple responsibilities, making it prone to changes for various
reasons.

Refactored Code:
- Each class has a distinct responsibility.
- InMemoryBookRepository handles data operations for an in-memory data source.
- BookService handles business logic.

This design ensures that each class changes only for one reason, thereby adhering to the Single
Responsibility Principle, a fundamental SOLID principle.

📌 Enhanced Maintainability:
Original Code:
- Combining business logic with data access logic can make future changes challenging, especially as the system grows.

Refactored Code:
- Due to the separation of concerns and a modular approach, the maintainability of the codebase improves.
- Developers can quickly identify where changes are needed and can be confident that changes in one section won't inadvertently affect another.

In summary, the refactored code using the "Repository Pattern in Python" addresses the issues of
tight coupling, testability, scalability, and maintainability present in the original code. The pattern
provides a clean, modular, and flexible architecture that's beneficial for large-scale, production-
grade applications.

Let's see another example WITHOUT and then WITH the "Repository Pattern in Python"
📌 Code Example Without Using Repository Pattern:
Let's consider a system that manages users and their roles within an organization.

class User:
    def __init__(self, id: int, name: str, role: str):
        self.id = id
        self.name = name
        self.role = role

class UserDB:
    def __init__(self):
        self._db = {}

    def add_user(self, user: User):
        self._db[user.id] = user

    def get_user(self, id: int) -> User:
        return self._db.get(id)

    def list_users(self) -> list:
        return list(self._db.values())

    def update_user(self, user: User):
        if user.id in self._db:
            self._db[user.id] = user

    def delete_user(self, id: int):
        if id in self._db:
            del self._db[id]

userDB = UserDB()
user = User(1, "Alice", "Engineer")
userDB.add_user(user)

📌 Issues With Above Code:


1. Tightly Coupled: Direct interaction with the pseudo-database. Changing the data source
would necessitate alterations in UserDB and wherever it's referenced.

2. Testing Difficulties: Embedded data access logic complicates unit testing.

3. Limited Scalability: Introducing new data sources/services would bloat UserDB .

4. Business and Data Logic Conflation: There's a lack of clear division between these logic
types.

📌 Refactored Code Using Repository Pattern:

from abc import ABC, abstractmethod

class User:
    def __init__(self, id: int, name: str, role: str):
        self.id = id
        self.name = name
        self.role = role

# Repository interface
class IUserRepository(ABC):

    @abstractmethod
    def add(self, user: User):
        pass

    @abstractmethod
    def get(self, id: int) -> User:
        pass

    @abstractmethod
    def list(self) -> list:
        pass

    @abstractmethod
    def update(self, user: User):
        pass

    @abstractmethod
    def delete(self, id: int):
        pass

# In-memory implementation of the repository
class InMemoryUserRepository(IUserRepository):

    def __init__(self):
        self._db = {}

    def add(self, user: User):
        self._db[user.id] = user

    def get(self, id: int) -> User:
        return self._db.get(id)

    def list(self) -> list:
        return list(self._db.values())

    def update(self, user: User):
        if user.id in self._db:
            self._db[user.id] = user

    def delete(self, id: int):
        if id in self._db:
            del self._db[id]

# Service layer using the repository
class UserService:

    def __init__(self, repo: IUserRepository):
        self._repo = repo

    def register_user(self, name: str, role: str) -> User:
        user_id = len(self._repo.list()) + 1
        user = User(user_id, name, role)
        self._repo.add(user)
        return user

userRepo = InMemoryUserRepository()
userService = UserService(userRepo)
new_user = userService.register_user("Bob", "Manager")

📌 Benefits of Refactored Code:


1. Decoupling: Business logic ( UserService ) is distinct from data access logic ( InMemoryUserRepository ).

2. Data Source Independence: With IUserRepository , we can effortlessly introduce new data implementations without disturbing the business layer.

3. Ease of Testing: Mocking IUserRepository allows isolated testing of UserService .

4. Modularity: The structured code promotes more effortless maintainability and scalability.

5. Clear Role Definition: Each class and interface has a well-defined purpose, promoting clarity
and reducing confusion.

With the Repository Pattern, the code is cleaner, more maintainable, and extensible, aligning with
best practices found in mature, production-grade systems.

Let's actually analyze in detail the benefits of the refactored code after introducing the "Repository Pattern in Python"
Let's break down the refactored code's components and explain how each addresses the issues in
the original implementation:

📌 Separation of Business Logic from Data Access Logic:


Original Code:
- In the initial design, the UserDB class was intertwined with business operations (like adding a user) and the storage mechanism.

Refactored Code:
- The repository pattern introduces a clear separation between the business operations and the data storage mechanism.
- UserService manages the operations like user registration.
- The InMemoryUserRepository class deals exclusively with the in-memory storage operations.

Benefits:
- This separation allows developers to modify business logic without affecting data operations and vice versa. It promotes a modular approach, making the code easier to maintain and understand.

📌 Abstraction Over Data Source:


Original Code:
- UserDB directly interacted with an in-memory pseudo-database. Switching to another database system would necessitate rewriting parts of this class.

Refactored Code:
- The introduction of the IUserRepository interface abstracts the data storage details. Any new storage mechanism, whether SQL, NoSQL, or an API, can be incorporated by implementing this interface.

Benefits:
- This abstraction provides flexibility, ensuring that the system can seamlessly adapt to changes or expansions in data storage mechanisms without altering the business logic.

📌 Facilitated Testing:
Original Code:
- The original design made unit testing challenging due to its integrated data access logic.

Refactored Code:
- With the IUserRepository interface, we can mock the data operations, enabling the testing of the UserService class in isolation.

Benefits:
- Tests become more focused, faster, and less prone to external disruptions, leading to a more reliable codebase.
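
That isolation can be sketched with the standard library's unittest.mock; Mock(spec=IUserRepository) produces a stand-in that honors the interface (calling a method the ABC does not define raises an AttributeError). The trimmed two-method interface below is an illustrative reduction of the chapter's code:

```python
from abc import ABC, abstractmethod
from unittest.mock import Mock

class User:
    def __init__(self, id: int, name: str, role: str):
        self.id = id
        self.name = name
        self.role = role

class IUserRepository(ABC):
    @abstractmethod
    def add(self, user: User): ...
    @abstractmethod
    def list(self) -> list: ...

class UserService:
    def __init__(self, repo: IUserRepository):
        self._repo = repo

    def register_user(self, name: str, role: str) -> User:
        user_id = len(self._repo.list()) + 1
        user = User(user_id, name, role)
        self._repo.add(user)
        return user

# A mock that honors the IUserRepository contract; no real storage involved.
mock_repo = Mock(spec=IUserRepository)
mock_repo.list.return_value = []          # pretend the store is empty

service = UserService(mock_repo)
new_user = service.register_user("Bob", "Manager")

assert new_user.id == 1                   # ID derived from the (mocked) list()
mock_repo.add.assert_called_once_with(new_user)
print("UserService tested in isolation")
```

The test verifies both the service's own logic (ID assignment) and its interaction with the repository, without ever touching a database.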

📌 Scalability and Maintainability:


Original Code:
- Expanding the original codebase's functionalities or adding new data sources would make the UserDB class more complex.

Refactored Code:
- The repository pattern ensures each component has a specific role. If a new data source or functionality needs to be added, the modular approach ensures minimal disruption to existing code.

Benefits:
- Scalability becomes more straightforward, and the risk of introducing errors during expansion is minimized. The system becomes more maintainable due to its structured and modular nature.

📌 Adherence to Single Responsibility Principle:


Original Code:
- The UserDB class had mixed responsibilities: managing data and handling business operations.

Refactored Code:
- Each class now has a clear responsibility. The InMemoryUserRepository handles data operations, and UserService handles business operations.

Benefits:
- This adherence to the Single Responsibility Principle, a core tenet of SOLID principles, ensures that the system remains robust. Each component has a clear purpose, reducing the likelihood of unintended side effects when making changes.

In summary, the refactored code employing the "Repository Pattern in Python" addresses the
original issues of tight coupling, testability, scalability, and mixed responsibilities. Adopting this
pattern leads to a more modular, maintainable, and resilient architecture suitable for large-scale,
professional applications.
