VOL 2 Python Design Pattern March 1 2025
📖 This book is focused 80% on code examples and 20% on theory, and this very large work of 1300+
pages will make you a pro Python Engineer if you sincerely complete it. Almost all of the code
examples are ones you will face in daily life as a Python Engineer, and/or are taken from actual interview
situations that I have meticulously collected over my many years in the industry as a Software Engineer.
For each and every code example, I analyze the code's output, with very detailed "under-the-hood"
explanations of the "how" and "why" of that output.
👉 If you really understand and master these few core fundamentals of a language, you will gain the
complete mental framework to understand and handle most real-life coding situations.
👉 Hence, I wrote this book to give you a solid grip on the core principles of Python that a
real-life Python Engineer needs on a daily basis.
👉 These are also the topics interviewers love to ask about to check your understanding of core
Python. If you can talk about these topics for half an hour, in most cases you will win the job.
Bridge Design Pattern in Python
While the adapter pattern is used to make unrelated classes work together, the bridge pattern is
designed upfront to decouple an implementation from its abstraction.
Using the bridge pattern is a good idea when you want to share an implementation among
multiple objects. Basically, instead of implementing several specialized classes, defining all that is
required within each class, you can define the following special components:
• An abstraction that applies to all the classes
• A separate interface for the different objects involved
📌 In the Bridge Design Pattern the main goal is to separate the high-level logic from the low-level
operations.
📌 Use Cases for the Bridge Design Pattern:
1. When you want to avoid a permanent binding between an abstraction and its implementation. This is
especially useful if the implementation must be selected or switched at runtime.
2. When both the abstractions and their implementations should be extensible by subclassing.
With the Bridge pattern, you can combine different abstractions and implementations and
extend them independently.
3. When you want to hide the implementation of an abstraction completely from clients. This is
beneficial when you need to share an implementation among multiple objects.
📌 Participants of the Bridge Design Pattern:
1. Abstraction: This defines the high-level interface and maintains a reference to an object of
type Implementor.
2. RefinedAbstraction: This extends the interface defined by Abstraction.
3. Implementor: This is the interface for the operations. Concrete classes implement this
interface.
4. ConcreteImplementor: This is a subclass of Implementor and implements the concrete
operations.
from abc import ABC, abstractmethod

class Implementation(ABC):
    @abstractmethod
    def operation_implementation(self) -> str:
        pass

class ConcreteImplementationA(Implementation):
    def operation_implementation(self) -> str:
        return "ConcreteImplementationA: Here's the result on the platform A."

class ConcreteImplementationB(Implementation):
    def operation_implementation(self) -> str:
        return "ConcreteImplementationB: Here's the result on the platform B."

class Abstraction:
    def __init__(self, implementation: Implementation) -> None:
        self.implementation = implementation

    def operation(self) -> str:
        return (f"Abstraction: Base operation with:\n"
                f"{self.implementation.operation_implementation()}")

class ExtendedAbstraction(Abstraction):
    def operation(self) -> str:
        return (f"ExtendedAbstraction: Extended operation with:\n"
                f"{self.implementation.operation_implementation()}")

def client_code(abstraction: Abstraction) -> None:
    print(abstraction.operation(), end="")
# ...
if __name__ == "__main__":
    implementation = ConcreteImplementationA()
    abstraction = Abstraction(implementation)
    client_code(abstraction)

    print("\n")

    implementation = ConcreteImplementationB()
    abstraction = ExtendedAbstraction(implementation)
    client_code(abstraction)
📌 Abstraction and Implementation: The core idea behind the Bridge pattern is to separate the
abstraction from its implementation so that both can evolve independently. In the provided code:
- Abstraction and ExtendedAbstraction represent the abstraction side.
- Implementation (and its concrete classes ConcreteImplementationA and ConcreteImplementationB ) represents
the implementation side.
📌 Abstraction Class: The Abstraction class defines an interface that maintains a reference to
an Implementation object. This reference allows the Abstraction to delegate the real work to
the Implementation object. The operation method in Abstraction is a higher-level method
that uses the primitive operation provided by the Implementation .
📌 Implementation Interface: The Implementation class is an abstract class (or interface) that
defines the method operation_implementation . This method is the primitive operation that all
concrete implementations must provide. The Abstraction relies on this method to perform its
tasks.
📌 Client Code and Composition: The client_code function demonstrates how the abstraction
works with the implementation. It takes an Abstraction object and calls its operation method.
The main point here is that the client code is decoupled from the specific implementation. It works
with any combination of Abstraction and Implementation .
In both cases, the client code remains unchanged, demonstrating the flexibility and decoupling
achieved by the Bridge pattern.
📌 Under-the-hood Theory: The Bridge pattern's primary goal is to decouple an abstraction from
its implementation, allowing both to vary independently. This is achieved by establishing a bridge
between the two hierarchies. In traditional OOP, we might have used inheritance to bind an
abstraction to its implementation. However, this tight coupling can lead to problems in large
systems, making them less flexible and harder to modify. The Bridge pattern addresses this by
using composition instead of inheritance. The abstraction maintains a reference to the
implementation, and any refined abstractions can work with any implementation, allowing for a
Cartesian product of combinations.
📌 Conclusion: The provided code effectively demonstrates the Bridge Design Pattern by
decoupling the abstraction ( Abstraction and ExtendedAbstraction ) from its implementation
( ConcreteImplementationA and ConcreteImplementationB ). This decoupling is achieved
through composition, where the abstraction maintains a reference to the implementation. The
client code, as a result, can work with any combination of abstraction and implementation,
showcasing the flexibility of the pattern.
class CircleRenderer:
    def render(self, radius):
        return f"Drawing a circle of radius {radius}"

class SquareRenderer:
    def render(self, side):
        return f"Drawing a square with side {side}"

class Circle:
    def __init__(self, radius):
        self.radius = radius
        self.renderer = CircleRenderer()

    def draw(self):
        return self.renderer.render(self.radius)

class Square:
    def __init__(self, side):
        self.side = side
        self.renderer = SquareRenderer()

    def draw(self):
        return self.renderer.render(self.side)
📌 Now, let's refactor the code using the Bridge Design Pattern:
# Implementor
class Renderer:
    def render_circle(self, radius):
        pass

    def render_square(self, side):
        pass

# Concrete Implementor
class VectorRenderer(Renderer):
    def render_circle(self, radius):
        return f"Drawing a vector circle of radius {radius}"

    def render_square(self, side):
        return f"Drawing a vector square with side {side}"

# Concrete Implementor
class RasterRenderer(Renderer):
    def render_circle(self, radius):
        return f"Drawing a raster circle of radius {radius}"

    def render_square(self, side):
        return f"Drawing a raster square with side {side}"

# Abstraction
class Shape:
    def __init__(self, renderer):
        self.renderer = renderer

# Refined Abstraction
class Circle(Shape):
    def __init__(self, renderer, radius):
        super().__init__(renderer)
        self.radius = radius

    def draw(self):
        return self.renderer.render_circle(self.radius)

# Refined Abstraction
class Square(Shape):
    def __init__(self, renderer, side):
        super().__init__(renderer)
        self.side = side

    def draw(self):
        return self.renderer.render_square(self.side)
📌 Explanation:
1. We have an Implementor ( Renderer ) that provides the interface for the rendering
operations ( render_circle and render_square ).
2. We have two Concrete Implementors ( VectorRenderer and RasterRenderer ) that provide the
actual rendering logic.
3. We have an Abstraction ( Shape ) that holds a reference to a Renderer , forming the bridge.
4. We have two Refined Abstractions ( Circle and Square ) that extend the Shape class and
use the Renderer to perform their drawing operations.
📌 Benefits:
1. Shapes and renderers are decoupled. We can easily add a new shape or a new renderer
without modifying the existing code.
2. The code is more maintainable and flexible. We can switch rendering methods on the fly.
📌 The Bridge Pattern allows us to separate the abstraction from its implementation, giving us the
flexibility to vary them independently. This is achieved by establishing a bridge between the two.
In our example, the bridge is the relationship between the Shape and the Renderer .
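To make the "switch on the fly" point concrete, here is a minimal, self-contained sketch (re-declaring trimmed versions of the classes above) that re-points a single Circle instance at a different renderer at runtime:

```python
# Minimal sketch of the refactored design: the same Circle instance
# can be re-pointed at a different renderer at runtime.
class VectorRenderer:
    def render_circle(self, radius):
        return f"Drawing a vector circle of radius {radius}"

class RasterRenderer:
    def render_circle(self, radius):
        return f"Drawing a raster circle of radius {radius}"

class Shape:
    def __init__(self, renderer):
        self.renderer = renderer

class Circle(Shape):
    def __init__(self, renderer, radius):
        super().__init__(renderer)
        self.radius = radius

    def draw(self):
        return self.renderer.render_circle(self.radius)

circle = Circle(VectorRenderer(), 5)
print(circle.draw())                 # Drawing a vector circle of radius 5

circle.renderer = RasterRenderer()   # switch the implementation on the fly
print(circle.draw())                 # Drawing a raster circle of radius 5
```

Because the shape only holds a reference to its renderer, swapping the renderer is a plain attribute assignment; no shape class is touched.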
Let's delve deep into how the refactored code using the Bridge Design Pattern addresses the issues present in the original code.
📌 Issue 1: Combinatorial Explosion of Classes
In the original code, for every new shape or rendering method we introduce, we would need to
create a new class. This means if we had 5 shapes and 3 rendering methods, we would end up
with 15 different classes (5 shapes x 3 renderers). This is not scalable and would quickly become
unmanageable as the number of shapes or renderers grows.
In the refactored code, if we want to add a new shape, we simply create a new class for that
shape, inheriting from the Shape abstraction. If we want to add a new rendering method, we
create a new renderer class inheriting from the Renderer interface. This way, the number of
classes grows linearly with the number of shapes or renderers, not exponentially.
The Bridge Pattern promotes loose coupling. In the refactored code, shapes and renderers are
decoupled. The Shape class (and its subclasses) doesn't directly call a specific rendering method.
Instead, it delegates the rendering to the Renderer interface. This allows us to easily switch
rendering methods on the fly without modifying the shape classes.
1. Loose Coupling: Shapes and renderers interact only through the Renderer interface, never
through concrete classes.
2. Flexibility: We can switch rendering methods for a shape without modifying its class.
3. Maintainability: With a clear separation of concerns, the code is easier to maintain. If there's
a change in how a shape is defined or how a rendering method works, it's isolated to that
specific class and doesn't affect others.
4. Reusability: Renderers can be reused across different shapes, and shapes can be rendered
using any renderer, promoting code reusability.
In essence, the Bridge Pattern has transformed a rigid structure into a flexible one, where shapes
and renderers can evolve independently without affecting each other.
📌 Example - 1 Real-life Use Case: Let's consider a scenario where we have different types of
devices (like a TV or a Radio) and different types of remote controls (like a Basic Remote or an
Advanced Remote). The Bridge pattern can be used to decouple the device type from the remote
type.
# Implementor: Device
class Device:
    def turn_on(self):
        pass

    def turn_off(self):
        pass

# ConcreteImplementor: TV
class TV(Device):
    def turn_on(self):
        print("Turning on the TV")

    def turn_off(self):
        print("Turning off the TV")

# ConcreteImplementor: Radio
class Radio(Device):
    def turn_on(self):
        print("Turning on the Radio")

    def turn_off(self):
        print("Turning off the Radio")

# Abstraction: RemoteControl
class RemoteControl:
    def __init__(self, device):
        self.device = device

    def operate(self):
        pass

# RefinedAbstraction: BasicRemote
class BasicRemote(RemoteControl):
    def operate(self):
        print("Using Basic Remote:")
        self.device.turn_on()
        self.device.turn_off()

# RefinedAbstraction: AdvancedRemote
class AdvancedRemote(RemoteControl):
    def operate(self):
        print("Using Advanced Remote:")
        self.device.turn_on()
        print("Setting volume to maximum using Advanced Remote")
        self.device.turn_off()

# Client Code
tv = TV()
basic_remote = BasicRemote(tv)
basic_remote.operate()

radio = Radio()
advanced_remote = AdvancedRemote(radio)
advanced_remote.operate()
📌 The beauty of this design is that if we introduce a new device or a new type of remote, we
don't need to change the existing classes. We can simply extend the appropriate abstraction or
implementor. This ensures that our code remains open for extension but closed for modification,
adhering to the Open-Closed Principle.
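As a quick sketch of that extension point, here is a trimmed re-declaration of the example's structure plus a hypothetical new device ( Speaker , not part of the example above), added without touching any remote class:

```python
# Trimmed re-declaration of the remote/device structure, plus a
# hypothetical new ConcreteImplementor (Speaker). No remote changes.
class Device:
    def turn_on(self):
        pass

    def turn_off(self):
        pass

class Speaker(Device):  # new device: just subclass Device
    def turn_on(self):
        print("Turning on the Speaker")

    def turn_off(self):
        print("Turning off the Speaker")

class RemoteControl:
    def __init__(self, device):
        self.device = device

class BasicRemote(RemoteControl):
    def operate(self):
        print("Using Basic Remote:")
        self.device.turn_on()
        self.device.turn_off()

# The existing BasicRemote drives the new device unchanged.
BasicRemote(Speaker()).operate()
```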
📌 Under-the-hood: The Bridge Pattern is essentially about preferring composition over
inheritance. Instead of having a monolithic hierarchy of classes where every combination of
attributes is represented as a class, you break down the attributes into separate hierarchies and
combine them using composition. This leads to a more modular, scalable, and maintainable
design.
Let's break down the provided code and see how it adheres to the principles and requirements of the Bridge Design Pattern:
📌 Decoupling Abstraction from Implementation:
- Abstraction: The RemoteControl class serves as the abstraction. It provides a high-level interface ( operate ) for all remote controls.
# Abstraction: RemoteControl
class RemoteControl:
    def __init__(self, device):
        self.device = device

    def operate(self):
        pass
- Implementation: The Device class is the implementation. It provides a basic interface for
all devices with methods like turn_on and turn_off .
# Implementor: Device
class Device:
    def turn_on(self):
        pass

    def turn_off(self):
        pass
In the code, the RemoteControl doesn't directly implement the device operations but instead has
a reference to a Device . This decouples the remote control's operations from the device's
operations.
The code allows both the remote controls and the devices to vary independently. For instance, you
can introduce a new type of remote or a new device without altering the existing classes.
📌 Separate Interface for Different Objects:
- The Device class provides a separate interface for devices with methods like turn_on and turn_off .
- The RemoteControl class provides a separate interface for remote controls with the operate method.
This ensures that the responsibilities and operations of remote controls and devices are clearly
separated and can evolve independently.
In both cases, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.
📌 Composition Over Inheritance: Instead of inheriting device behavior in the remote control
classes, the behavior is composed using a reference to the Device object. This promotes
flexibility and reduces the complexity that can arise from deep inheritance hierarchies.
In summary, the provided code effectively demonstrates the Bridge Design Pattern by decoupling
abstractions ( RemoteControl and its variants) from their implementations ( Device and its
variants), allowing them to vary independently, promoting composition over inheritance, and
ensuring extensibility in both dimensions.
In the context of the Bridge Design Pattern, if you want to enforce that every subclass of Device
must provide an implementation for turn_on and turn_off , then you should use the
@abstractmethod decorator.
Here's how you can modify the Device class to use @abstractmethod :
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def turn_on(self):
        pass

    @abstractmethod
    def turn_off(self):
        pass
By doing this, if someone tries to create an instance of a subclass of Device that doesn't
implement turn_on or turn_off , Python will raise a TypeError .
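A quick demonstration of that enforcement, using a deliberately broken hypothetical subclass ( IncompleteDevice ):

```python
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def turn_on(self):
        pass

    @abstractmethod
    def turn_off(self):
        pass

class IncompleteDevice(Device):
    def turn_on(self):          # turn_off is deliberately missing
        print("Turning on")

try:
    IncompleteDevice()          # raises TypeError: turn_off not implemented
except TypeError as e:
    print(f"TypeError: {e}")
```

The exact wording of the error message varies across Python versions, but instantiation always fails with a TypeError until every abstract method is implemented.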
📌 However, it's worth noting that the use of @abstractmethod is not strictly necessary for the
Bridge Design Pattern itself. It's more about enforcing a contract for subclasses in Python.
Whether or not to use it depends on the design goals. If you want to ensure that every device has
these methods implemented, then it's a good idea to use it. If you're okay with having default
implementations or just want to provide a template without enforcing its complete
implementation, then you might skip it.
📌 The Bridge Pattern's Role: The Bridge Pattern addresses this problem by separating the
different dimensions of variation, allowing them to evolve independently. Instead of having a class
for every combination, you have a class for every variation in each dimension, and you combine
them using composition.
📌 Example - 2 Real-life Use-case Code: Consider a UI system where you have different types of
controls (like Button, Checkbox) and different themes (like Dark, Light). Without the Bridge Pattern,
you might end up with classes like DarkButton, LightButton, DarkCheckbox, LightCheckbox, and so
on.
Let's see how the Bridge Pattern can prevent this complexity explosion:
from abc import ABC, abstractmethod

# Implementor: Theme
class Theme(ABC):
    @abstractmethod
    def apply(self, control_name: str) -> str:
        pass

class DarkTheme(Theme):
    def apply(self, control_name: str) -> str:
        return f"{control_name} with Dark Theme"

class LightTheme(Theme):
    def apply(self, control_name: str) -> str:
        return f"{control_name} with Light Theme"

# Abstraction: UI Control
class UIControl(ABC):
    def __init__(self, theme: Theme):
        self.theme = theme

    @abstractmethod
    def render(self) -> str:
        pass

class Button(UIControl):
    def render(self) -> str:
        return self.theme.apply("Button")

class Checkbox(UIControl):
    def render(self) -> str:
        return self.theme.apply("Checkbox")

# Client Code
dark_theme = DarkTheme()
button = Button(dark_theme)
print(button.render())  # Button with Dark Theme

light_theme = LightTheme()
checkbox = Checkbox(light_theme)
print(checkbox.render())  # Checkbox with Light Theme
📌 Explanation of the Code:
1. Theme is our Implementor. It represents the different themes a UI control can have.
2. DarkTheme and LightTheme are concrete implementations of the Theme. They provide the actual styling details for the controls.
3. UIControl is our Abstraction. It represents different types of UI controls.
4. Button and Checkbox are refined abstractions. They represent specific types of controls.
5. Instead of having a class for every combination (like DarkButton, LightButton, etc.), we have separate hierarchies for controls and themes. A control
maintains a reference to a theme, and when it's rendered, it uses the theme's styling.
📌 Under-the-hood Theory: The Bridge Pattern's power lies in its ability to decouple two
orthogonal dimensions, allowing them to evolve independently. In our example, the UI controls
and themes are orthogonal; any control can have any theme. By separating them into different
hierarchies and combining them using composition, we avoid the need for a class for every
combination. This is how the Bridge Pattern prevents the cartesian product complexity explosion.
If we were to add a new theme or a new control, we wouldn't need to modify existing classes or
create a multitude of new ones.
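A short sketch of that claim, using a hypothetical new theme ( HighContrastTheme ) alongside trimmed versions of the Theme / UIControl classes above — note that Button is untouched:

```python
from abc import ABC, abstractmethod

class Theme(ABC):
    @abstractmethod
    def apply(self, control_name: str) -> str:
        pass

class HighContrastTheme(Theme):  # hypothetical new theme, added later
    def apply(self, control_name: str) -> str:
        return f"{control_name} with High Contrast Theme"

class UIControl(ABC):
    def __init__(self, theme: Theme):
        self.theme = theme

    @abstractmethod
    def render(self) -> str:
        pass

class Button(UIControl):         # existing control, unchanged
    def render(self) -> str:
        return self.theme.apply("Button")

print(Button(HighContrastTheme()).render())  # Button with High Contrast Theme
```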
📌 Orthogonal Dimensions: In mathematics and computer science, when two things are
described as orthogonal, it means they are independent or unrelated. In the context of design
patterns and software architecture, when we say two dimensions are orthogonal, we mean that
changes in one dimension don't affect or depend on changes in the other dimension.
📌 UI Controls and Themes as Orthogonal Dimensions: Taking the example of UI controls (like
Button, Checkbox) and themes (like Dark, Light):
- A UI control's primary purpose is to provide a specific functionality: a button might initiate an action when clicked, and a checkbox might
represent a binary choice.
- A theme, on the other hand, deals with the appearance or styling of controls: a dark theme might have a black background with white text, while a light theme might
have a white background with black text.
The functionality of a button (its behavior when clicked) is independent of whether it's styled with
a dark or light theme. Similarly, the specifics of a dark theme's styling are independent of whether
it's applied to a button or a checkbox. This independence is what we mean when we say the UI
controls and themes are orthogonal.
📌 Bridge Pattern and Orthogonality: The Bridge Pattern shines in situations where you have
orthogonal dimensions. Instead of intertwining these dimensions (which would lead to the
cartesian product complexity explosion), the Bridge Pattern keeps them separate, allowing you to
combine any variation of one dimension with any variation of the other.
In our example:
- You can apply any theme (Dark, Light, etc.) to any UI control (Button, Checkbox, etc.) without having a class for every combination.
- If you introduce a new theme, you don't have to modify existing UI control classes. Conversely, if you introduce a new UI control, you don't have
to modify existing theme classes.
📌 Conclusion:
The Bridge Pattern is a powerful tool in the object-oriented design toolkit. It helps manage
complexity by preventing the cartesian product complexity explosion, which can quickly make a
system unmanageable. By understanding and applying this pattern, you can design systems that
are more modular, easier to understand, and simpler to maintain.
Scenario: Imagine you're designing software for keyboards. Keyboards can come in different
layouts (like QWERTY, AZERTY, DVORAK) and can connect to computers using different connection
types (like USB, Bluetooth, or Wireless RF).
Without the Bridge Pattern, you might end up with classes like QWERTYUSBKeyboard ,
AZERTYBluetoothKeyboard , DVORAKWirelessRFKeyboard , and so on. If you have 3 keyboard
layouts and 3 connection types, you'd end up with 3 x 3 = 9 classes.
from abc import ABC, abstractmethod

# Implementor: ConnectionType
class ConnectionType(ABC):
    @abstractmethod
    def connect(self) -> str:
        pass

class USBConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using USB."

class BluetoothConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using Bluetooth."

class WirelessRFConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using Wireless RF."

# Abstraction: KeyboardLayout
class KeyboardLayout(ABC):
    def __init__(self, connection: ConnectionType):
        self.connection = connection

    @abstractmethod
    def type(self) -> str:
        pass

class QWERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on QWERTY layout. {self.connection.connect()}"

class AZERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on AZERTY layout. {self.connection.connect()}"

class DVORAKKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on DVORAK layout. {self.connection.connect()}"

# Client Code
usb = USBConnection()
keyboard1 = QWERTYKeyboard(usb)
print(keyboard1.type())  # Typing on QWERTY layout. Connected using USB.

bluetooth = BluetoothConnection()
keyboard2 = AZERTYKeyboard(bluetooth)
print(keyboard2.type())  # Typing on AZERTY layout. Connected using Bluetooth.
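To see the linear-versus-cartesian growth concretely, the client can pair every layout with every connection at runtime. A compact sketch (re-declaring two layouts and two connections from the example above):

```python
from abc import ABC, abstractmethod
from itertools import product

class ConnectionType(ABC):
    @abstractmethod
    def connect(self) -> str:
        pass

class USBConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using USB."

class BluetoothConnection(ConnectionType):
    def connect(self) -> str:
        return "Connected using Bluetooth."

class KeyboardLayout(ABC):
    def __init__(self, connection: ConnectionType):
        self.connection = connection

    @abstractmethod
    def type(self) -> str:
        pass

class QWERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on QWERTY layout. {self.connection.connect()}"

class AZERTYKeyboard(KeyboardLayout):
    def type(self) -> str:
        return f"Typing on AZERTY layout. {self.connection.connect()}"

layouts = [QWERTYKeyboard, AZERTYKeyboard]
connections = [USBConnection, BluetoothConnection]

# 2 layout classes + 2 connection classes yield 2 x 2 = 4 combinations,
# composed at runtime rather than hard-coded as 4 separate classes.
for layout_cls, conn_cls in product(layouts, connections):
    print(layout_cls(conn_cls()).type())
```

With the full example's 3 layouts and 3 connections, the same loop would produce all 9 combinations from only 6 classes.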
📌 Conclusion: The provided code effectively demonstrates the Bridge Design Pattern's power in
preventing the cartesian product complexity explosion. By recognizing and separating orthogonal
dimensions (keyboard layouts and connection types), we've designed a system that's flexible,
scalable, and maintainable.
Here's how we can model this scenario using the Bridge Design Pattern:
from abc import ABC, abstractmethod

# Implementor: DeviceDriver
class DeviceDriver(ABC):
    @abstractmethod
    def connect(self):
        pass

    @abstractmethod
    def disconnect(self):
        pass

# ConcreteImplementor: PrinterDriver
class PrinterDriver(DeviceDriver):
    def connect(self):
        print("Connecting to the printer...")

    def disconnect(self):
        print("Disconnecting from the printer...")

# ConcreteImplementor: ScannerDriver
class ScannerDriver(DeviceDriver):
    def connect(self):
        print("Connecting to the scanner...")

    def disconnect(self):
        print("Disconnecting from the scanner...")

# Abstraction: OperatingSystem
class OperatingSystem:
    def __init__(self, driver):
        self.driver = driver

    def detect_device(self):
        print("Detecting device...")
        self.driver.connect()

    def eject_device(self):
        print("Ejecting device...")
        self.driver.disconnect()

# RefinedAbstraction: WindowsOS
class WindowsOS(OperatingSystem):
    def detect_device(self):
        print("Windows OS:")
        super().detect_device()

# RefinedAbstraction: MacOS
class MacOS(OperatingSystem):
    def detect_device(self):
        print("Mac OS:")
        super().detect_device()

# Client Code
printer = PrinterDriver()
windows = WindowsOS(printer)
windows.detect_device()
windows.eject_device()

scanner = ScannerDriver()
mac = MacOS(scanner)
mac.detect_device()
mac.eject_device()
# Abstraction: OperatingSystem
class OperatingSystem:
    def __init__(self, driver):
        self.driver = driver

    def detect_device(self):
        print("Detecting device...")
        self.driver.connect()

    def eject_device(self):
        print("Ejecting device...")
        self.driver.disconnect()
# Implementor: DeviceDriver
class DeviceDriver(ABC):
    @abstractmethod
    def connect(self):
        pass

    @abstractmethod
    def disconnect(self):
        pass
In the code, the OperatingSystem doesn't directly implement the device operations but instead
has a reference to a DeviceDriver . This decouples the OS's operations from the device driver's
operations.
# ConcreteImplementor: PrinterDriver
class PrinterDriver(DeviceDriver):
    def connect(self):
        print("Connecting to the printer...")

    def disconnect(self):
        print("Disconnecting from the printer...")

# ConcreteImplementor: ScannerDriver
class ScannerDriver(DeviceDriver):
    def connect(self):
        print("Connecting to the scanner...")

    def disconnect(self):
        print("Disconnecting from the scanner...")
The code allows both the operating systems and the device drivers to vary independently. For
instance, you can introduce a new type of OS or a new device driver without altering the existing
classes.
# RefinedAbstraction: WindowsOS
class WindowsOS(OperatingSystem):
    def detect_device(self):
        print("Windows OS:")
        super().detect_device()

# RefinedAbstraction: MacOS
class MacOS(OperatingSystem):
    def detect_device(self):
        print("Mac OS:")
        super().detect_device()
📌 Separate Interface for Different Objects:
- The DeviceDriver class provides a separate interface for device drivers with methods like connect and disconnect .
- The OperatingSystem class provides a separate interface for operating systems with methods like detect_device and eject_device .
This ensures that the responsibilities and operations of operating systems and device drivers are
clearly separated and can evolve independently.
📌 Extensibility: The design is extensible in both the abstraction and implementation dimensions:
- If you want to introduce a new type of OS, you can simply create a new class extending OperatingSystem .
- If you want to introduce a new device driver, you can create a new class implementing the DeviceDriver interface.
In both cases, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.
📌 Composition Over Inheritance: Instead of inheriting device driver behavior in the OS classes,
the behavior is composed using a reference to the DeviceDriver object. This promotes flexibility
and reduces the complexity that can arise from deep inheritance hierarchies.
📌 Real-world Analogy: The analogy of an operating system communicating with device drivers is
a fitting real-world example. In actual OS designs, the OS doesn't need to know the specifics of
every device. Instead, it communicates through a standardized interface (like our DeviceDriver ),
and the specifics are handled by the device drivers. This allows new devices to be added without
changing the OS, and new OS versions can be released without changing the drivers.
In summary, the provided code effectively demonstrates the Bridge Design Pattern by decoupling
abstractions ( OperatingSystem and its variants) from their implementations ( DeviceDriver and
its variants), allowing them to vary independently, promoting composition over inheritance, and
ensuring extensibility in both dimensions.
📌 abstractmethod: This specific decorator, when applied to a method within a class, designates
that method as being abstract. An abstract method is a method that is declared but does not have
an implementation within the class it's declared in.
📌 Must be overridden: If a class has an abstract method, it means that any subclass (or derived
class) that is intended to be instantiated (i.e., you want to create objects of that subclass) must
provide an implementation for this abstract method. If it doesn't, Python will raise a TypeError
when you try to create an instance of that subclass.
📌 Non-abstract derived class: A derived class (or subclass) that provides implementations for
all the abstract methods of its base class is termed as non-abstract. If a derived class does not
provide implementations for all the abstract methods, it remains abstract, and you can't create
instances of it.
Example 5 - Real-life use case of the Bridge Design Pattern in Python: the world of multimedia players and different file formats
Let's explore an example involving the world of multimedia players and different file formats.
Imagine we're designing a multimedia system where we have different types of players (like a
Video Player or an Audio Player) and different file formats (like MP3, MP4, or WAV). The Bridge
pattern can be used to decouple the player type from the file format.
Here's how we can model this scenario using the Bridge Design Pattern:
from abc import ABC, abstractmethod

# Implementor: MediaFile
class MediaFile(ABC):
    @abstractmethod
    def play(self):
        pass

# ConcreteImplementor: MP3File
class MP3File(MediaFile):
    def play(self):
        print("Playing MP3 file...")

# ConcreteImplementor: MP4File
class MP4File(MediaFile):
    def play(self):
        print("Playing MP4 video...")

# ConcreteImplementor: WAVFile
class WAVFile(MediaFile):
    def play(self):
        print("Playing WAV audio...")

# Abstraction: MediaPlayer
class MediaPlayer:
    def __init__(self, media_file):
        self.media_file = media_file

    def play_media(self):
        pass

# RefinedAbstraction: VideoPlayer
class VideoPlayer(MediaPlayer):
    def play_media(self):
        print("Using Video Player:")
        self.media_file.play()

# RefinedAbstraction: AudioPlayer
class AudioPlayer(MediaPlayer):
    def play_media(self):
        print("Using Audio Player:")
        self.media_file.play()

# Client Code
mp3 = MP3File()
audio_player = AudioPlayer(mp3)
audio_player.play_media()

mp4 = MP4File()
video_player = VideoPlayer(mp4)
video_player.play_media()

wav = WAVFile()
audio_player2 = AudioPlayer(wav)
audio_player2.play_media()
📌 Description of the Example Code:
1. MediaFile is the abstract interface (Implementor) that all media files must adhere to. It has a method play to play the media.
2. MP3File, MP4File, and WAVFile are concrete implementations (ConcreteImplementors) of the MediaFile interface. They provide the actual logic to play their respective file types.
3. MediaPlayer is the abstraction that communicates with the media files. It provides a method play_media to play the media using a specific player.
4. VideoPlayer and AudioPlayer are refined abstractions that extend the base MediaPlayer abstraction. They provide specialized ways to play media based on the player type.
5. In the client code, we create instances of different media files. We then use different players to play these files.
📌 This design allows the media player to play any file without knowing the specifics of the file
format. If a new file format is introduced, the player doesn't need to change. Only a new file
format class (conforming to the MediaFile interface) needs to be added. This decoupling is the
essence of the Bridge Design Pattern.
📌 Complexity: This design can be further expanded by adding features like pause , stop , or
rewind to the players and file formats. We can also introduce more refined abstractions like
StreamingPlayer or LocalPlayer and more file formats like AVI or FLAC . The Bridge pattern
will ensure that the design remains modular and maintainable as it grows.
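A brief sketch of one such expansion: the Implementor grows a hypothetical pause operation and a hypothetical FLACFile format (neither is part of the example above), and every player gains pausing through the bridge without any combinatorial new classes:

```python
from abc import ABC, abstractmethod

# Hypothetical extension: MediaFile gains a pause operation, and a new
# FLAC format is added; players delegate through the bridge as before.
class MediaFile(ABC):
    @abstractmethod
    def play(self):
        pass

    @abstractmethod
    def pause(self):
        pass

class FLACFile(MediaFile):  # hypothetical new format
    def play(self):
        print("Playing FLAC audio...")

    def pause(self):
        print("Pausing FLAC audio...")

class MediaPlayer:
    def __init__(self, media_file):
        self.media_file = media_file

class AudioPlayer(MediaPlayer):
    def play_media(self):
        print("Using Audio Player:")
        self.media_file.play()

    def pause_media(self):
        self.media_file.pause()

player = AudioPlayer(FLACFile())
player.play_media()
player.pause_media()
```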
# Implementor: MediaFile
class MediaFile(ABC):
    @abstractmethod
    def play(self):
        pass
In the code, the MediaPlayer doesn't directly implement the media playing operations but
instead has a reference to a MediaFile . This decouples the media player's operations from the
media file's operations.
# RefinedAbstraction: VideoPlayer
class VideoPlayer(MediaPlayer):
    def play_media(self):
        print("Using Video Player:")
        self.media_file.play()

# RefinedAbstraction: AudioPlayer
class AudioPlayer(MediaPlayer):
    def play_media(self):
        print("Using Audio Player:")
        self.media_file.play()

# ConcreteImplementor: MP3File
class MP3File(MediaFile):
    def play(self):
        print("Playing MP3 file...")

# ConcreteImplementor: MP4File
class MP4File(MediaFile):
    def play(self):
        print("Playing MP4 video...")

# ConcreteImplementor: WAVFile
class WAVFile(MediaFile):
    def play(self):
        print("Playing WAV audio...")
The code allows both the media players and the media file formats to vary independently. For
instance, you can introduce a new type of player or a new file format without altering the existing
classes.
📌 Separate Interface for Different Objects: - The MediaFile class provides a separate
interface for media file formats with a method like play .
The MediaPlayer class provides a separate interface for media players with a method like
play_media .
from abc import ABC, abstractmethod

# Implementor: MediaFile
class MediaFile(ABC):
    @abstractmethod
    def play(self):
        pass

# Abstraction: MediaPlayer (holds a reference to a MediaFile)
class MediaPlayer:
    def __init__(self, media_file):
        self.media_file = media_file

    def play_media(self):
        self.media_file.play()
This ensures that the responsibilities and operations of media players and media file formats are
clearly separated and can evolve independently.
In both cases, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.
📌 Composition Over Inheritance: Instead of inheriting media file behavior in the media player
classes, the behavior is composed using a reference to the MediaFile object. This promotes
flexibility and reduces the complexity that can arise from deep inheritance hierarchies.
📌 Real-world Analogy: The analogy of media players and file formats is a fitting real-world
example. In actual multimedia systems, a player doesn't need to know the specifics of every file
format. Instead, it communicates through a standardized interface (like our MediaFile ), and the
specifics are handled by the file format implementations. This allows new file formats to be added
without changing the player, and new player versions can be released without changing the file
formats.
In summary, the provided code effectively demonstrates the Bridge Design Pattern by decoupling
abstractions ( MediaPlayer and its variants) from their implementations ( MediaFile and its
variants), allowing them to vary independently, promoting composition over inheritance, and
ensuring extensibility in both dimensions.
Here's how we can model this scenario using the Bridge Design Pattern:
from abc import ABC, abstractmethod

# Implementor: Renderer
class Renderer(ABC):
    @abstractmethod
    def render(self, shape_name: str):
        pass
# ConcreteImplementor: OpenGLRenderer
class OpenGLRenderer(Renderer):
    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using OpenGL.")

# ConcreteImplementor: DirectXRenderer
class DirectXRenderer(Renderer):
    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using DirectX.")
# Abstraction: Shape (holds a reference to a Renderer)
class Shape(ABC):
    def __init__(self, renderer):
        self.renderer = renderer

    @abstractmethod
    def draw(self):
        pass
# RefinedAbstraction: Circle
class Circle(Shape):
def draw(self):
self.renderer.render("Circle")
# RefinedAbstraction: Rectangle
class Rectangle(Shape):
def draw(self):
self.renderer.render("Rectangle")
# Client Code
opengl = OpenGLRenderer()
circle = Circle(opengl)
circle.draw()
directx = DirectXRenderer()
rectangle = Rectangle(directx)
rectangle.draw()
circle2 = Circle(directx)
circle2.draw()
📌 Description of the Example Code: 1. Renderer is the abstract interface (Implementor) that all
rendering engines must adhere to. It has a method render to render a given shape. 2.
OpenGLRenderer and DirectXRenderer are concrete implementations (ConcreteImplementors)
of the Renderer interface. They provide the actual logic to render shapes using their respective
graphics engines. 3. Shape is the abstraction that communicates with the rendering engines. It
provides a method draw to draw the shape using a specific renderer. 4. Circle and Rectangle are
refined abstractions that extend the base Shape abstraction. They provide specialized ways to
draw themselves. 5. In the client code, we create instances of different rendering engines. We then
use different shapes to draw themselves using these engines.
📌 This design allows the shape to be drawn using any rendering engine without knowing the
specifics of the rendering method. If a new rendering engine or a new shape is introduced, the
existing classes don't need to change. This decoupling is the essence of the Bridge Design Pattern.
📌 Complexity: This design can be further expanded by adding features like shading, coloring, or
transformations to the shapes and rendering engines. We can also introduce more refined
abstractions like Triangle or Polygon and more rendering engines like Vulkan . The Bridge
pattern will ensure that the design remains modular and maintainable as it grows.
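📌 As a sketch of that kind of extension (the Triangle and VulkanRenderer classes here are illustrative additions, not part of the book's code), note that neither new class requires touching the existing ones:

```python
from abc import ABC, abstractmethod

# Existing hierarchy, condensed from the example above
class Renderer(ABC):
    @abstractmethod
    def render(self, shape_name: str):
        pass

class Shape(ABC):
    def __init__(self, renderer):
        self.renderer = renderer

    @abstractmethod
    def draw(self):
        pass

# New ConcreteImplementor: added without modifying Shape or its subclasses
class VulkanRenderer(Renderer):
    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using Vulkan.")

# New RefinedAbstraction: added without modifying any Renderer
class Triangle(Shape):
    def draw(self):
        self.renderer.render("Triangle")

Triangle(VulkanRenderer()).draw()  # Rendering Triangle using Vulkan.
```

Both dimensions grew by one class each, and no existing class changed, which is exactly the extensibility the Bridge pattern is designed to provide.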
from abc import ABC, abstractmethod

# Abstraction: Shape
class Shape(ABC):
    def __init__(self, renderer):
        self.renderer = renderer

    @abstractmethod
    def draw(self):
        pass

# Implementor: Renderer
class Renderer(ABC):
    @abstractmethod
    def render(self, shape_name: str):
        pass
In the code, the Shape doesn't directly handle the specifics of rendering. Instead, it delegates this
responsibility to the Renderer . This ensures that the shape's drawing operations are decoupled
from the specifics of the rendering operations.
# RefinedAbstraction: Circle
class Circle(Shape):
def draw(self):
self.renderer.render("Circle")
# RefinedAbstraction: Rectangle
class Rectangle(Shape):
def draw(self):
self.renderer.render("Rectangle")
# ConcreteImplementor: OpenGLRenderer
class OpenGLRenderer(Renderer):
    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using OpenGL.")

# ConcreteImplementor: DirectXRenderer
class DirectXRenderer(Renderer):
    def render(self, shape_name: str):
        print(f"Rendering {shape_name} using DirectX.")
The design allows both the shapes and the rendering engines to evolve independently. For
instance, if we want to introduce a new type of shape or a new rendering engine, we can do so
without altering the existing classes.
# Implementor: Renderer
class Renderer(ABC):
@abstractmethod
def render(self, shape_name: str):
pass
# RefinedAbstraction: Rectangle
class Rectangle(Shape):
def draw(self):
self.renderer.render("Rectangle")
📌 Separate Interface for Different Objects: - The Renderer class provides a distinct interface
for rendering engines with methods like render . - The Shape class offers a separate interface for
shapes with methods like draw .
This ensures that the responsibilities and operations of shapes and rendering engines are clearly
separated, allowing each to evolve independently.
In both scenarios, you don't need to modify the existing classes, adhering to the Open-Closed
Principle.
📌 Composition Over Inheritance: Rather than inheriting rendering behavior in the shape
classes, the behavior is composed using a reference to the Renderer object. This promotes
flexibility and reduces the complexity that can arise from deep inheritance hierarchies.
📌 Real-world Analogy: In real-world graphics systems, a shape doesn't need to know the
specifics of every rendering engine. Instead, it communicates through a standardized interface
(like our Renderer ), and the specifics are handled by the rendering engine implementations. This
allows new rendering engines to be added without changing the shape, and new shape types can
be introduced without changing the rendering engines.
In conclusion, the provided code effectively embodies the Bridge Design Pattern by decoupling
abstractions ( Shape and its variants) from their implementations ( Renderer and its variants),
allowing them to vary independently, promoting composition over inheritance, and ensuring
extensibility in both dimensions.
It's a behavioral design pattern that lets you reduce chaotic dependencies between objects. The
pattern restricts direct communications between the objects and forces them to collaborate only
via a mediator object.
📌 At its core, the Mediator Design Pattern is about promoting loose coupling between objects.
The mediator knows about each colleague object and facilitates the interaction between them.
📌 Why Use the Mediator Pattern?: - It simplifies object communication. Objects no longer need
to know the details of communication with other objects. - It centralizes external communications.
If you need to change the way objects talk to each other, you only have to update the mediator. - It
promotes a single responsibility principle. Objects focus on their own logic, while the mediator
takes care of communication logic.
📌 Real-life Analogy: Think of an air traffic control tower at an airport. Planes don't communicate
directly with each other to decide who lands first or which runway to use. Instead, they
communicate with the control tower, which ensures that planes land and take off without any
incidents.
📌 Use Cases: - Chat applications where the server (mediator) handles messages and broadcasts
them to clients. - GUI where buttons and actions are coordinated through a central controller. -
Workflow systems where tasks are orchestrated through a central process.
class Alarm:
    def start(self):
        print("Alarm is set!")
        coffee_machine = CoffeeMachine()
        coffee_machine.start()

class CoffeeMachine:
    def start(self):
        print("Coffee is brewing!")
        curtains = Curtains()
        curtains.open()

class Curtains:
    def open(self):
        print("Curtains are opened!")

alarm = Alarm()
alarm.start()
You'll get:
Alarm is set!
Coffee is brewing!
Curtains are opened!
Violation of Single Responsibility Principle: Each class is doing more than one thing. For
instance, the Alarm class is also responsible for starting the coffee machine.
class Mediator:
    def __init__(self):
        self.alarm = Alarm(self)
        self.coffee_machine = CoffeeMachine(self)
        self.curtains = Curtains(self)

    def trigger_alarm(self):
        self.alarm.start()

    def brew_coffee(self):
        self.coffee_machine.start()

    def open_curtains(self):
        self.curtains.open()

class Alarm:
    def __init__(self, mediator):
        self.mediator = mediator

    def start(self):
        print("Alarm is set!")
        self.mediator.brew_coffee()  # ask the mediator, not the machine itself

class CoffeeMachine:
    def __init__(self, mediator):
        self.mediator = mediator

    def start(self):
        print("Coffee is brewing!")
        self.mediator.open_curtains()  # ask the mediator, not the curtains

class Curtains:
    def __init__(self, mediator):
        self.mediator = mediator

    def open(self):
        print("Curtains are opened!")

mediator = Mediator()
mediator.trigger_alarm()
Alarm is set!
Coffee is brewing!
Curtains are opened!
📌 Benefits of this refactored code: - Loose Coupling: The classes are now decoupled from each
other. They only communicate through the mediator, making the system more flexible and
maintainable. - Single Responsibility Principle: Each class now has a single responsibility. The
Alarm class, for instance, only deals with the alarm and not the coffee machine.
📌 Under-the-hood:
The Mediator Design Pattern promotes the use of a single object to handle communication
between different classes. This central object is known as the mediator. By doing so, it decouples
the classes, leading to a system where components are easier to understand, maintain, and
extend. The mediator becomes the only component that needs to know about the internals of
other classes, ensuring that changes in one class don't affect others. This pattern is particularly
useful in scenarios where multiple classes are interdependent, leading to a complex web of
relationships. By centralizing external communications, the Mediator pattern prevents this web of
relationships, making the system more modular.
Let's delve into the details of how the refactored code using
the Mediator Design Pattern addresses the issues present in
the original code.
📌 Issue 1: Tight Coupling
In the original code, each class was directly dependent on another class. For instance: - The Alarm
class directly instantiated and called a method on the CoffeeMachine class. - The CoffeeMachine
class directly instantiated and called a method on the Curtains class.
This tight coupling means that if you wanted to change the behavior of one class, it could
potentially affect the others. For example, if you wanted the CoffeeMachine to not interact with
the Curtains , you'd have to modify the CoffeeMachine class directly, which is not ideal.
In the refactored code, each class talks only to the mediator. For instance: - When the Alarm
is started, it informs the mediator to brew coffee. It doesn't need to know about the
CoffeeMachine class directly. - The CoffeeMachine , when started, informs the mediator to
open the curtains. Again, it doesn't directly know about the Curtains class.
This decouples the classes from each other, making the system more flexible. If you now wanted
to change the behavior of the CoffeeMachine , you'd only need to modify the mediator's behavior,
leaving the CoffeeMachine class untouched.
📌 Issue 2: Violation of the Single Responsibility Principle
In the original code, each class also orchestrated the behavior of other devices. This mixing of responsibilities makes the code harder to maintain and understand.
The responsibility of coordinating these actions is now with the mediator. This ensures that each
class adheres to the Single Responsibility Principle, making the system more modular and easier
to maintain.
📌 Conclusion:
The Mediator Design Pattern effectively addresses the issues of tight coupling and violation of the
Single Responsibility Principle by centralizing the communication between classes. This not only
makes the system more maintainable and flexible but also ensures that each class remains
focused on its primary responsibility.
class ChatRoomMediator:
    def __init__(self):
        self.users = []
    def add_user(self, user):
        self.users.append(user)
    def broadcast(self, message, sender):
        # Deliver the message to everyone except the sender
        for user in self.users:
            if user is not sender:
                user.receive(message)

class User:
    def __init__(self, name, chat_mediator):
        self.name = name
        self.chat_mediator = chat_mediator
        chat_mediator.add_user(self)
    def send(self, message):
        print(f"{self.name} sends: {message}")
        self.chat_mediator.broadcast(message, self)
    def receive(self, message):
        print(f"{self.name} receives: {message}")
# Usage
mediator = ChatRoomMediator()
john = User("John", mediator)
alice = User("Alice", mediator)
john.send("Hello, Alice!")
alice.send("Hey, John!")
📌 Code Explanation: - ChatRoomMediator is the mediator class. It keeps track of all users in the
chat room and handles message broadcasting. - User class represents individual users. Each user
has a reference to the mediator to send messages. - When a user sends a message, it's passed to
the mediator. The mediator then broadcasts the message to all other users. - In the usage
example, John sends a message to Alice through the mediator, and Alice responds back. The
mediator ensures that the message is delivered without John and Alice communicating directly.
📌 Under-the-hood: When you look at the Mediator pattern, it might seem like an overhead. Why
introduce an additional layer when objects can communicate directly? But the beauty lies in the
separation of concerns. The mediator encapsulates the communication logic, making the system
more modular. If you need to change the way communication happens, you only touch the
mediator, not every individual object. This is in line with the Open/Closed Principle: software
entities should be open for extension but closed for modification.
📌 Final Thoughts: The Mediator pattern is powerful when you have a set of objects that need to
communicate in complex ways, but you want to avoid tight coupling. By centralizing external
communications, you make the system more maintainable and adaptable to change.
Scenario:
A GUI has a TextBox where users type a message, a Button they click to submit the message,
and a Label that displays the most recent message. Without the Mediator, each widget would
need to know about the others, leading to tight coupling.
class Button:
    def __init__(self, text_box, label):
        self.text_box = text_box
        self.label = label

    def click(self):
        self.label.display(self.text_box.text)

class TextBox:
    def __init__(self):
        self.text = ""

    def set_text(self, text):
        self.text = text

class Label:
    def __init__(self):
        self.displayed_text = ""

    def display(self, text):
        self.displayed_text = text
        print(f"Label: {self.displayed_text}")

# Usage:
textbox = TextBox()
label = Label()
button = Button(textbox, label)
textbox.set_text("Hello from the GUI!")
button.click()  # Outputs: "Label: Hello from the GUI!"
📌 Issues: - Button is tightly coupled with both TextBox and Label . - If we introduce another
widget, say LogBox to log all messages, then Button will need to change. - Widgets are not
focused on their single responsibility; they are also aware of other widget's behaviors.
from abc import ABC, abstractmethod

# Mediator interface
class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event):
        pass

# Concrete Mediator
class GuiMediator(Mediator):
    def __init__(self, text_box, button, label):
        self.text_box = text_box
        self.button = button
        self.label = label
        self.text_box.mediator = self
        self.button.mediator = self
        self.label.mediator = self

    def notify(self, sender, event):
        # All interaction logic lives here, in one place
        if sender is self.button and event == "click":
            self.label.display(self.text_box.text)

class Button:
    def __init__(self):
        self.mediator = None

    def click(self):
        self.mediator.notify(self, "click")

class TextBox:
    def __init__(self):
        self.text = ""
        self.mediator = None

    def set_text(self, text):
        self.text = text

class Label:
    def __init__(self):
        self.displayed_text = ""
        self.mediator = None

    def display(self, text):
        self.displayed_text = text
        print(f"Label: {self.displayed_text}")
# Usage:
textbox = TextBox()
button = Button()
label = Label()
mediator = GuiMediator(textbox, button, label)
textbox.set_text("Hello using Mediator!")
button.click() # Outputs: "Label: Hello using Mediator!"
📌 Benefits:
📌 The Mediator Pattern works as a centralized communication hub for various objects, ensuring
that these objects do not communicate with each other directly. Let's analyze how this refactored
code solves the initial problems:
1. Decoupling of Components:
In the original code, the Button was directly coupled with both the TextBox and
Label . This means the Button had to be aware of the implementations of both these
classes, and any changes to them would potentially affect the Button .
With the Mediator pattern in place, the Button , TextBox , and Label no longer
communicate with each other directly. Instead, they are aware of the mediator, and they
notify the mediator when something of interest happens. This drastically reduces the
dependencies between the individual components.
In our refactored code, the Button only notifies the mediator about its click event. It
doesn't need to know what happens next or which components need to be informed.
2. Ease of Extensibility:
Consider we want to introduce a new widget, say a LogBox , to log every message
submitted. In the non-mediator design, this would necessitate changes in the Button
class (or wherever the central logic of handling a button click resides).
With the mediator in place, introducing such a widget would only involve changes within
the mediator, making the system more maintainable. Components remain unchanged,
preserving the Open/Closed Principle (i.e., a module should be open for extension but
closed for modification).
3. Single Responsibility Principle:
In the original design, the Button had multiple responsibilities. It had its own behavior
(being clicked) and also controlled how other objects (like Label ) responded.
The refactored code respects SRP more closely. Each component is responsible for its
own behavior. The logic of how components interact or how events are handled is
moved to the mediator. This separation ensures that if there's a bug in the interaction
logic, you'd look into the mediator, and if there's an issue with how a button works,
you'd inspect the Button class.
4. Easier Debugging:
Since all interactions pass through the mediator, logging or monitoring
inter-component communications can be done at a single place.
5. Flexibility in Event Handling:
In the initial design, the response to an event (like the button click) was hardcoded. In
the refactored design, the mediator can be programmed to handle various events in
flexible ways. For instance, it might decide to not update the Label if the TextBox is
empty, without requiring any changes to either the Button or the Label .
📌 In conclusion, the Mediator pattern in the refactored code promotes a more maintainable,
scalable, and decoupled design. It offers a clear separation of concerns and encapsulates the
interaction logic between components within the mediator, making it easier to modify, extend,
and manage, especially in large-scale software systems.
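📌 To make the extensibility point concrete, here is a sketch of adding a hypothetical LogBox widget: only the mediator's wiring changes, while Button , TextBox , and Label stay untouched (the LogBox and GuiMediatorWithLog names are illustrative, not from the example above):

```python
from abc import ABC, abstractmethod

class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event):
        pass

class Button:
    def __init__(self):
        self.mediator = None
    def click(self):
        self.mediator.notify(self, "click")

class TextBox:
    def __init__(self):
        self.text = ""

class Label:
    def __init__(self):
        self.displayed_text = ""
    def display(self, text):
        self.displayed_text = text
        print(f"Label: {self.displayed_text}")

# Hypothetical new widget: no other widget has to know it exists
class LogBox:
    def __init__(self):
        self.log = []
    def record(self, message):
        self.log.append(message)

class GuiMediatorWithLog(Mediator):
    def __init__(self, text_box, button, label, log_box):
        self.text_box, self.label, self.log_box = text_box, label, log_box
        button.mediator = self
    def notify(self, sender, event):
        if event == "click":
            self.label.display(self.text_box.text)
            self.log_box.record(self.text_box.text)  # the only new line of wiring

textbox, button, label, logbox = TextBox(), Button(), Label(), LogBox()
mediator = GuiMediatorWithLog(textbox, button, label, logbox)
textbox.text = "logged message"
button.click()
print(logbox.log)  # ['logged message']
```

The new behavior is added entirely inside the mediator, which is the Open/Closed Principle at work.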
Let's see yet another example WITH and then WITHOUT the
"Mediator Design Pattern in Python"
📌 Let's consider an e-commerce application where different components, like the Cart , User ,
and Inventory , interact with each other:
Scenario:
When a User adds an item to the Cart , the Inventory should be updated to reflect that the
item count has decreased. Similarly, when the User removes an item from the Cart , the
Inventory should update to increase the item count. Without the Mediator, the User would
need to know about both the Cart and the Inventory .
class Cart:
    def __init__(self, inventory):
        self.items = {}
        self.inventory = inventory
    def add_item(self, item_name, quantity):
        self.items[item_name] = self.items.get(item_name, 0) + quantity
        self.inventory.change_count(item_name, -quantity)  # direct call into Inventory
    def remove_item(self, item_name, quantity):
        self.items[item_name] -= quantity
        self.inventory.change_count(item_name, quantity)  # direct call into Inventory

class Inventory:
    def __init__(self):
        self.items_count = {}
    def set_item_count(self, item_name, count):
        self.items_count[item_name] = count
    def change_count(self, item_name, delta):
        self.items_count[item_name] += delta

class User:
    def __init__(self, cart):
        self.cart = cart
    def purchase_item(self, item_name, quantity):
        self.cart.add_item(item_name, quantity)
    def return_item(self, item_name, quantity):
        self.cart.remove_item(item_name, quantity)
# Usage:
inventory = Inventory()
inventory.set_item_count("book", 10)
cart = Cart(inventory)
user = User(cart)
user.purchase_item("book", 2)
print(inventory.items_count) # Outputs: {"book": 8}
user.return_item("book", 1)
print(inventory.items_count) # Outputs: {"book": 9}
📌 Issues: - User and Cart are both tightly coupled to Inventory . Any changes in how
Inventory operates might necessitate changes in User and Cart . - Extensibility is a concern.
Suppose we want to introduce features like promotional offers or logging. Integrating these would
require modifications in multiple places. - There's a lack of a clear separation of concerns. If we
want to change how items are added to the cart or how inventory updates, we'd have to dig into
both Cart and Inventory .
from abc import ABC, abstractmethod

# Mediator interface
class Mediator(ABC):
    @abstractmethod
    def notify(self, sender, event):
        pass

# Concrete Mediator
class ECommerceMediator(Mediator):
    def __init__(self, cart, user, inventory):
        self.cart = cart
        self.user = user
        self.inventory = inventory
        self.cart.mediator = self
        self.user.mediator = self
        self.inventory.mediator = self

    def notify(self, sender, event):
        pass  # routes purchase/return events between User, Cart and Inventory

class Cart:
    def __init__(self):
        self.items = {}
        self.mediator = None
    # Rest remains the same but without direct calls to Inventory

class User:
    def __init__(self):
        self.mediator = None
Introducing new features or changing business rules is centralized in the mediator, leading to
cleaner, more manageable code.
Solution with Mediator: With the introduction of the ECommerceMediator , the individual
components ( User , Cart , Inventory ) don't communicate directly with each other. Instead, they
communicate via the mediator. This reduces the tight coupling between them. If, for instance, the
mechanism for updating the inventory changes, the mediator will be the only component needing
adjustment, thus isolating the change and preventing ripple effects through other parts of the
code.
📌 Extensibility Concern: Original Issue: The direct dependencies between classes made it hard
to introduce new features. For example, integrating a new feature like promotional offers would
require modifications in multiple classes.
Solution with Mediator: The mediator allows for a more centralized approach to handle
communications. If a new feature needs to be added, like handling promotional offers when a
user purchases an item, the logic can be added primarily within the mediator, without heavily
modifying the existing classes.
📌 Separation of Concerns: Original Issue: The initial design did not clearly separate concerns.
The responsibility of updating the inventory was mixed into both the Cart and Inventory .
Solution with Mediator: By introducing the mediator, responsibilities became clearer. The Cart
is now mainly concerned with managing items within itself, the Inventory deals with stock
counts, and the mediator ensures that actions in one component lead to appropriate reactions in
others. The mediator takes on the responsibility of orchestrating the interactions, thus providing a
clear separation of concerns.
📌 Centralization of Logic: Original Issue: The communication logic was spread out. When a
user wanted to purchase an item, the logic to add the item to the cart and update the inventory
was in multiple places.
Solution with Mediator: All communication logic is centralized in the mediator. When a user
wishes to purchase an item, they notify the mediator, which then instructs both the Cart and
Inventory on what actions to take. This not only makes the flow of logic clearer but also
simplifies potential future changes.
📌 Flexibility: Original Issue: If we needed to change how components interacted, we'd likely
need to modify multiple classes.
Solution with Mediator: With the mediator in place, changes to interactions primarily occur
within the mediator class. For instance, if we wanted to introduce a logging mechanism every time
the inventory changed, we'd implement it within the mediator, without needing to touch the
actual Inventory class.
In summary, the mediator pattern provides a way to reduce dependencies between classes,
ensuring that each class adheres to the single responsibility principle, thus making the system
more maintainable and flexible. It offers a clean and centralized way to manage interactions,
making the system easier to understand and extend.
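📌 The flexibility point above — adding logging on every inventory change without touching Inventory — can be sketched as follows (the audit_log attribute and purchase method are illustrative additions, not part of the example code):

```python
class Inventory:
    def __init__(self):
        self.items_count = {}

class ECommerceMediator:
    def __init__(self, inventory):
        self.inventory = inventory
        self.audit_log = []  # hypothetical logging, added only inside the mediator

    def purchase(self, item_name, quantity):
        # The mediator updates the stock AND records the change;
        # Inventory itself is completely unaware of the logging.
        self.inventory.items_count[item_name] -= quantity
        self.audit_log.append(f"-{quantity} {item_name}")

inventory = Inventory()
inventory.items_count["book"] = 10
mediator = ECommerceMediator(inventory)
mediator.purchase("book", 2)
print(inventory.items_count)  # {'book': 8}
print(mediator.audit_log)     # ['-2 book']
```

Because every inventory change flows through the mediator, the log is guaranteed to be complete, and removing or changing the logging later is again a one-place edit.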
Let's see yet another example WITH and then WITHOUT the
"Mediator Design Pattern in Python"
📌 Scenario: Consider a Home Automation System where various devices (components) like
lights, thermostats, and security systems need to communicate with each other. Without a
mediator, every device would have direct references to other devices, leading to a tangled web of
dependencies.
Without the Mediator Pattern:
class Light:
def __init__(self):
self.is_on = False
def turn_on(self):
self.is_on = True
print("Light turned on.")
def turn_off(self):
self.is_on = False
print("Light turned off.")
class Thermostat:
    def __init__(self):
        self.temperature = 20  # default temperature in Celsius

    def increase_temperature(self, amount):
        self.temperature += amount

    def decrease_temperature(self, amount):
        self.temperature -= amount

class SecuritySystem:
    def __init__(self):
        self.is_armed = False

    def arm(self):
        self.is_armed = True
        light.turn_off()  # direct reference to a global light instance
        thermostat.increase_temperature(5)  # direct reference to a global thermostat instance
        print("Security system armed.")

    def disarm(self):
        self.is_armed = False
        light.turn_on()  # direct reference to a global light instance
        thermostat.decrease_temperature(5)  # direct reference to a global thermostat instance
        print("Security system disarmed.")
light = Light()
thermostat = Thermostat()
security_system = SecuritySystem()
Refactoring using the Mediator Pattern:
class HomeMediator:
    def __init__(self):
        self.light = Light(self)
        self.thermostat = Thermostat(self)
        self.security_system = SecuritySystem(self)

    def turn_on_light(self):
        self.light.turn_on()

    def turn_off_light(self):
        self.light.turn_off()

    def increase_temperature(self, amount):
        self.thermostat.increase_temperature(amount)

    def decrease_temperature(self, amount):
        self.thermostat.decrease_temperature(amount)

    def arm_security(self):
        self.security_system.arm()

    def disarm_security(self):
        self.security_system.disarm()

class Light:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_on = False

    def turn_on(self):
        self.is_on = True
        print("Light turned on.")

    def turn_off(self):
        self.is_on = False
        print("Light turned off.")

class Thermostat:
    def __init__(self, mediator):
        self.mediator = mediator
        self.temperature = 20

    def increase_temperature(self, amount):
        self.temperature += amount

    def decrease_temperature(self, amount):
        self.temperature -= amount

class SecuritySystem:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_armed = False

    def arm(self):
        self.is_armed = True
        self.mediator.turn_off_light()
        self.mediator.increase_temperature(5)
        print("Security system armed.")

    def disarm(self):
        self.is_armed = False
        self.mediator.turn_on_light()
        self.mediator.decrease_temperature(5)
        print("Security system disarmed.")
home_mediator = HomeMediator()
📌 Note: By using the Mediator pattern, the classes Light , Thermostat , and SecuritySystem
are decoupled from each other. Each device communicates through the HomeMediator , which
centralizes the interactions.
📌 Centralized Control: With the introduction of the mediator, all interactions among objects are
centralized within HomeMediator . If there's a need to modify how devices communicate or change
the logic of one device based on the state of another, it only needs to be updated within the
mediator, rather than scattered across multiple classes.
📌 Simplifying Maintenance: As the system evolves and grows, perhaps with the addition of
more devices and functionalities, it's significantly easier to manage and maintain this centralized
approach. For example, introducing a new device means adding it to the HomeMediator without
needing to modify existing devices. In contrast, in the non-mediator approach, adding or removing
a component could necessitate changes in several other components due to direct references.
📌 Enhanced Reusability: With the mediator pattern, each class like Light , Thermostat , or
SecuritySystem can function independently without the need for direct references to other
classes. This means that these classes are now more reusable. They can be used in another
system or context without dragging along other unrelated components.
📌 Single Responsibility Principle: Each class now focuses purely on its own functionalities
without concerning itself with the broader system's orchestration logic. This adherence to the
Single Responsibility Principle ensures that each class has one reason to change, making the
system more robust and easier to understand.
📌 Scalability: The mediator approach is scalable. As more components are added to the home
automation system, the design remains consistent. Components still only need to be aware of the
mediator. Without the mediator, the complexity would grow exponentially, as each new
component could potentially require updates in every other component it interacts with.
📌 Easier Testing: With the decoupling achieved using the Mediator pattern, it becomes easier to
write unit tests for each individual component without having to mock or set up many other
components. Each component can be tested in isolation with only the mediator being mocked.
In summary, by employing the Mediator Design Pattern, the home automation system's
architecture becomes cleaner, more maintainable, scalable, and easier to test. It effectively
resolves the issues associated with the tight coupling of components seen in the initial approach.
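📌 The easier-testing claim can be sketched with the standard library's unittest.mock : because the mediator is the SecuritySystem 's only collaborator, a single mock stands in for the light, the thermostat, and all the wiring between them (the class below is a condensed copy of the SecuritySystem from the example above):

```python
from unittest.mock import Mock

# Condensed copy of SecuritySystem from the refactored example
class SecuritySystem:
    def __init__(self, mediator):
        self.mediator = mediator
        self.is_armed = False

    def arm(self):
        self.is_armed = True
        self.mediator.turn_off_light()
        self.mediator.increase_temperature(5)
        print("Security system armed.")

# One mock replaces the entire rest of the system
mediator = Mock()
system = SecuritySystem(mediator)
system.arm()

assert system.is_armed
mediator.turn_off_light.assert_called_once()
mediator.increase_temperature.assert_called_once_with(5)
```

Without the mediator, the same test would need real (or individually mocked) Light and Thermostat objects wired together first.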
📌 The Prototype Design Pattern is a creational design pattern that allows an object to create a
copy of itself. This pattern is particularly useful when the creation of an object is more costly than
copying an existing object.
The prototype pattern is useful when you need to create objects based on an existing object using
the cloning technique. As you may have guessed, the idea is to use a copy of that object's
complete structure to produce the new object. We will see that this is almost natural in Python
because the standard library's copy module supports it directly. By default, a shallow copy
creates a new outer object but still shares references to the objects nested inside it. If you need
a fully independent duplicate, which is what a prototype requires, you make a deep copy.
📌 In Python, we have two ways to copy objects:
Shallow Copy: It creates a new object, but does not create copies of the objects that the
original object references. Instead, it just copies the references.
Deep Copy: It creates a new object and also recursively copies all the objects referenced by
the original object.
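📌 A quick sketch of the difference (the dictionary and its values are illustrative):

```python
import copy

original = {"specs": ["GPS", "ABS"]}
shallow = copy.copy(original)      # new dict, but the inner list is shared
deep = copy.deepcopy(original)     # new dict AND a new, independent inner list

original["specs"].append("Sunroof")

print(shallow["specs"])  # ['GPS', 'ABS', 'Sunroof'] -- mutation is visible
print(deep["specs"])     # ['GPS', 'ABS'] -- fully independent
```

This is why the Prototype implementation below uses copy.deepcopy : a clone must not be affected by later changes to the original.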
import copy

class Prototype:
    def __init__(self):
        self._objects = {}

    def register_object(self, name, obj):
        self._objects[name] = obj

    def unregister_object(self, name):
        del self._objects[name]

    def clone(self, name, **attrs):
        obj = copy.deepcopy(self._objects[name])
        obj.__dict__.update(attrs)
        return obj
class Car:
def __init__(self):
self.name = "Skylark"
self.color = "Red"
self.options = "Ex"
def __str__(self):
return f"{self.name} | {self.color} | {self.options}"
c = Car()
prototype = Prototype()
prototype.register_object("skylark", c)
c1 = prototype.clone("skylark")
print(c1)
c2 = prototype.clone("skylark", color="Blue")
print(c2)
Skylark | Red | Ex
Skylark | Blue | Ex
📌 In the above code, we have a Prototype class that can register, unregister, and clone objects.
The Car class is a simple class with some attributes. We then create an instance of the Car class,
register it with the prototype, and then clone it with optional attribute modifications.
📌 The __dict__.update(attrs) method is used to update the attributes of the cloned object.
The __dict__ attribute of an object is a dictionary containing the object's writable attributes.
📌 The copy.deepcopy method is used to create a deep copy of the registered object. This
ensures that the cloned object is independent of the original object.
1. Object Registry:
The Prototype class maintains a private dictionary _objects that acts as a registry of
objects that can be cloned.
The register_object method allows you to register an object with a unique name,
making it available for cloning later.
The unregister_object method allows you to remove an object from the registry.
2. Cloning Mechanism:
The clone method is the heart of the Prototype class. It takes the name of the
registered object you want to clone and any additional attributes you want to modify in
the cloned object.
It uses copy.deepcopy to create a deep copy of the registered object. This ensures that
the cloned object is entirely independent of the original.
After cloning, it updates the attributes of the cloned object using the
obj.__dict__.update(attrs) method.
1. Abstraction:
The Prototype class abstracts the process of object cloning. Instead of manually using
copy.deepcopy every time you want to clone an object, you can use the clone method,
which also provides additional functionality like attribute updates.
2. Flexibility:
The design allows you to easily register and unregister objects, making it flexible to
manage which objects are available for cloning at any given time.
3. Consistency:
By centralizing the cloning process in the Prototype class, you ensure consistent
behavior. Every cloned object is guaranteed to be a deep copy, ensuring no unintended
side effects from shared references.
4. Efficiency:
The Prototype class simplifies the process of cloning objects. You don't need to
remember the intricacies of deep copying or attribute updating -- the class handles it for
you.
5. Scalability:
As your application grows, you might have multiple objects that need to be cloned with
slight variations. The Prototype class provides a scalable solution to manage and clone
these objects systematically.
Design Principles:
In essence, the Prototype class provides a structured and efficient way to implement the
Prototype design pattern in Python. It abstracts the cloning process, ensuring consistency and
flexibility, making it easier for developers to leverage the benefits of object cloning in their
applications.
47
Let's see one example WITH and then WITHOUT Prototype
Design Pattern in Python.
📌 Without Prototype Design Pattern:
Consider a scenario where we have a class ComplexObject that takes a significant amount of time
to instantiate. This could be because it fetches data from a database, computes some values, or
any other costly operation.
class ComplexObject:
    def __init__(self, data):
        # Simulating a costly operation
        import time
        time.sleep(2)
        self.data = data

    def display(self):
        print(self.data)

# Every instantiation pays the full cost
obj1 = ComplexObject("Data for Object 1")
obj2 = ComplexObject("Data for Object 2")
obj1.display()
obj2.display()
1. Every time we create a new instance of ComplexObject, we have to wait for the costly
operation to complete.
2. If we want to create a new object that's similar to an existing object but with some minor
changes, we still have to go through the costly instantiation process.
📌 With Prototype Design Pattern:

import copy

class ComplexObject:
    def __init__(self, data):
        # Simulating a costly operation
        import time
        time.sleep(2)
        self.data = data

    def clone(self, data=None):
        # deepcopy bypasses __init__, so the costly setup is skipped
        cloned = copy.deepcopy(self)
        if data is not None:
            cloned.data = data
        return cloned

    def display(self):
        print(self.data)

# Creating an object -- the costly operation runs only once
obj1 = ComplexObject("Data for Object 1")
obj1.display()

# Cloning is near-instant and allows modifications
obj2 = obj1.clone("Data for Object 2")
obj2.display()
1. We avoid re-running the costly instantiation when copying an existing object.
2. It's easier to produce variations of an object, since we can clone and then modify.
In the refactored code, the clone method in the ComplexObject class creates a deep copy of the
current object and allows for modifications if needed. This way, we can quickly produce new
objects based on existing ones without going through the costly instantiation process again.
Let's delve into the details of how the refactored code with
the Prototype Design Pattern addresses the issues in the
original code.
📌 Issue 1: Every time we create a new instance of ComplexObject , we have to wait for the costly
operation to complete.
Solution with Prototype Design Pattern: In the refactored code, the clone method is
introduced in the ComplexObject class. This method uses the deepcopy function from the copy
module to create a new instance of the object. The crucial point here is that when we use the
clone method, we bypass the __init__ method of the ComplexObject class. This means that
the costly operation (in our example, the time.sleep(2) ) is not executed when creating a new
object using the clone method.
So, if we have an existing instance of ComplexObject and we want to create another similar
object, we can use the clone method instead of instantiating a new object from scratch. This way,
we avoid the costly operation and save time.
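The saving can be demonstrated with a small self-contained sketch (using a shorter sleep than the book's 2 seconds, and a hypothetical Costly class):

```python
import copy
import time

class Costly:
    def __init__(self):
        time.sleep(0.5)   # stand-in for the expensive setup
        self.data = "ready"

start = time.perf_counter()
original = Costly()                 # pays the setup cost
build_time = time.perf_counter() - start

start = time.perf_counter()
clone = copy.deepcopy(original)     # bypasses __init__ entirely
clone_time = time.perf_counter() - start

print(f"build: {build_time:.2f}s, clone: {clone_time:.4f}s")
```

The clone arrives in a fraction of a millisecond because deepcopy duplicates the existing instance's state without ever calling __init__.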
📌 Issue 2: If we want to create a new object that's similar to an existing object but with some
minor changes, we still have to go through the costly instantiation process.
Solution with Prototype Design Pattern: The clone method not only creates a new instance of
the object but also allows for modifications to the cloned object. In our refactored code, the
clone method accepts an optional data parameter. If provided, this new data replaces the data
attribute of the cloned object.
This flexibility means that we can quickly produce variations of an existing object. For instance, if
we have an object with the data "Data for Object 1" and we want another object with the data
"Data for Object 2", we don't need to create a new object from scratch. Instead, we can clone the
existing object and modify its data attribute, all while bypassing the costly instantiation process.
📌 Summary: The Prototype Design Pattern, as implemented in the refactored code, provides an
efficient way to create new objects based on existing ones without repeatedly undergoing costly
initialization processes. This not only enhances performance but also offers a more flexible
approach to object creation, especially when objects have minor variations.
In this scenario, the Prototype pattern can be used to clone the character template and then apply
the modifications.
import copy

class GameCharacter:
    def __init__(self, health=100, strength=10, abilities=[]):
        self.health = health
        self.strength = strength
        self.abilities = abilities.copy()

    def __str__(self):
        return f"Health: {self.health}, Strength: {self.strength}, Abilities: {', '.join(self.abilities)}"

# Clone the warrior template, then apply the archer's modifications
warrior = GameCharacter(health=150, strength=20, abilities=["slash", "block"])
archer = copy.deepcopy(warrior)
archer.strength = 15
archer.abilities.append("shoot")
print(archer)
📌 In this example, we first create a GameCharacter class that represents a character in the
game. We then create a warrior template with specific attributes. Using the Prototype pattern, we
can clone this template to create an archer character with some modifications.
📌 The deep copy ensures that the abilities list of the cloned object is independent of the
original object. This means modifying the abilities of the archer won't affect the warrior template.
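A minimal sketch (not the book's code) of why the deep copy matters for mutable attributes like lists:

```python
import copy

warrior = {"abilities": ["slash", "block"]}
shallow = copy.copy(warrior)      # inner list is shared with warrior
deep = copy.deepcopy(warrior)     # inner list is duplicated

deep["abilities"].append("shoot")
shallow["abilities"].append("bash")

print(warrior["abilities"])  # the shallow copy's change leaks back
print(deep["abilities"])     # the deep copy stays independent
```

With a shallow copy, both objects point at the same inner list, so modifying one modifies the other; deepcopy severs that link.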
Under-the-hood:
📌 Using the Prototype pattern can be more efficient than creating new instances from scratch,
especially when the instantiation process is resource-intensive or involves database operations.
In conclusion, the Prototype Design Pattern provides a mechanism to clone objects, ensuring that
the new object is independent of the original. This pattern is particularly useful in scenarios where
object creation is costly, and it's more efficient to copy an existing instance.
Let's delve into a scenario where the instantiation process is resource-intensive, and see how the
Prototype pattern can offer a more efficient solution.
However, in many cases, you might need to create a new user profile based on an existing one,
with only a few modifications. Instead of fetching all the data again from the database, you can
clone the existing profile and make the necessary changes.
Code:
Let's see how this can be implemented:
import copy
import time

class Database:
    def fetch_user_profile(self, user_id):
        # Simulate a time-consuming database operation
        time.sleep(2)
        return {
            "user_id": user_id,
            "name": "John Doe",
            "preferences": ["reading", "traveling"],
            "transaction_history": ["purchase1", "purchase2"]
        }

class UserProfile:
    def __init__(self, user_id):
        db = Database()
        user_data = db.fetch_user_profile(user_id)
        self.user_id = user_data["user_id"]
        self.name = user_data["name"]
        self.preferences = user_data["preferences"]
        self.transaction_history = user_data["transaction_history"]

    def __str__(self):
        return (f"UserID: {self.user_id}, Name: {self.name}, "
                f"Preferences: {self.preferences}, "
                f"Transaction History: {self.transaction_history}")

class Prototype:
    def __init__(self):
        self._objects = {}

    def register_object(self, name, obj):
        self._objects[name] = obj

    def clone(self, name, **attrs):
        obj = copy.deepcopy(self._objects.get(name))
        obj.__dict__.update(attrs)
        return obj
📌 In the above code, the Database class simulates a time-consuming operation to fetch a user
profile. The UserProfile class fetches the user profile from the database upon instantiation.
📌 The first time we create a UserProfile object, it fetches the data from the database, which
takes time. However, when we need a new user profile based on the existing one, we use the
Prototype class to clone the original profile and make the necessary modifications. This avoids
the need to hit the database again, saving time and resources.
📌 The deep copy ensures that the cloned user profile is independent of the original. This means
that changes to the preferences or transaction_history of the new user won't affect the
original user.
In this scenario, the Prototype pattern provides a more efficient way to create new user profiles
based on existing ones without repeatedly incurring the cost of database operations.
The Problem:
When you have a system that deals with user profiles, there are often scenarios where you need
to create a new profile that is very similar to an existing one. For instance, you might want to
create a profile for a new user that shares many attributes with an existing user, but with a few
changes.
If you were to approach this without the Prototype pattern, you might end up fetching the entire
profile from the database, making the necessary modifications, and then saving it back. This
approach has a few drawbacks:
1. Database Overhead: Fetching data from a database, especially if it's a remote database, can
be time-consuming and resource-intensive. If you're doing this repeatedly, it can slow down
your application and put unnecessary load on the database.
2. Data Consistency: Every time you fetch data from the database, there's a chance (however
small) that the data might have changed since the last fetch. This can lead to inconsistencies.
1. Initial Fetch: The first time we create a UserProfile object ( original_user ), it fetches the
data from the database. This is done using the Database class's fetch_user_profile
method.
class UserProfile:
    def __init__(self, user_id):
        db = Database()
        user_data = db.fetch_user_profile(user_id)
        # ... attributes assigned from user_data, as shown earlier

original_user = UserProfile(1)
2. Registering the Prototype: Once we have the original_user profile, we register it with the
Prototype class. This means we're telling the Prototype class, "Hey, this is a profile I might
want to clone in the future."
# Register the original user profile with the prototype
prototype = Prototype()
prototype.register_object("user1", original_user)
3. Cloning the Prototype: When we need a new profile based on the original_user , instead
of going back to the database, we ask the Prototype class to give us a clone of the
original_user . This is done using the clone method of the Prototype class. This method
creates a deep copy of the original_user , ensuring it's a completely independent object.
4. Modifying the Clone: After cloning, we can make any necessary modifications to the new
profile. In the code, we change the user_id and name attributes of the cloned profile.
Benefits:
1. Efficiency: Since we're not hitting the database for every new profile, we save on the time
and resources that would be used in database operations.
2. Consistency: Since we're working with a clone of the original profile, we ensure that the base
data is consistent across all cloned profiles.
3. Flexibility: The Prototype pattern allows us to easily create variations of the original profile.
We can make as many clones as we want and modify each one independently.
In essence, by using the Prototype pattern in this scenario, we're optimizing our system by
reducing the number of database calls and ensuring consistent and flexible user profile creation.
Scenario:
Imagine a scenario where we have a factory that produces DataFrames for different departments
in a company. Each department's DataFrame has a standard structure, but the data varies. Instead
of creating a new DataFrame from scratch for each department, we can use a prototype of a
standard DataFrame and clone it for each department, filling in the specific data.
Code Example:
import pandas as pd
import copy

class DataFrameFactory:
    def __init__(self, prototype_df):
        self.prototype_df = prototype_df

    def create_dataframe(self, data):
        # Clone the prototype, then append the department's data
        df = copy.deepcopy(self.prototype_df)
        return pd.concat([df, pd.DataFrame(data)], ignore_index=True)

# Prototype DataFrame defining the standard structure
prototype_df = pd.DataFrame(columns=['Name', 'Role', 'Salary'])
factory = DataFrameFactory(prototype_df)

hr_data = {  # sample HR data (illustrative values)
    'Name': ['Alice', 'Bob'],
    'Role': ['HR Manager', 'Recruiter'],
    'Salary': [70000, 50000]
}
finance_data = {
    'Name': ['Charlie', 'David'],
    'Role': ['Finance Manager', 'Accountant'],
    'Salary': [75000, 52000]
}

hr_df = factory.create_dataframe(hr_data)
finance_df = factory.create_dataframe(finance_data)
print("HR DataFrame:")
print(hr_df)
print("\nFinance DataFrame:")
print(finance_df)
Explanation:
📌 We start by defining a DataFrameFactory class. This class will be responsible for producing
DataFrames based on a prototype.
📌 The DataFrameFactory class has a method create_dataframe that takes data as input. It
clones the prototype DataFrame and fills it with the provided data.
📌 Using the factory, we create DataFrames for the HR and Finance departments by providing
their specific data.
📌 The Prototype design pattern is evident here in the way we use a prototype DataFrame and
clone it for different departments, rather than creating a new DataFrame from scratch each time.
This approach ensures that all department DataFrames have a consistent structure, as defined by
the prototype, while allowing for flexibility in the data they contain. It also provides efficiency
benefits, especially if the prototype DataFrame had more complex structures or default data that
we wanted to preserve across clones.
Scenario:
Suppose you're working with financial data in pandas, and you want to create a custom extension
array to handle monetary values in multiple currencies. This custom array should be able to store
values like "10 USD", "15 EUR", etc., and provide functionality to convert between currencies.
To make this efficient, especially when creating new monetary arrays based on existing ones,
you'd want to use the Prototype design pattern to clone and modify arrays without re-initializing
the entire data structure.
Code Implementation:
import pandas as pd
import copy
from pandas.api.extensions import ExtensionArray

class MonetaryArray(ExtensionArray):
    def __init__(self, data):
        self.data = list(data)
    def __len__(self):
        return len(self.data)
    def __getitem__(self, index):
        return self.data[index]
    def __setitem__(self, index, value):
        self.data[index] = value
    def __repr__(self):
        return f"MonetaryArray({self.data})"
    def clone(self, modifications=None):
        # Prototype: deep-copy, then apply {index: value} updates
        cloned_array = copy.deepcopy(self)
        if modifications:
            for index, value in modifications.items():
                cloned_array[index] = value
        return cloned_array

# Sample MonetaryArray
arr = MonetaryArray(["10 USD", "15 EUR", "20 GBP"])
print("Original Array:", arr)

modified_arr = arr.clone({1: "25 EUR"})
print("Modified Array:", modified_arr)
Explanation:
📌 We start by defining a MonetaryArray class that extends the ExtensionArray from pandas.
This class will represent our custom extension array for monetary values.
📌 The MonetaryArray class has basic methods to handle the data, such as __len__ ,
__getitem__ , and __setitem__ .
📌 The clone method in the MonetaryArray class is where the Prototype design pattern is
implemented. This method creates a deep copy of the current array and then applies any
modifications provided as arguments. This allows us to create new monetary arrays based on
existing ones without re-initializing the entire data structure.
📌 In the sample usage, we create an original MonetaryArray with some values. We then clone
this array and modify some of its values using the clone method.
By using the Prototype design pattern in this scenario, we can efficiently create and modify
custom extension arrays in pandas, ensuring flexibility and performance.
📌 __len__ : This method returns the length of the array. It's used whenever the built-in len()
function is called on an instance of the MonetaryArray .
📌 __getitem__ : This method allows for indexing into the array. For instance, if you want to
retrieve the value at the second position of the array ( arr[1] ), this method is called.
📌 __setitem__ : This method allows for setting the value at a specific index in the array. For
example, if you want to update the value at the second position ( arr[1] = "25 EUR" ), this
method is invoked.
By defining these methods, the MonetaryArray can be used seamlessly within the pandas
ecosystem, and it behaves as expected when used in typical Python scenarios.
2. How the Prototype design pattern is being used here, and
how does it help?
Let's break down the usage of the Prototype design pattern in the given example:
We use copy.deepcopy(self) to create a deep copy of the current instance. This ensures
that the cloned array is entirely independent of the original.
In Python, self is a convention for referring to an instance of a class from within the class
itself. When you create an instance of a class, self in any method of that class refers to the
current instance. It's automatically passed as the first argument to instance methods.
When you call the clone method on an instance of MonetaryArray , the self inside that
method refers to the instance on which the method was called.
Now, the line cloned_array = copy.deepcopy(self) is using Python's copy module to create a
deep copy of self (which, in this context, is arr ).
We then apply any modifications provided to the cloned array. This is done using the
__setitem__ method we defined earlier.
We get a new MonetaryArray instance ( modified_arr ) that is based on the original ( arr ) but
with the specified modifications.
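The same mechanics in miniature (a hypothetical Box class, not the book's example) show deepcopy(self) inside a method:

```python
import copy

class Box:
    def __init__(self, items):
        self.items = items

    def clone(self):
        # self is the instance the method was called on
        return copy.deepcopy(self)

a = Box(["pen"])
b = a.clone()          # inside clone, self refers to a
b.items.append("ink")
print(a.items)         # unchanged -- the clone is independent
print(b.items)
```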
3. Flexibility: The Prototype pattern allows us to easily create variations of the original object.
We can make as many clones as we want and modify each one independently, providing a lot
of flexibility in how we use and manage these objects.
In the clone method of the MonetaryArray class, we have the following lines:

for index, value in modifications.items():
    cloned_array[index] = value

Here, we're iterating over the modifications dictionary, which contains indices as keys and new
values as values. For each key-value pair, we're updating the cloned_array with the new value at
the specified index.
When we use the line cloned_array[index] = value , it's essentially a shorthand for calling the
__setitem__ method on the cloned_array object. In other words, the above line is equivalent
to:
cloned_array.__setitem__(index, value)
So, even though we don't explicitly call the __setitem__ method in the clone method, it's
implicitly invoked when we use the assignment operation on the cloned_array with an index.
In Python, special methods like __setitem__ allow us to define custom behaviors for built-in
operations. In this case, the __setitem__ method we defined for the MonetaryArray class allows
us to customize how values are set at specific indices in our custom array.
The concept of using the __setitem__ method applies
broadly in Python, but with some nuances. Let's delve
deeper.
📌 Simply put, In Python dictionaries, the __setitem__ method is used to set a key-value pair.
When you do something like my_dict[key] = value , under the hood, Python is calling
my_dict.__setitem__(key, value) .
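This can be verified with a small subclass (a sketch, not from the book) that intercepts the call:

```python
class LoggingDict(dict):
    def __setitem__(self, key, value):
        print(f"__setitem__({key!r}, {value!r}) called")
        super().__setitem__(key, value)

d = LoggingDict()
d["a"] = 1    # the bracket assignment routes through __setitem__
print(d)
```

Running it prints the log line before the pair is stored, confirming that bracket assignment is just sugar for the dunder call.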
In Python, many built-in operations or syntactic sugar are backed by special methods (often
referred to as "magic" or "dunder" methods because they have double underscores at the
beginning and end). When you use these built-in operations on objects, the corresponding special
methods are implicitly called.
For the assignment operation using indexing (i.e., obj[index] = value ), the special method that
gets invoked is __setitem__ .
1. Lists: When you do something like my_list[2] = 'value', you're implicitly calling the
__setitem__ method of the list object.
2. Dictionaries: Likewise, my_dict[key] = value implicitly calls
my_dict.__setitem__(key, value), as described above.
3. Custom Objects: If you define a custom class and implement the __setitem__ method,
then instances of that class will also use this method when the assignment operation with
indexing is used.
class Example:
    def __setitem__(self, index, value):
        print(f"Setting value at index {index} to {value}")

obj = Example()
obj[1] = "Hello"
When you run the above code, it will print: Setting value at index 1 to Hello .
This demonstrates that the __setitem__ method was called when we used the assignment
operation with indexing on our custom object.
What exactly is pandas ExtensionArray that I used in the
above example
Within pandas , the ExtensionArray is a class that provides a way to store custom data types not
natively supported by pandas . It's a part of the pandas extension system.
1. Custom Data Types: Before the introduction of the extension system, if you wanted to use a
custom data type with pandas , you had to use Python objects, which are slow and memory-
inefficient. With the ExtensionArray , you can create custom data types that are efficient and
can be used just like native pandas data types.
2. Custom Operations: It allows implementing custom operations for data types not natively
supported by pandas.
3. Examples: pandas itself uses the ExtensionArray interface for some of its own data types.
For instance, the Categorical data type in pandas is backed by an ExtensionArray.
4. Integration with DataFrames and Series: Once you've defined an ExtensionArray for
your custom data type, you can use it within pandas DataFrames and Series just like any
other data type. This means you can perform operations, indexing, slicing, etc., on your
custom data type seamlessly.
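The Categorical claim is easy to check directly (a quick sketch, assuming pandas is installed):

```python
import pandas as pd
from pandas.api.extensions import ExtensionArray

s = pd.Series(["a", "b", "a"], dtype="category")
print(type(s.array))                        # pandas Categorical
print(isinstance(s.array, ExtensionArray))  # Categorical is an ExtensionArray
```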
In summary, the ExtensionArray in pandas provides a way to extend the capabilities of pandas
to support custom data types efficiently. If you're interested in creating a custom data type for use
in pandas , the ExtensionArray is the place to start.
Obviously, you can use copy.deepcopy without creating a Prototype. But having the Prototype
allows you to work with all the copies from the same code, so you can add exceptions, log
messages, or whatever you want when copying without altering all the classes.
Applicability
Use the Prototype pattern when you have a lot of objects to copy.
Use the Prototype pattern when you want to be able to copy objects at runtime while being
able to modify their attributes.
Use the Prototype pattern when you don't want the copy method to be dependent on the
implementation of the classes.
Advantages
Easy implementation: unlike most other creational patterns, a Prototype is easy to implement
and doesn't require a lot of classes.
More code flexibility: because you can alter the values of the objects you want to copy, so you
don't have to create tons of subclasses.
Disadvantage
The main disadvantage is that the extra registry and cloning machinery adds complexity that isn't
worth it if you only copy a few objects. So, for small projects, it's better not to use this pattern.
- Abstract Factory Design Pattern in Python

The Abstract Factory pattern is a creational design pattern that provides an interface for creating
families of related or dependent objects without specifying their concrete classes. It's particularly
useful when a system needs to maintain flexibility and scalability. Let's delve into its main
principles:
📌 Interface for Creating Families of Objects: The primary role of an Abstract Factory is to
declare an interface for creating products. Each "product" is a member of a "family," and these
families are meant to be used together.
📌 Concrete Factories Implement the Interface: Concrete factories implement this interface to
produce objects that conform to a family. These factories take care of the instantiation of the
family of objects.
📌 Products Share a Common Interface: Within each family, the products share a common
interface. This ensures that the family of objects created by the factory are interchangeable and
can work together seamlessly.
📌 Client Code is Isolated from Concrete Products: The client code interacts solely with the
abstract factory and the abstract products, thereby isolating itself from the concrete classes. This
adheres to the Dependency Inversion Principle, which states that high-level modules should not
depend on low-level modules; both should depend on abstractions.
📌 Ease of Extensibility: To add a new family of products, you typically need to create a new
concrete factory and implement the abstract factory interface. This makes the system highly
extensible.
📌 Consistency Among Products: Since a factory is responsible for creating an entire family of
related products, it's easier to ensure that these products will function correctly together.
📌 Separation of Concerns: The pattern separates the code for complex object creation from the
code that actually uses these objects. This makes the codebase easier to manage and test.
📌 Single Responsibility Principle: Each concrete factory is responsible for creating objects of a
single family but can create as many objects from that family as needed. This aligns with the Single
Responsibility Principle, which states that a class should have only one reason to change.
📌 Open/Closed Principle: The system is open for extension but closed for modification. You can
introduce new types of products or families by adding new concrete factory classes, without
altering existing code. This adheres to the Open/Closed Principle, which suggests that software
entities should be open for extension but closed for modification.
In summary, the Abstract Factory pattern is a robust architectural pattern that helps manage
object creation complexity, promotes consistency among objects, and facilitates a high level of
flexibility and extensibility. It does so by decoupling the client code that needs some objects from
the classes that actually produce those objects.
Consider a GUI library that provides buttons and checkboxes. If we want to support multiple
themes (e.g., Windows and MacOS), without the Abstract Factory pattern, we might do something
like this:
class WindowsButton:
    def render(self):
        return "Rendering a Windows style button"

class MacOSButton:
    def render(self):
        return "Rendering a MacOS style button"

class WindowsCheckbox:
    def render(self):
        return "Rendering a Windows style checkbox"

class MacOSCheckbox:
    def render(self):
        return "Rendering a MacOS style checkbox"

def create_ui(theme):
    if theme == "Windows":
        button = WindowsButton()
        checkbox = WindowsCheckbox()
    elif theme == "MacOS":
        button = MacOSButton()
        checkbox = MacOSCheckbox()
    else:
        raise ValueError("Unknown theme")
    print(button.render())
    print(checkbox.render())

create_ui("Windows")
create_ui("MacOS")
📌 If we want to add support for another theme, we have to modify the create_ui function,
which violates the Open/Closed Principle.
📌 The system is not scalable. For every new widget or theme, we have to modify existing code.
To solve the above issues, we'll introduce the Abstract Factory pattern:
from abc import ABC, abstractmethod

class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass

    @abstractmethod
    def create_checkbox(self):
        pass

class WindowsFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()
    def create_checkbox(self):
        return WindowsCheckbox()

class MacOSFactory(GUIFactory):
    def create_button(self):
        return MacOSButton()
    def create_checkbox(self):
        return MacOSCheckbox()

def create_ui(factory):
    button = factory.create_button()
    checkbox = factory.create_checkbox()
    print(button.render())
    print(checkbox.render())

# Client code
windows_factory = WindowsFactory()
create_ui(windows_factory)
macos_factory = MacOSFactory()
create_ui(macos_factory)
📌 New themes can be added without modifying the existing code, adhering to the Open/Closed
Principle.
📌 The system is now more scalable. For every new widget or theme, we just need to add a new
factory without modifying the existing factories or products.
In conclusion, the Abstract Factory pattern provides a way to encapsulate a group of individual
factories that have a common theme without specifying their concrete classes. This promotes
code organization, scalability, and adherence to SOLID principles.
Let's delve deeper into how the refactored code with the
Abstract Factory pattern addresses the issues of the original
code.
📌 Issue 1: The create_ui function in the original code was directly responsible for creating
objects of buttons and checkboxes, violating the Single Responsibility Principle.
Solution with Abstract Factory: In the refactored code, the responsibility of creating objects is
shifted from the create_ui function to the factories. The create_ui function now only needs to
know about the abstract factory ( GUIFactory ) and doesn't concern itself with the concrete
implementations. This means the function has a single responsibility: to use the factory to create
and render UI components.
📌 Issue 2: In the original code, if we wanted to add support for another theme, we had to modify
the create_ui function, violating the Open/Closed Principle.
Solution with Abstract Factory: With the Abstract Factory pattern, adding support for a new
theme (e.g., "Linux") would involve creating a new factory (e.g., LinuxFactory ) that implements
the GUIFactory interface. The create_ui function remains unchanged. This means the existing
code is closed for modification but open for extension, adhering to the Open/Closed Principle.
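As a sketch of that extension (LinuxFactory is hypothetical; a minimal GUIFactory and create_ui are included so the snippet is self-contained):

```python
from abc import ABC, abstractmethod

class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass
    @abstractmethod
    def create_checkbox(self):
        pass

def create_ui(factory):
    print(factory.create_button().render())
    print(factory.create_checkbox().render())

# New theme: purely additive -- create_ui is untouched
class LinuxButton:
    def render(self):
        return "Rendering a Linux style button"

class LinuxCheckbox:
    def render(self):
        return "Rendering a Linux style checkbox"

class LinuxFactory(GUIFactory):
    def create_button(self):
        return LinuxButton()
    def create_checkbox(self):
        return LinuxCheckbox()

create_ui(LinuxFactory())
```

Nothing in the existing factories or in create_ui changes; the new theme plugs in through the shared abstract interface.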
📌 Issue 3: The original system was not scalable. For every new widget or theme, we had to
modify existing code.
Solution with Abstract Factory: The Abstract Factory pattern promotes scalability in multiple
ways:
1. Adding a new theme: As mentioned above, to support a new theme, we simply introduce a
new factory without touching existing factories or the main UI creation logic.
Create concrete implementations for the new component for each theme (e.g.,
WindowsSlider , MacOSSlider ).
Even in this case, the main UI creation logic ( create_ui function) remains untouched. It
would only change if we want to utilize the new component in the UI.
In essence, the Abstract Factory pattern decouples the creation of objects from the main logic,
ensuring that each part of the code adheres to the Single Responsibility Principle. This decoupling
also ensures that the system remains scalable and extensible, allowing for the easy addition of
new themes or components without major code changes.
from abc import ABC, abstractmethod

class db_factory(ABC):
    @abstractmethod
    def create_no_sql_db(self):
        pass
    @abstractmethod
    def create_sql_db(self):
        pass

class web_factory(db_factory):
    def create_no_sql_db(self):
        return mongodb()
    def create_sql_db(self):
        return SQL()

class intranet_factory(db_factory):
    def create_no_sql_db(self):
        return orientdb()
    def create_sql_db(self):
        return Oracle()

class sql_database(ABC):
    @abstractmethod
    def save(self):
        pass
    @abstractmethod
    def select(self):
        pass

class SQL(sql_database):
    def save(self):
        print("SQL save called.")
    def select(self):
        print("SQL select called.")

class Oracle(sql_database):
    def save(self):
        print("Oracle save called.")
    def select(self):
        print("Oracle select called.")

class no_sql_database(ABC):
    @abstractmethod
    def insert(self):
        pass
    @abstractmethod
    def get_object(self):
        pass

class mongodb(no_sql_database):
    def insert(self):
        print("mongodb insert called.")
    def get_object(self):
        print("mongodb get_object called.")

class orientdb(no_sql_database):
    def insert(self):
        print("orientdb insert called.")
    def get_object(self):
        print("orientdb get_object called.")

class client:
    def get_database(self):
        abs_factory = web_factory()
        sql_db = abs_factory.create_sql_db()
        sql_db.save()
        sql_db.select()
        # -------------------------------------------
        no_sql_db = abs_factory.create_no_sql_db()
        no_sql_db.insert()
        no_sql_db.get_object()
        # -------------------------------------------
        abs_factory = intranet_factory()
        ora_db = abs_factory.create_sql_db()
        ora_db.save()
        ora_db.select()
        # -------------------------------------------
        ora_no_sql_db = abs_factory.create_no_sql_db()
        ora_no_sql_db.insert()
        ora_no_sql_db.get_object()

client = client()
client.get_database()
Above is an implementation of the Abstract Factory design pattern. This pattern allows you to
produce families of related or dependent objects without specifying their concrete classes. Let's
break down the code.
📌 The db_factory class is an abstract class that serves as the Abstract Factory. It declares two
abstract methods: create_no_sql_db and create_sql_db . These methods are intended to
create objects that conform to the no_sql_database and sql_database interfaces, respectively.
📌 The web_factory and intranet_factory classes inherit from db_factory . These are
Concrete Factories. They implement the abstract methods and return instances of concrete
classes ( mongodb , SQL , Oracle , orientdb ) that implement the no_sql_database or
sql_database interfaces.
📌 The sql_database and no_sql_database classes are abstract classes that define the
interface for SQL and NoSQL databases. They declare methods like save , select , insert , and
get_object that must be implemented by any concrete classes.
📌 The SQL , Oracle , mongodb , and orientdb classes are Concrete Products. They implement
the sql_database and no_sql_database interfaces and provide the actual implementation for
the methods declared in those interfaces.
📌 The client class demonstrates how to use these factories and products. It creates instances
of web_factory and intranet_factory , uses them to create database objects, and then calls
methods on those objects.
The Abstract Factory pattern is particularly useful when the system needs to be independent of
how its objects are created, composed, and represented, and the system is configured with
multiple families of objects.
📌 Client Code Isolated from Concrete Products: The client class interacts only with the
abstract factory ( db_factory ) and the product interfaces ( sql_database and no_sql_database ).
It doesn't instantiate concrete classes directly, adhering to the Dependency Inversion Principle.
📌 Ease of Extensibility: Adding a new type of database or a new application context (like
mobile_factory for mobile apps) would involve creating new concrete factories and products
that implement the existing interfaces. This makes the system highly extensible without modifying
existing code.
📌 Consistency Among Products: The web_factory and intranet_factory ensure that the
SQL and NoSQL databases they create are consistent and can work seamlessly within their
respective application contexts (web or intranet).
📌 Separation of Concerns: The object creation logic is separated from the business logic. The
client class doesn't need to know how to instantiate database objects; it delegates this
responsibility to the abstract factory, making the code easier to manage and test.
📌 Open/Closed Principle: The design is open for extension but closed for modification. New
types of databases or application contexts can be added without altering the existing factories or
product interfaces.
By adhering to these principles, the code maintains a high level of modularity, flexibility, and
extensibility, which are key benefits of the Abstract Factory pattern.
📌 Purpose: - Factory Pattern: It deals with the problem of creating objects without specifying
the exact class of object that will be created. It defines an interface for creating an instance of a
class, with its subclasses deciding which class to instantiate. - Abstract Factory Pattern: It
addresses the problem of creating families of related or dependent objects without specifying
their concrete classes. It provides an interface for creating families of related or dependent
objects.
📌 Number of Abstract Classes: - Factory Pattern: Typically involves a single creator class
(Factory) and a single product class or interface. - Abstract Factory Pattern: Involves multiple
Factory classes and multiple Product classes or interfaces. The pattern defines an interface for
creating several related or dependent objects.
📌 Level of Abstraction: - Factory Pattern: It's about creating objects. The main focus is on
using a method to produce instances of one class, without specifying the exact class. - Abstract
Factory Pattern: It's about creating families of related objects. The main focus is on providing a
way to produce families of related objects without having to specify concrete classes.
📌 Implementation: - Factory Pattern: Often involves a method (static or instance) that, based
on input or configuration, creates and returns instances of one of several possible classes. -
Abstract Factory Pattern: Involves multiple Factory methods, each responsible for creating a
different kind of object. The client interacts with the abstract factory to get the objects, ensuring
that it gets a family of related objects.
📌 Extensibility: - Factory Pattern: To add a new type of product, you might need to modify the
factory method logic or extend the factory class. - Abstract Factory Pattern: To introduce a new
family of products, you can add a new concrete factory without modifying existing code, adhering
to the Open/Closed Principle.
📌 Use Cases: - Factory Pattern: Best suited when there's a need to manage and maintain
objects of one particular type, and the exact type might be decided at runtime. - Abstract Factory
Pattern: Useful when the system needs to be independent of how its objects are created,
composed, and represented, and the system is configured with multiple families of objects.
In essence, while both patterns deal with object creation, the Factory pattern focuses on a single
product, whereas the Abstract Factory pattern emphasizes a family of products. The Abstract
Factory pattern can be seen as a higher-level abstraction of the Factory pattern.
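The contrast above can be boiled down to a few lines of code. The class names below (ShapeFactory, UIFactory, and the rest) are invented purely for this sketch, not taken from the earlier examples:

```python
from abc import ABC, abstractmethod

# Factory pattern: one creation method decides which single product to return.
class Circle:
    def draw(self):
        return "circle"

class Square:
    def draw(self):
        return "square"

class ShapeFactory:
    @staticmethod
    def create(kind):
        # The client names a kind, not a concrete class.
        return {"circle": Circle, "square": Square}[kind]()

# Abstract Factory pattern: one factory object creates a whole *family* of products.
class UIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass
    @abstractmethod
    def create_menu(self):
        pass

class DarkUIFactory(UIFactory):
    def create_button(self):
        return "dark button"
    def create_menu(self):
        return "dark menu"

class LightUIFactory(UIFactory):
    def create_button(self):
        return "light button"
    def create_menu(self):
        return "light menu"

print(ShapeFactory.create("circle").draw())                  # circle
factory = DarkUIFactory()
print(factory.create_button(), "/", factory.create_menu())   # dark button / dark menu
```

Note how the Factory returns one product at a time, while the Abstract Factory guarantees that the button and menu always come from the same, consistent family.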
Example-2 Real-life Use-Case Code
Let's consider a real-life scenario where you have different types of payment gateways like Stripe
and PayPal, and each gateway has different types of payments like one-time and subscription.
from abc import ABC, abstractmethod
class PaymentFactory(ABC):
@abstractmethod
def create_one_time_payment(self):
pass
@abstractmethod
def create_subscription_payment(self):
pass
class StripeFactory(PaymentFactory):
def create_one_time_payment(self):
return StripeOneTime()
def create_subscription_payment(self):
return StripeSubscription()
class PayPalFactory(PaymentFactory):
def create_one_time_payment(self):
return PayPalOneTime()
def create_subscription_payment(self):
return PayPalSubscription()
class Payment(ABC):
@abstractmethod
def process(self):
pass
class StripeOneTime(Payment):
def process(self):
print("Processing one-time payment through Stripe.")
class StripeSubscription(Payment):
def process(self):
print("Processing subscription through Stripe.")
class PayPalOneTime(Payment):
def process(self):
print("Processing one-time payment through PayPal.")
class PayPalSubscription(Payment):
def process(self):
print("Processing subscription through PayPal.")
class Client:
def make_payment(self, factory_type):
factory = factory_type()
one_time = factory.create_one_time_payment()
subscription = factory.create_subscription_payment()
one_time.process()
subscription.process()
client = Client()
client.make_payment(StripeFactory)
client.make_payment(PayPalFactory)
📌 StripeFactory and PayPalFactory are Concrete Factories. They implement the abstract
methods and return instances of concrete classes ( StripeOneTime , StripeSubscription ,
PayPalOneTime , PayPalSubscription ) that implement the Payment interface.
📌 The Payment interface declares a process method, which is implemented by all concrete
payment types.
📌 The Client class demonstrates how to use these factories. It takes a factory type as an
argument, creates a factory object, and then uses it to create payment objects.
The Abstract Factory pattern allows you to switch easily between different families of related
objects (Stripe and PayPal in this case) by changing just the factory type. This makes the system
more modular, easier to extend, and easier to maintain.
Example-3 Real-life Use-Case Code
Using the Abstract Factory pattern, you can ensure that your application uses the correct GUI elements for the OS it's running on without hardcoding specific classes.
# Abstract Factory
from abc import ABC, abstractmethod

class GUIFactory(ABC):
    @abstractmethod
    def create_button(self):
        pass
    @abstractmethod
    def create_checkbox(self):
        pass
    @abstractmethod
    def create_window(self):
        pass

# Abstract Products
class Button(ABC):
    @abstractmethod
    def paint(self): pass

class Checkbox(ABC):
    @abstractmethod
    def paint(self): pass

class Window(ABC):
    @abstractmethod
    def paint(self): pass

# Concrete Factory 1
class WindowsFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()
    def create_checkbox(self):
        return WindowsCheckbox()
    def create_window(self):
        return WindowsWindow()

# Concrete Factory 2
class MacOSFactory(GUIFactory):
    def create_button(self):
        return MacOSButton()
    def create_checkbox(self):
        return MacOSCheckbox()
    def create_window(self):
        return MacOSWindow()
# Concrete Product A1
class WindowsButton(Button):
def paint(self):
print("Rendering a button in Windows style.")
# Concrete Product A2
class MacOSButton(Button):
def paint(self):
print("Rendering a button in MacOS style.")
# Concrete Product B1
class WindowsCheckbox(Checkbox):
def paint(self):
print("Rendering a checkbox in Windows style.")
# Concrete Product B2
class MacOSCheckbox(Checkbox):
def paint(self):
print("Rendering a checkbox in MacOS style.")
# Concrete Product C1
class WindowsWindow(Window):
def paint(self):
print("Rendering a window in Windows style.")
# Concrete Product C2
class MacOSWindow(Window):
def paint(self):
print("Rendering a window in MacOS style.")
# Client code
class Application:
def __init__(self, factory: GUIFactory):
self.button = factory.create_button()
self.checkbox = factory.create_checkbox()
self.window = factory.create_window()
def paint(self):
self.button.paint()
self.checkbox.paint()
self.window.paint()
📌 In this example, GUIFactory is the Abstract Factory that declares methods for creating a
family of GUI elements (buttons, checkboxes, windows).
📌 Button , Checkbox , and Window are abstract products, and their concrete implementations
( WindowsButton , MacOSButton , etc.) define how these elements should look and behave on each
OS.
📌 The Application class, which represents the client in this scenario, uses the factory to create
and interact with GUI elements. Depending on which factory is provided (Windows or MacOS), the
application will render the appropriate GUI elements.
This design ensures that the application remains decoupled from the specific GUI elements of an
OS, making it easier to add support for new OSs in the future.
Let's break down how the provided GUI elements example adheres to the principles and
requirements of the Abstract Factory design pattern:
📌 Interface for Creating Families of Objects: - The GUIFactory class serves as the Abstract
Factory. It declares an interface ( create_button , create_checkbox , and create_window ) for
creating a family of GUI elements. These families are the GUI components tailored for different
operating systems.
# Abstract Factory
class GUIFactory(ABC):
@abstractmethod
def create_button(self):
pass
@abstractmethod
def create_checkbox(self):
pass
@abstractmethod
def create_window(self):
pass
# Concrete Factory 1
class WindowsFactory(GUIFactory):
    def create_button(self):
        return WindowsButton()
    def create_checkbox(self):
        return WindowsCheckbox()
    def create_window(self):
        return WindowsWindow()

# Concrete Factory 2
class MacOSFactory(GUIFactory):
    def create_button(self):
        return MacOSButton()
    def create_checkbox(self):
        return MacOSCheckbox()
    def create_window(self):
        return MacOSWindow()
📌 Products Share a Common Interface: - The Button , Checkbox , and Window abstract classes
define common interfaces for all GUI elements of their type. Concrete classes like WindowsButton ,
MacOSButton , WindowsCheckbox , MacOSCheckbox , etc., implement these interfaces. This ensures
that the GUI elements are interchangeable within their respective families, and the application can
use them without knowing their concrete implementations.
# Concrete Product A1
class WindowsButton(Button):
def paint(self):
print("Rendering a button in Windows style.")
# Concrete Product A2
class MacOSButton(Button):
def paint(self):
print("Rendering a button in MacOS style.")
📌 Client Code Isolated from Concrete Products: - The Application class (acting as the client)
interacts only with the abstract factory ( GUIFactory ) and the product interfaces ( Button ,
Checkbox , Window ). It doesn't instantiate concrete classes directly, ensuring a decoupling from
the specific GUI elements of an OS.
📌 Ease of Extensibility: - To introduce a new OS or a new type of GUI element, you'd create new
concrete factories and products that implement the existing interfaces. This design ensures that
the system remains extensible without modifying existing code. For instance, adding support for a
Linux GUI would involve creating a LinuxGUIFactory and associated concrete products like
LinuxButton .
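That Linux extension might look like the sketch below. The abstract classes are repeated (in trimmed form) so the snippet runs on its own, and only the button product is shown; LinuxCheckbox and LinuxWindow are assumed to follow the same pattern:

```python
from abc import ABC, abstractmethod

class Button(ABC):
    @abstractmethod
    def paint(self): pass

class GUIFactory(ABC):
    @abstractmethod
    def create_button(self): pass

# New concrete product: no existing class is modified.
class LinuxButton(Button):
    def paint(self):
        print("Rendering a button in Linux style.")

# New concrete factory: plugs into the same GUIFactory interface.
class LinuxGUIFactory(GUIFactory):
    def create_button(self):
        return LinuxButton()

LinuxGUIFactory().create_button().paint()  # Rendering a button in Linux style.
```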
📌 Separation of Concerns: - The object creation logic is separated from the business logic. The
Application class doesn't need to know how to instantiate GUI elements; it delegates this
responsibility to the abstract factory. This separation makes the codebase easier to manage, test,
and extend.
📌 Open/Closed Principle: - The design is open for extension but closed for modification. New
types of GUI elements or new OS support can be added without altering the existing factories or
product interfaces. This ensures that the system remains adaptable to future requirements
without necessitating changes to established code.
In summary, the provided GUI elements example demonstrates a well-structured use of the
Abstract Factory pattern. It ensures that the application remains decoupled from specific GUI
implementations, promotes consistency across GUI components, and maintains a high level of
modularity and extensibility.
🐍🚀 The builder design pattern in Python 🐍🚀
It is useful for managing objects that consist of multiple parts that need to be built step by step. By decoupling the construction of an object from its representation, the builder pattern allows us to reuse the same construction process multiple times.
Imagine that we want to create an object that is composed of multiple parts and the composition
needs to be done step by step. The object is not complete unless all its parts are fully created.
That's where the builder-design pattern can help us. The builder pattern separates the
construction of a complex object from its representation. By keeping the construction separate
from the representation, the same construction can be used to create several different
representations.
📌 When might we require this design pattern? Envision a scenario where object generation
involves a series of steps and consists of nested components with various data types. In such
contexts, the builder design pattern proves invaluable, allowing us to navigate this intricate task
efficiently.
📌 Many design patterns are aptly named from a linguistic perspective, and the builder design
pattern stands as a testament to this. The term "build" is noteworthy. The emphasis is on the
"building" aspect rather than merely "creating." The primary focus of this pattern revolves around
the object's creation process.
📌 Visualizing the builder design pattern can be likened to an assembly line. In this analogy, the
focus is on the assembly rather than the components. The assembly orchestrates the culmination
of the end product, irrespective of the specific parts utilized. Depending on design configurations,
varying outcomes can be produced from the same line, highlighting the importance of effective
abstraction.
📌 We're spared the task of recalling all property names, structures, data types, and routes during
object instantiation. The Builder pattern abstracts this procedure. As a result, we can sidestep the
intricate specifics of any part of a multifaceted object.
📌 The Builder pattern may bear a resemblance to factory patterns, yet they diverge. The Builder
oversees the object's creation journey. In contrast, Factory or Abstract Factory design patterns
assume the role of object generation. These patterns might, for instance, employ the Builder
Pattern to steer the creation process.
Use Cases and Explanations:
📌 Decoupling Construction from Representation: In many real-world scenarios, the process of
constructing an object is distinct from the object's representation. For instance, consider the
process of building a house. The steps to build the house (laying the foundation, erecting walls,
installing the roof) are the same, but the final representation (design, color, interior) can vary. The
builder pattern allows us to encapsulate these construction steps and use them to create various
representations.
📌 Fluent Interface: The builder pattern often provides a fluent interface, where methods return
the builder object itself, allowing for method chaining. This makes the client code more readable
and intuitive.
📌 Immutable Objects: Once the object is built, it can be made immutable, ensuring that its state
cannot be changed. This is particularly useful in multi-threaded environments where immutability
can prevent potential synchronization issues.
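One way to get that immutability in Python is to have the builder emit a frozen dataclass. The Config / ConfigBuilder names below are made up for this sketch:

```python
from dataclasses import dataclass

# The product is frozen: once built, its fields cannot be reassigned.
@dataclass(frozen=True)
class Config:
    host: str
    port: int

class ConfigBuilder:
    def __init__(self):
        self._host = "localhost"   # mutable defaults live only in the builder
        self._port = 8080
    def host(self, value):
        self._host = value
        return self
    def port(self, value):
        self._port = value
        return self
    def build(self):
        return Config(self._host, self._port)

cfg = ConfigBuilder().host("example.com").port(443).build()
print(cfg)          # Config(host='example.com', port=443)
# cfg.port = 80     # would raise dataclasses.FrozenInstanceError
```

All mutation happens inside the builder; the finished object can be shared freely across threads.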
class Car:
    def __init__(self):
        self._parts = []
    def add(self, part):
        self._parts.append(part)
    def list_parts(self):
        return ", ".join(self._parts)

class CarBuilder:
    def __init__(self):
        self._car = Car()
    def add_engine(self):
        self._car.add("Engine")
        return self
    def add_wheels(self):
        self._car.add("Wheels")
        return self
    def add_doors(self):
        self._car.add("Doors")
        return self
    def build(self):
        return self._car

# Client code
builder = CarBuilder()
car = builder.add_engine().add_wheels().add_doors().build()
print(car.list_parts())
# Engine, Wheels, Doors
Description:
📌 In the above code, the Car class represents the product we want to build. It has a method
add to add parts and a method list_parts to list all the added parts.
📌 The CarBuilder class is our builder. It provides methods to add different parts to the car
( add_engine , add_wheels , add_doors ). Each of these methods returns the builder object itself,
allowing for method chaining.
📌 In the client code, we create an instance of the CarBuilder , sequentially add parts using the
fluent interface, and finally call the build method to get the constructed Car object.
📌 The advantage here is that the construction process of the car is abstracted away from its
representation. We can easily change the way a car is built without affecting the car's
representation or the client code.
In essence, the builder pattern provides a clear separation of concerns, making the code modular
and maintainable. It's particularly useful when an object needs to be created with many optional
components or configurations.
class Computer:
    def __init__(self, CPU, RAM, storage, graphics_card, power_supply, motherboard):
        self.CPU = CPU
        self.RAM = RAM
        self.storage = storage
        self.graphics_card = graphics_card
        self.power_supply = power_supply
        self.motherboard = motherboard
    def display(self):
        return (f"Computer with {self.CPU} CPU, {self.RAM} RAM, "
                f"{self.storage} storage, {self.graphics_card} graphics card, "
                f"{self.power_supply} power supply, and {self.motherboard} motherboard.")

computer = Computer("Intel i9", "32GB", "1TB SSD", "NVIDIA RTX 3090", "750W", "ASUS ROG")
print(computer.display())
1. The constructor of the Computer class is long and error-prone; it's easy to swap or forget arguments.
2. If we want to create a computer with only a few components and leave out others, this design doesn't allow for that flexibility.
3. The construction process is tightly coupled with the representation of the computer.
class Computer:
    def __init__(self):
        self.components = {}
    def set_component(self, key, value):
        self.components[key] = value
    def display(self):
        return ", ".join(f"{key} with {value}" for key, value in self.components.items())

class ComputerBuilder:
    def __init__(self):
        self.computer = Computer()
    def add_cpu(self, cpu):
        self.computer.set_component("CPU", cpu)
        return self
    def add_ram(self, ram):
        self.computer.set_component("RAM", ram)
        return self
    def add_storage(self, storage):
        self.computer.set_component("storage", storage)
        return self
    def build(self):
        return self.computer

computer = ComputerBuilder().add_cpu("Intel i9").add_ram("32GB").add_storage("1TB SSD").build()
print(computer.display())
1. The Computer class no longer needs a long constructor; each component is added through its own method.
2. The builder provides a clear and fluent interface to create a Computer object step by step.
3. It's flexible; we can choose which components to add and in which order.
By using the builder pattern, we've made the process of creating a Computer object more
intuitive, flexible, and less prone to errors.
Let's break down how the refactored code with the builder
design pattern addresses the issues of the original code.
📌 Issue 1: The constructor of the Computer class is too long and can be error-prone.
Solution: In the refactored code, the Computer class no longer has a long constructor.
Instead, each component has its own setter method. The ComputerBuilder class provides a
clear interface to add components to the Computer object. This way, there's no need to
remember the order of arguments, reducing the chances of errors.
📌 Issue 2: If we want to create a computer with only a few components and leave out
others, the original design doesn't allow for that flexibility.
Solution: With the builder pattern, we have the flexibility to add only the components we
want. If we decide not to add a certain component, we simply don't call its corresponding
method in the builder. For instance, if we don't want to add a graphics card, we can omit the
.add_graphics_card() method when constructing the computer. This provides a more
flexible approach to object creation.
📌 Issue 3: The construction process is tightly coupled with the representation of the
computer.
Solution: The builder pattern decouples the construction process from the representation. In
the refactored code, the Computer class is only responsible for representing the computer
and storing its components. The construction process is handled by the ComputerBuilder
class. This separation of concerns makes the code more modular and easier to maintain.
3. Clear Separation of Responsibilities: The builder pattern ensures that each class has a
single responsibility. The Computer class is only responsible for representing a computer,
while the ComputerBuilder class handles the construction process. This separation makes
the code more organized and adheres to the single responsibility principle.
In summary, by implementing the builder design pattern, we've addressed the issues of the
original code, making it more flexible, maintainable, and less error-prone. The builder pattern
provides a clear and intuitive interface for object construction, ensuring that the code is scalable
and adheres to good software design principles.
📌 Understanding the Requirement: Suppose you have two classes, ClassA and ClassB . You
want to merge attributes and methods from both classes into a new class, MergedClass .
📌 Designing the Builder: The builder should be able to: 1. Accept instances of ClassA and
ClassB . 2. Extract attributes and methods from these instances. 3. Construct a new MergedClass
with the combined attributes and methods.
📌 Implementation:
class ClassA:
def __init__(self, attr_a):
self.attr_a = attr_a
def method_a(self):
return f"Method A called with attribute {self.attr_a}"
class ClassB:
def __init__(self, attr_b):
self.attr_b = attr_b
def method_b(self):
return f"Method B called with attribute {self.attr_b}"
class MergedClassBuilder:
    def __init__(self):
        self.attrs = {}
        self.methods = {}
    def _collect(self, instance):
        # Gather instance attributes and public bound methods.
        self.attrs.update(vars(instance))
        for name in dir(instance):
            if not name.startswith("_") and callable(getattr(instance, name)):
                self.methods[name] = getattr(instance, name)
    def add_class_a(self, instance):
        self._collect(instance)
        return self
    def add_class_b(self, instance):
        self._collect(instance)
        return self
    def build(self):
        merged = type("MergedClass", (), {})
        for attr, value in self.attrs.items():
            setattr(merged, attr, value)
        for method_name, method in self.methods.items():
            setattr(merged, method_name, method)
        return merged

# Usage:
instance_a = ClassA("Attribute A")
instance_b = ClassB("Attribute B")
builder = MergedClassBuilder()
MergedClass = builder.add_class_a(instance_a).add_class_b(instance_b).build()
merged_instance = MergedClass()
print(merged_instance.method_a())
print(merged_instance.method_b())
1. ClassA and ClassB are independent classes, each with its own attribute and method.
2. The builder's add_class_a and add_class_b methods collect the attributes and bound methods of the instances passed to them.
3. The build method of the builder creates a new class, MergedClass , with combined attributes and methods from the added instances.
4. In the usage section, instances of ClassA and ClassB are created and added to the builder. The builder then creates the MergedClass , and we instantiate and use it.
This approach allows for dynamic merging of classes using the builder pattern. It's a flexible way
to combine attributes and methods from multiple classes into a single class without modifying the
original classes.
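Since the build method leans on the three-argument form of the built-in type() , here is a minimal standalone illustration of that call (the Point name is made up):

```python
# type(name, bases, namespace) creates a new class dynamically,
# exactly as an equivalent class statement would.
Point = type("Point", (), {
    "x": 0,
    "y": 0,
    "describe": lambda self: f"({self.x}, {self.y})",
})

p = Point()
p.x, p.y = 3, 4
print(p.describe())  # (3, 4)
```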
Non-Builder Approach:
The direct initialization of such profiles can be cumbersome given the plethora of attributes an
employee might have.
class EmployeeProfile:
def __init__(self, name, age, address, phone, email, position, department,
salary, manager, hire_date, previous_jobs):
self.name = name
self.age = age
self.address = address
self.phone = phone
self.email = email
self.position = position
self.department = department
self.salary = salary
self.manager = manager
self.hire_date = hire_date
self.previous_jobs = previous_jobs
def display(self):
# Logic to display profile details.
pass
# Usage:
john_profile = EmployeeProfile("John Doe", 30, "123 St, City", "123-456-7890",
"john@example.com", "Engineer", "IT", 60000, "Jane Smith", "2022-01-01",
["Developer", "Intern"])
📌 Issues:
1. Lengthy Constructor: As evident, the constructor is lengthy and hard to manage.
2. Inflexibility: Not all attributes might be available at once when creating a profile.
3. Error-Prone Initialization: Arguments of the same type are easy to swap or misplace.
4. Difficult to Extend: Adding more attributes to the profile would mean changing the
constructor and all instantiations.
class EmployeeProfileBuilder:
    def __init__(self):
        self.profile_data = {}
    def set_name(self, name):
        self.profile_data["name"] = name
        return self
    def set_age(self, age):
        self.profile_data["age"] = age
        return self
    def set_address(self, address):
        self.profile_data["address"] = address
        return self
    # ... similar setters for phone, email, position, etc. ...
    def build(self):
        return FinalEmployeeProfile(self.profile_data)

class FinalEmployeeProfile:
    def __init__(self, profile_data):
        self.profile_data = profile_data
    def display(self):
        # Logic to display profile details.
        pass

# Usage:
builder = EmployeeProfileBuilder()
john_profile = (builder.set_name("John Doe")
                .set_age(30)
                .set_address("123 St, City")
                # ... set other attributes ...
                .build())
1. No Lengthy Constructor: Each attribute is set through its own clearly named method instead of one long argument list.
2. Flexibility: Can set only the attributes that are available at the time of profile creation.
3. Reduced Errors: The self-documenting nature of named methods minimizes the chance of
errors.
4. Easy Extension: If a new attribute is to be added to the profile in the future, just add a new
setter method to the builder without affecting existing client code.
By leveraging the Builder pattern in this context, we've streamlined the process of constructing an
employee profile, making the code more manageable, readable, and scalable.
1. Lengthy Constructor:
📌 Original Issue: In the initial approach, the EmployeeProfile class had a constructor with a
significant number of parameters. With such a setup, it becomes cumbersome to remember the
order of parameters, and it can also lead to a situation where some of them might be mistakenly
swapped or missed.
📌 Solution with Builder Pattern: The builder pattern breaks down the construction process by
providing individual methods for each attribute ( set_name , set_age , set_address , etc.). This
provides a clear structure for setting up the employee profile without having to remember or deal
with a long list of constructor arguments.
2. Inflexible Initialization:
📌 Original Issue: Not all attributes might be available at once when a profile is created, yet the original constructor requires every one of them up front.
📌 Solution with Builder Pattern: With the builder, one can choose which attributes to set. This is handy when not all details about an employee are available immediately. The profile can be built incrementally, setting only the available attributes, leading to a more flexible initialization process.
3. Error-Prone Initialization:
📌 Original Issue: Due to the many parameters in the original constructor, there's a heightened
risk of errors. It's easy to misplace arguments or mistakenly swap their order, which can lead to
incorrect object initialization.
📌 Solution with Builder Pattern: The use of descriptive method names for setting attributes
ensures that the developer knows exactly which attribute is being set at each step. This
significantly reduces the risk of accidentally setting incorrect values.
4. Scalability and Maintenance:
📌 Original Issue: If the organization decided to store additional details about an employee, it
would require changes to the constructor and all places where the EmployeeProfile was
instantiated.
📌 Solution with Builder Pattern: Introducing new attributes becomes straightforward with the
builder pattern. One can simply add a new setter method in the EmployeeProfileBuilder class.
Existing code where profiles are created remains unaffected, ensuring seamless integration of
new features without disturbing existing functionalities.
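As a sketch of that extension point, suppose the organization later wants to track a (hypothetical) skills attribute. Only the builder gains a method; the builder below is trimmed to a dict-returning version so it runs on its own:

```python
class EmployeeProfileBuilder:
    def __init__(self):
        self.profile_data = {}
    def set_name(self, name):
        self.profile_data["name"] = name
        return self
    def set_skills(self, skills):  # the newly added setter; nothing else changes
        self.profile_data["skills"] = skills
        return self
    def build(self):
        return self.profile_data

profile = EmployeeProfileBuilder().set_name("John Doe").set_skills(["Python"]).build()
print(profile)  # {'name': 'John Doe', 'skills': ['Python']}
```

Existing call sites that never mention skills keep working unchanged.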
5. Readability:
📌 Original Issue: With a long list of parameters, it becomes hard to discern what each
parameter signifies, especially if they're of the same data type.
📌 Solution with Builder Pattern: The refactored approach, with its fluent interface, provides a
step-by-step, readable structure for creating an object. The sequence of method calls, due to their
descriptive names, provides clarity and can be read almost like a series of straightforward
instructions.
In conclusion, the Builder design pattern brings a plethora of advantages when constructing
objects with numerous attributes or complex initialization steps. By providing a clear separation
between the construction process and the final representation, it ensures flexibility, robustness,
and maintainability.
📌 The Chain of Responsibility pattern is a behavioral design pattern that allows an object to pass
a request through a chain of potential handlers until an object handles it or the end of the chain is
reached. It decouples the sender from the receiver by letting more than one object handle a
request.
📌 Use Cases: - Event handling systems where events can be handled by multiple handlers, and
handlers have a priority or a logic to decide if they should handle the event or pass it on. -
Middleware in web frameworks where each middleware processes a request and then passes it to
the next middleware in the chain. - Input validation systems where various validation checks are
applied one after the other.
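The middleware use case can be sketched like this; StripWhitespace and RejectEmpty are invented names for the sketch, not taken from any real framework:

```python
class Middleware:
    def __init__(self, nxt=None):
        self.nxt = nxt
    def handle(self, request):
        # Default: forward the request to the next middleware, if any.
        if self.nxt:
            return self.nxt.handle(request)
        return request

class StripWhitespace(Middleware):
    def handle(self, request):
        request["body"] = request["body"].strip()
        return super().handle(request)

class RejectEmpty(Middleware):
    def handle(self, request):
        if not request["body"]:
            return None  # short-circuit: stop the chain here
        return super().handle(request)

chain = StripWhitespace(RejectEmpty())
print(chain.handle({"body": "  hello  "}))  # {'body': 'hello'}
print(chain.handle({"body": "   "}))        # None
```

Each middleware either processes the request and passes it on, or stops the chain, exactly as described above.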
📌 The primary advantage of this pattern is that it reduces the coupling between the sender of a
request and its receivers. It also allows for dynamic addition or removal of responsibilities from
objects.
class FileProcessor:
def process(self, file_type, content):
if file_type == "text":
print(f"Processing text file with content: {content}")
elif file_type == "image":
print(f"Processing image file with content: {content}")
elif file_type == "audio":
print(f"Processing audio file with content: {content}")
else:
print(f"File type {file_type} not supported")
# Usage
processor = FileProcessor()
processor.process("text", "Hello World!")
processor.process("video", "Video Content")
1. The FileProcessor class handles every file type itself, so it keeps growing and becomes hard to maintain.
2. If we need to add support for a new file type, we have to modify the FileProcessor class, violating the Open/Closed Principle.
3. If there's a specific order in which files need to be processed, it's hard to manage with the current structure.
class Handler:
    def __init__(self, next_handler=None):
        self.next_handler = next_handler
    def handle(self, file_type, content):
        # Default: pass the request along, or report it as unsupported.
        if self.next_handler:
            self.next_handler.handle(file_type, content)
        else:
            print(f"File type {file_type} not supported")

class TextFileHandler(Handler):
    def handle(self, file_type, content):
        if file_type == "text":
            print(f"Processing text file with content: {content}")
        else:
            super().handle(file_type, content)

class ImageFileHandler(Handler):
    def handle(self, file_type, content):
        if file_type == "image":
            print(f"Processing image file with content: {content}")
        else:
            super().handle(file_type, content)

class AudioFileHandler(Handler):
    def handle(self, file_type, content):
        if file_type == "audio":
            print(f"Processing audio file with content: {content}")
        else:
            super().handle(file_type, content)

# Usage
text_handler = TextFileHandler(ImageFileHandler(AudioFileHandler()))
text_handler.handle("text", "Hello World!")
text_handler.handle("video", "Video Content")
📌 Conclusion:
The Chain of Responsibility pattern allows us to decouple the sender from the receiver and
provides a way to pass a request through a set of handlers. It promotes single responsibility and
open/closed principles, making the system more flexible and maintainable.
Let's delve into the details of how the refactored code, which
implements the Chain of Responsibility design pattern,
addresses the issues present in the original code.
📌 Issue 1: Lack of Modularity and Maintainability
Original Code: In the initial code, the FileProcessor class was responsible for handling all
file types. This means that every time a new file type needs to be added or an existing one
needs to be modified, you'd have to change the FileProcessor class. This makes the class
less modular and harder to maintain.
Refactored Code: With the Chain of Responsibility pattern, each file type has its own
dedicated handler ( TextFileHandler , ImageFileHandler , AudioFileHandler ). This
separation ensures that each handler class has a single responsibility, making the system
more modular. If there's a need to modify how a particular file type is processed, only the
corresponding handler needs to be touched.
📌 Issue 2: Violation of the Open/Closed Principle
Original Code: Adding support for a new file type meant modifying the FileProcessor class itself.
Refactored Code: With the Chain of Responsibility pattern, adding support for a new file type
doesn't require modifying existing handlers. Instead, you'd create a new handler for the new
file type and simply link it into the existing chain. This ensures that the system is extendable
without needing to change existing code, adhering to the Open/Closed Principle.
📌 Issue 3: Hard to Manage Processing Order
Original Code: With a single monolithic if/elif chain, there was no clean way to control the order in which file types were checked.
Refactored Code: With the Chain of Responsibility pattern, managing the order of processing
becomes straightforward. The order is determined by how the chain of handlers is
constructed. If you need to change the order, you can easily rearrange the chain without
touching the internal logic of individual handlers.
📌 Additional Benefits:
Flexibility: The Chain of Responsibility pattern provides flexibility in distributing
responsibilities among handler objects. If, in the future, a certain handler needs to perform
additional checks or operations before deciding whether to handle a request or pass it on, it
can be done without affecting other handlers.
Decoupling: The sender of a request (in this case, the client code calling the handle method)
is decoupled from its receivers (the chain of handlers). The client only interacts with the first
handler in the chain and doesn't need to know about the internal structure of the chain.
In conclusion, the Chain of Responsibility design pattern provides a robust solution to the issues
present in the original code, making the system more modular, maintainable, and in line with solid
software design principles.
class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        # Default behavior: forward the request to the successor, if any.
        # (This base method is what makes super().handle() work below.)
        if self.successor:
            self.successor.handle(request)

class AuthenticationHandler(Handler):
    def handle(self, request):
        if request.get("token") == "VALID_TOKEN":
            print("Authentication successful!")
            super().handle(request)
        else:
            print("Authentication failed!")

class AuthorizationHandler(Handler):
    def handle(self, request):
        if request.get("role") == "ADMIN":
            print("Authorization successful!")
            super().handle(request)
        else:
            print("Authorization failed!")

class LoggingHandler(Handler):
    def handle(self, request):
        print(f"Logging request from user: {request.get('user_id')}")
        super().handle(request)

# Client code
request = {
    "user_id": 123,
    "token": "VALID_TOKEN",
    "role": "ADMIN"
}

chain = LoggingHandler(AuthorizationHandler(AuthenticationHandler()))
chain.handle(request)
In the client code, a request is created with a user ID, token, and role. The request is then
passed through a chain of handlers: Logging -> Authorization -> Authentication.
If the request passes through all the handlers without any issues, it means it's been
successfully processed. If any handler can't process the request, it won't pass it to its
successor.
📌 Under the hood, this pattern promotes the Single Responsibility Principle. Each handler has a
single responsibility, and it either handles the request or passes it to the next handler. This makes
the system extensible and easy to maintain. If a new check needs to be added, a new handler can
be introduced without modifying the existing code.
📌 The Chain of Responsibility pattern is a behavioral design pattern that allows an object to pass
a request through a chain of potential handlers until an object handles it or the end of the chain is
reached. It decouples the sender from the receiver by letting more than one object handle a
request.
📌 Use Cases:
- Event handling systems where events can be handled by multiple handlers, and handlers have a priority or a logic to decide if they should handle the event or pass it on.
- Middleware in web frameworks where each middleware processes a request and then passes it to the next middleware in the chain.
- Input validation systems where various validation checks are applied one after the other.
📌 The primary advantage of this pattern is that it reduces the coupling between the sender of a
request and its receivers. It also allows for dynamic addition or removal of responsibilities from
objects.
📌 Chain of Handlers: The code establishes a clear chain of handlers. The LoggingHandler
passes the request to AuthorizationHandler , which in turn passes it to
AuthenticationHandler . This chain is established during the instantiation of the handlers:

chain = LoggingHandler(AuthorizationHandler(AuthenticationHandler()))
📌 Single Responsibility Principle: Each handler in the chain has a specific responsibility:
- AuthenticationHandler is solely responsible for checking the validity of the token.
- AuthorizationHandler checks the role of the user.
- LoggingHandler logs the request.

This ensures that each handler is only concerned with a specific task, making the code modular
and easy to modify or extend.
class AuthenticationHandler(Handler):
    def handle(self, request):
        if request.get("token") == "VALID_TOKEN":
            print("Authentication successful!")
            super().handle(request)
        else:
            print("Authentication failed!")

class AuthorizationHandler(Handler):
    def handle(self, request):
        if request.get("role") == "ADMIN":
            print("Authorization successful!")
            super().handle(request)
        else:
            print("Authorization failed!")

class LoggingHandler(Handler):
    def handle(self, request):
        print(f"Logging request from user: {request.get('user_id')}")
        super().handle(request)
📌 Stopping or Continuing the Chain: One of the key aspects of the Chain of Responsibility
pattern is the ability of any handler in the chain to stop further processing. In the provided code, if
the AuthenticationHandler finds an invalid token, it prints "Authentication failed!" and doesn't
call its successor. Similarly, if the AuthorizationHandler finds an unauthorized role, it won't pass
the request to its successor.
📌 Dynamic Chain Configuration: The chain's configuration is dynamic and can be easily
changed without altering the internal logic of the handlers. For instance, if you wanted to add a
new handler or change the order of the existing handlers, you could do so by simply reconfiguring
the chain during instantiation, without needing to modify the handler classes themselves.
📌 Extensibility: If a new type of handling or check is required, a new handler class can be
created without altering the existing handlers. This new handler can then be integrated into the
chain as needed. For example, if there's a need to add a handler that checks for the user's region,
a RegionCheckHandler can be created and added to the chain.
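As a sketch of that extension: the RegionCheckHandler below, its "region" field, and its allowed-region set are all hypothetical, and the base Handler simply forwards to its successor as in the earlier example. The point is that the new check links into a chain without touching any existing handler:

```python
class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        # Default: forward the request to the next handler, if any
        if self.successor:
            self.successor.handle(request)

class AuthenticationHandler(Handler):
    def handle(self, request):
        if request.get("token") == "VALID_TOKEN":
            print("Authentication successful!")
            super().handle(request)
        else:
            print("Authentication failed!")

class RegionCheckHandler(Handler):
    # Hypothetical new check: only requests from allowed regions continue
    def handle(self, request):
        if request.get("region") in {"EU", "US"}:
            print("Region check passed!")
            super().handle(request)
        else:
            print("Region check failed!")

# Integrating the new check is purely a matter of chain construction
chain = RegionCheckHandler(AuthenticationHandler())
chain.handle({"token": "VALID_TOKEN", "region": "EU"})
```

Note that the existing AuthenticationHandler is reused unchanged; only the line that builds the chain differs.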
In summary, the provided code example adheres to the Chain of Responsibility pattern by
ensuring decoupling between the sender and receivers, maintaining a clear chain of handlers,
adhering to the Single Responsibility Principle, providing the ability to stop or continue the chain
based on conditions, allowing for dynamic chain configuration, and ensuring extensibility.
Example - 2
Let's consider a more complex scenario: an e-commerce order processing system. When an order
is placed, it goes through various stages:
1. Validation: Check that the order has a shipping address and items.
2. Discount: Apply a discount if the customer is a loyalty member.
3. Stock Check: Verify that every ordered item is in stock.
4. Payment: Process the payment, ensuring the customer has enough credit.
5. Shipping: If all previous steps are successful, prepare the order for shipping.
Here's how we can implement this using the Chain of Responsibility pattern:
class Handler:
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, order):
        # Default behavior: forward the order to the successor, if any
        if self.successor:
            self.successor.handle(order)

class ValidationHandler(Handler):
    def handle(self, order):
        if order.get("address") and order.get("items"):
            print("Order validation successful!")
            super().handle(order)
        else:
            print("Order validation failed!")

class DiscountHandler(Handler):
    def handle(self, order):
        if order.get("loyalty_member"):
            order["total"] *= 0.9  # Apply 10% discount
            print("Discount applied!")
        # A discount is optional, so the chain continues either way
        super().handle(order)

class StockCheckHandler(Handler):
    def handle(self, order):
        items_in_stock = True
        for item in order.get("items", []):
            if item["stock"] <= 0:
                items_in_stock = False
                print(f"Item {item['name']} is out of stock!")
                break
        if items_in_stock:
            super().handle(order)

class PaymentHandler(Handler):
    def handle(self, order):
        if (order.get("payment_method") == "credit_card"
                and order.get("total") <= order.get("credit_limit")):
            print("Payment processed successfully!")
            super().handle(order)
        else:
            print("Payment processing failed!")

class ShippingHandler(Handler):
    def handle(self, order):
        print(f"Order for {order['address']} is ready for shipping!")

# Client code
order = {
    "address": "123 Main St",
    "items": [{"name": "laptop", "stock": 5}, {"name": "mouse", "stock": 10}],
    "loyalty_member": True,
    "payment_method": "credit_card",
    "total": 1000,
    "credit_limit": 1500
}

chain = ValidationHandler(DiscountHandler(StockCheckHandler(PaymentHandler(ShippingHandler()))))
chain.handle(order)
The ValidationHandler checks that the order has an address and items.
The DiscountHandler checks if the user is a loyalty member and applies a discount if they
are.
The StockCheckHandler verifies that every ordered item is in stock.
The PaymentHandler processes the payment, ensuring the user has enough credit.
The ShippingHandler prepares the order for shipping if all previous steps are successful.
Each handler in the chain has a specific responsibility and passes the order to the next handler if
its conditions are met. If any handler finds an issue (e.g., an item is out of stock), it won't pass the
order to its successor.
Let's see how the above code example adheres to the
principles and requirements of the Chain of Responsibility
design pattern in Python
📌 Decoupling of Sender and Receiver: The client that initiates the order processing doesn't
know which handler in the chain will process the order or in which sequence. The client only
interacts with the head of the chain ( ValidationHandler in this case) and remains decoupled
from the rest of the handlers. This is evident from the client code where the order is passed to the
chain without specifying individual handlers:
chain = ValidationHandler(DiscountHandler(StockCheckHandler(PaymentHandler(ShippingHandler()))))
chain.handle(order)
📌 Single Responsibility Principle: Each handler in the chain has a distinct responsibility:
- ValidationHandler : Validates order details.
- DiscountHandler : Applies eligible discounts.
- StockCheckHandler : Checks stock availability for items.
- PaymentHandler : Processes the payment.
- ShippingHandler : Prepares the order for shipping.

This modular approach ensures that each handler focuses on one specific task, making the system organized and maintainable.
📌 Stopping or Continuing the Chain: Handlers in the chain have the discretion to stop further
processing based on certain conditions. For instance, if ValidationHandler finds the order
details incomplete, it won't pass the order to its successor. Similarly, if StockCheckHandler
identifies an out-of-stock item, it won't proceed to the PaymentHandler . This behavior is evident
in sections like:
if item["stock"] <= 0:
    items_in_stock = False
    print(f"Item {item['name']} is out of stock!")
    break
📌 Dynamic Chain Configuration: The sequence and composition of the chain can be modified
without altering the internal logic of individual handlers. If a new processing step is needed (e.g., a
tax calculation handler), it can be added to the chain without modifying existing handler classes.
📌 Extensibility: The design allows for easy addition of new handlers. If there's a need to
introduce a new step in the order processing (e.g., a gift-wrapping handler), a new handler class
can be created and integrated into the chain seamlessly.
In conclusion, the e-commerce order processing system code adheres to the Chain of
Responsibility pattern by ensuring a clear sequence of handlers, maintaining the Single
Responsibility Principle, allowing handlers to decide whether to continue or stop the chain, and
offering flexibility in chain configuration and extensibility.
🐍🚀 Command Design Pattern in Python. 🐍🚀
Command Design Pattern is a behavioral design pattern that turns a request into a stand-alone
object that contains all information about the request. This transformation lets you pass requests
as method arguments, delay or queue a request's execution, and support undoable operations.
📌 The Command Design Pattern is a behavioral pattern that encapsulates a request as an object,
thereby allowing users to parameterize clients with different requests, queue requests, and
support operations like undo and redo. It decouples the sender from the receiver.
📌 Decoupling: The pattern decouples the object that invokes the command (often referred to as
the sender) from the object that knows how to execute the command (the receiver). This
separation provides flexibility in terms of the operations that can be performed without having to
change existing code.
📌 Use Cases:
Menu Systems: Imagine a GUI application with a menu. Each menu item is a command.
When you select a menu item, it executes a command. By using the Command Pattern, you
can easily add new menu items without changing existing code.
Undo/Redo: Text editors or graphic design software often have undo and redo
functionalities. Each action on the document can be a command. When you want to undo an
action, you simply call the undo method on the command.
Task Scheduling: In systems where tasks need to be scheduled, like cron jobs, each task can
be a command. The scheduler simply executes the command when the time comes.
Macro Recording: Some software allows users to record a series of actions as a macro. Each
action is a command. The macro simply consists of a list of commands that can be played
back in order.
class Light:
    def turn_on(self):
        print("Light is ON")

    def turn_off(self):
        print("Light is OFF")

class Fan:
    def start(self):
        print("Fan is STARTED")

    def stop(self):
        print("Fan is STOPPED")

class RemoteControl:
    def __init__(self):
        self._buttons = {}

    def set_command(self, button, device, action):
        self._buttons[button] = (device, action)

    def press_button(self, button):
        # The command logic is hard-coded here -- exactly the weakness
        # that the Command pattern refactoring below removes
        device, action = self._buttons[button]
        if action == "on":
            device.turn_on()
        elif action == "off":
            device.turn_off()
        elif action == "start":
            device.start()
        elif action == "stop":
            device.stop()

light = Light()
fan = Fan()
remote = RemoteControl()

remote.set_command(1, light, "on")
remote.set_command(2, light, "off")
remote.set_command(3, fan, "start")
remote.set_command(4, fan, "stop")

remote.press_button(1)
remote.press_button(3)
1. The RemoteControl class is tightly coupled to the devices ( Light , Fan ) and their
specific methods.
2. The logic for each command is embedded in the press_button method, making it less
modular and harder to extend.
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

    @abstractmethod
    def undo(self):
        pass

class LightOnCommand(Command):
    def __init__(self, light):
        self._light = light

    def execute(self):
        self._light.turn_on()

    def undo(self):
        self._light.turn_off()

class LightOffCommand(Command):
    def __init__(self, light):
        self._light = light

    def execute(self):
        self._light.turn_off()

    def undo(self):
        self._light.turn_on()

class FanStartCommand(Command):
    def __init__(self, fan):
        self._fan = fan

    def execute(self):
        self._fan.start()

    def undo(self):
        self._fan.stop()

class FanStopCommand(Command):
    def __init__(self, fan):
        self._fan = fan

    def execute(self):
        self._fan.stop()

    def undo(self):
        self._fan.start()

class RemoteControl:
    def __init__(self):
        self._buttons = {}
        self._undo_command = None

    def set_command(self, button, command):
        self._buttons[button] = command

    def press_button(self, button):
        command = self._buttons[button]
        command.execute()
        self._undo_command = command  # remember the last command for undo

    def press_undo(self):
        if self._undo_command:
            self._undo_command.undo()

light = Light()
fan = Fan()
remote = RemoteControl()

remote.set_command(1, LightOnCommand(light))
remote.set_command(2, LightOffCommand(light))
remote.set_command(3, FanStartCommand(fan))
remote.set_command(4, FanStopCommand(fan))

remote.press_button(1)
remote.press_button(3)
remote.press_undo()
1. The RemoteControl is decoupled from the devices; it only depends on the Command
interface.
2. Commands are modular and can be easily added or removed without changing the
RemoteControl class.
This refactoring using the Command Design Pattern makes the code more flexible, modular, and
easier to maintain.
Let's break down how the refactored code, which
implements the Command Design Pattern, addresses the
issues of the original code.
📌 Issue 1: Tightly Coupled RemoteControl and Devices
Original Code: In the initial code, the RemoteControl class had direct knowledge of the
devices ( Light , Fan ) and their methods ( turn_on , turn_off , start , stop ). This meant
that for every new device or command, the RemoteControl class would need modifications.
Refactored Code: With the Command Design Pattern, the RemoteControl class only
interacts with the Command interface. It doesn't need to know about specific devices or their
methods. The specific commands ( LightOnCommand , LightOffCommand , etc.) encapsulate the
device and its operation. This decouples the RemoteControl from the devices, making the
system more modular.
📌 Issue 2: Command Logic Embedded in press_button
Refactored Code: Each command now has its own class that implements the Command
interface. The logic for executing the command is encapsulated within the execute method
of these command classes. This makes the code more organized, and adding new commands
becomes as simple as creating a new class that implements the Command interface.
📌 Issue 3: No Support for Undo or Queuing
Refactored Code: With the Command Design Pattern, each command class can have an
undo method that defines how to revert the action. The RemoteControl class simply calls
this method to undo the last command. This makes implementing the undo feature
straightforward. Additionally, since commands are now stand-alone objects, queuing them (if
needed in the future) would be much simpler. You could easily store these command objects
in a list (queue) and execute them in order or delay their execution.
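That queuing idea can be sketched in a few lines. The CommandQueue and PrintCommand classes below are illustrative stand-ins, not part of the remote-control example: any object with an execute() method can be stored now and run later.

```python
from abc import ABC, abstractmethod

class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

class PrintCommand(Command):
    # Illustrative stand-in for LightOnCommand, FanStartCommand, etc.
    def __init__(self, message):
        self.message = message

    def execute(self):
        print(self.message)

class CommandQueue:
    """Hypothetical invoker that defers execution: commands are stored
    as plain objects and executed later, in FIFO order."""
    def __init__(self):
        self._pending = []

    def enqueue(self, command):
        self._pending.append(command)

    def run_all(self):
        while self._pending:
            self._pending.pop(0).execute()

queue = CommandQueue()
queue.enqueue(PrintCommand("Light is ON"))
queue.enqueue(PrintCommand("Fan is STARTED"))
queue.run_all()  # executes the stored commands in the order they were queued
```

Because the queue holds ordinary objects, the same structure works for batching, scheduling, or logging commands before execution.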
In summary, the Command Design Pattern offers a structured way to decouple the invoker
( RemoteControl ) from the receiver (devices like Light and Fan ). By encapsulating each request
as an object, the system becomes more flexible, allowing for easy addition of new commands,
undo operations, and potential queuing of commands. The refactored code is more maintainable,
scalable, and organized compared to the original implementation.
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

    @abstractmethod
    def undo(self):
        pass

# Concrete Commands
class LightOnCommand(Command):
    def __init__(self, light):
        self.light = light

    def execute(self):
        self.light.turn_on()

    def undo(self):
        self.light.turn_off()

class LightOffCommand(Command):
    def __init__(self, light):
        self.light = light

    def execute(self):
        self.light.turn_off()

    def undo(self):
        self.light.turn_on()

# Receiver
class Light:
    def turn_on(self):
        print("Light is ON")

    def turn_off(self):
        print("Light is OFF")

# Invoker
class RemoteControl:
    def __init__(self):
        self.command = None

    def set_command(self, command):
        self.command = command

    def press_button(self):
        self.command.execute()

    def press_undo(self):
        self.command.undo()

# Client Code
light = Light()
light_on = LightOnCommand(light)
light_off = LightOffCommand(light)

remote = RemoteControl()
remote.set_command(light_on)
remote.press_button()   # Light is ON

remote.set_command(light_off)
remote.press_button()   # Light is OFF
remote.press_undo()     # Light is ON
📌 Receiver: The Light class is the receiver. It's the object that performs the actual action. In this
case, it can turn a light on or off.
📌 Invoker: The RemoteControl class is the invoker. It's the object that triggers the command. It
doesn't know anything about the concrete command, only about the command interface.
📌 Client Code: This is where everything comes together. We create a light, commands to turn it
on and off, and a remote control. We then set commands on the remote and press its buttons to
execute or undo actions.
In essence, the Command Pattern allows us to encapsulate method invocations, decouple senders
from receivers, and offer additional functionalities like undo and redo. This pattern is immensely
powerful and is widely used in software design.
How the Smart Home Code Implements These Principles:
📌 Encapsulate a request as an object:
In the code, each command (like turning the light on or off) is encapsulated as an object.
Specifically, LightOnCommand and LightOffCommand are objects that encapsulate the "turn
on" and "turn off" requests respectively.
Each of these command objects has an execute() method that carries out the request and
an undo() method to reverse it.
The receiver is the Light class. It's the one that knows how to turn the light on or off.
This means you can easily add more commands (like commands for fans, thermostats, etc.)
and set them to the remote without changing the RemoteControl class.
The RemoteControl has a press_undo() method that calls the undo() method of the
currently set command.
1. Command Interface ( Command class): The blueprint that guarantees every command
exposes execute() and undo() methods.
2. Concrete Commands ( LightOnCommand , LightOffCommand ): Each encapsulates one
request on the receiver.
3. Receiver ( Light class): This is the actual object that performs the action. It has methods to
turn the light on and off.
4. Invoker ( RemoteControl class): This is the object that triggers the command. When the
remote's button is pressed using press_button() , it calls the execute() method of the
currently set command.
In essence, the Command Pattern in this code allows the actions (like turning the light on or off) to
be represented as objects. These objects can be passed around, stored, and executed as needed,
providing a flexible and decoupled system.
📌 abstractmethod: This specific decorator, when applied to a method within a class, designates
that method as being abstract. An abstract method is a method that is declared but does not have
an implementation within the class it's declared in.
📌 Must be overridden: If a class has an abstract method, it means that any subclass (or derived
class) that is intended to be instantiated (i.e., you want to create objects of that subclass) must
provide an implementation for this abstract method. If it doesn't, Python will raise a TypeError
when you try to create an instance of that subclass.
📌 Non-abstract derived class: A derived class (or subclass) that provides implementations for
all the abstract methods of its base class is termed as non-abstract. If a derived class does not
provide implementations for all the abstract methods, it remains abstract, and you can't create
instances of it.
from abc import ABC, abstractmethod

class AbstractClass(ABC):
    @abstractmethod
    def abstract_method(self):
        pass

class DerivedClass(AbstractClass):
    # Notice we are not providing an implementation for abstract_method,
    # so DerivedClass is itself still abstract and cannot be instantiated
    pass

class AnotherDerivedClass(AbstractClass):
    def abstract_method(self):
        print("Implemented abstract_method in AnotherDerivedClass")
If you try:
obj = AnotherDerivedClass()
obj.abstract_method()  # This will print: "Implemented abstract_method in AnotherDerivedClass"
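The "must be overridden" rule can also be checked directly. In this self-contained demo (the class name StillAbstract is a throwaway, chosen for this sketch), Python rejects instantiation of a subclass that leaves the abstract method unimplemented:

```python
from abc import ABC, abstractmethod

class AbstractClass(ABC):
    @abstractmethod
    def abstract_method(self):
        pass

class StillAbstract(AbstractClass):
    pass  # abstract_method is NOT overridden, so the class stays abstract

try:
    StillAbstract()  # instantiation is rejected at this point
except TypeError as exc:
    print(f"TypeError: {exc}")
```

The TypeError is raised at instantiation time, not at class-definition time, which is why defining StillAbstract itself succeeds.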
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class PlayCommand(Command):
    def __init__(self, player):
        self.player = player

    def execute(self):
        self.player.play()

class PauseCommand(Command):
    def __init__(self, player):
        self.player = player

    def execute(self):
        self.player.pause()

class StopCommand(Command):
    def __init__(self, player):
        self.player = player

    def execute(self):
        self.player.stop()

class RewindCommand(Command):
    def __init__(self, player):
        self.player = player

    def execute(self):
        self.player.rewind()

# Receiver
class MediaPlayer:
    def play(self):
        print("Playing the track.")

    def pause(self):
        print("Paused the track.")

    def stop(self):
        print("Stopped the track.")

    def rewind(self):
        print("Rewinded the track to the beginning.")

# Invoker
class Menu:
    def __init__(self):
        self.commands = {}

    def set_command(self, menu_item, command):
        self.commands[menu_item] = command

    def select(self, menu_item):
        # Simulates the user selecting a menu item
        self.commands[menu_item].execute()

# Client Code
player = MediaPlayer()
menu = Menu()

menu.set_command("play", PlayCommand(player))
menu.set_command("pause", PauseCommand(player))
menu.set_command("stop", StopCommand(player))
menu.set_command("rewind", RewindCommand(player))

# Simulate the user choosing menu items
menu.select("play")
menu.select("stop")
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects. It
ensures that all commands will have an execute() method.
📌 Receiver ( MediaPlayer class): This is the actual object that performs the action. It has
methods to play, pause, stop, and rewind tracks.
📌 Invoker ( Menu class): This represents the menu system. It has:
- A dictionary ( commands ) to store menu items and their associated commands.
- A set_command() method to add menu items and their commands.
- A select() method that simulates a user selecting a menu item. It calls the execute() method of the associated command.

📌 Client Code: This is where everything is tied together.
- A MediaPlayer object is created.
- A Menu object is created.
- Commands are created and associated with menu items using set_command() .
- The user selects menu items using the select() method, which in turn calls the appropriate action on the media player.
This design allows for easy addition of new menu items and their associated actions without
changing the existing code. For instance, if you wanted to add a "fast forward" feature, you'd
simply create a new command for it and add it to the menu.
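As a sketch of that extension, assuming the receiver gains a fast_forward() method: neither FastForwardCommand nor MediaPlayer.fast_forward() appears in the example above, both are hypothetical here.

```python
from abc import ABC, abstractmethod

# Same command interface shape as the menu example
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Hypothetical receiver method, assumed for this sketch
class MediaPlayer:
    def fast_forward(self):
        print("Fast-forwarded the track.")

# The new feature is just one more Command subclass
class FastForwardCommand(Command):
    def __init__(self, player):
        self.player = player

    def execute(self):
        self.player.fast_forward()

player = MediaPlayer()
command = FastForwardCommand(player)
command.execute()
```

No existing command or menu code has to change; the new command would be registered with set_command() like any other.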
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

    @abstractmethod
    def undo(self):
        pass

# Concrete Commands
class AddTextCommand(Command):
    def __init__(self, editor, text):
        self.editor = editor
        self.text = text
        self.prev_text = ""

    def execute(self):
        self.prev_text = self.editor.content
        self.editor.content += self.text

    def undo(self):
        self.editor.content = self.prev_text

class DeleteTextCommand(Command):
    def __init__(self, editor, length):
        self.editor = editor
        self.length = length
        self.prev_text = ""

    def execute(self):
        self.prev_text = self.editor.content
        self.editor.content = self.editor.content[:-self.length]

    def undo(self):
        self.editor.content = self.prev_text

# Receiver
class TextEditor:
    def __init__(self):
        self.content = ""

# Invoker & History
class CommandInvoker:
    def __init__(self):
        self.history = []
        self.redo_stack = []

    def execute(self, command):
        command.execute()
        self.history.append(command)
        self.redo_stack.clear()  # a new action invalidates the redo stack

    def undo(self):
        if not self.history:
            return
        command = self.history.pop()
        command.undo()
        self.redo_stack.append(command)

    def redo(self):
        if not self.redo_stack:
            return
        command = self.redo_stack.pop()
        command.execute()
        self.history.append(command)

# Client Code
editor = TextEditor()
invoker = CommandInvoker()

invoker.execute(AddTextCommand(editor, "Hello"))
invoker.execute(AddTextCommand(editor, " World"))
print(editor.content)   # Hello World

invoker.undo()
print(editor.content)   # Hello

invoker.redo()
print(editor.content)   # Hello World
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects. It
ensures that all commands will have execute() and undo() methods.
📌 Receiver ( TextEditor class): This represents the text editor. It has a content attribute that
stores the current text.
📌 Invoker & History ( CommandInvoker class): This class manages the execution of commands and maintains a history for undo and redo operations. It has:
- A history list to store executed commands.
- A redo_stack list to store commands that can be redone.
- An execute() method to execute a command and add it to the history.
- An undo() method to undo the last command.
- A redo() method to redo the last undone command.

📌 Client Code: This is where everything is tied together.
- A TextEditor object is created.
- A CommandInvoker object is created.
- Commands are created and executed using the invoker.
- The undo and redo functionalities are demonstrated.
This design allows for easy tracking of changes in the text editor. Each action (like adding or
deleting text) is encapsulated as a command, and the invoker maintains a history of these
commands. The undo and redo operations simply navigate this history, executing or undoing
commands as needed.
Example 4 - Real-life use case of the Command Design Pattern in Python: Task Scheduling
Let's design a simple task scheduler using the Command Design Pattern. This scheduler will allow
tasks to be scheduled for execution after a certain delay.
from abc import ABC, abstractmethod
from threading import Timer

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class PrintMessageCommand(Command):
    def __init__(self, message):
        self.message = message

    def execute(self):
        print(self.message)

class BackupDatabaseCommand(Command):
    def execute(self):
        # Simulating database backup
        print("Database backed up successfully!")

# Scheduler (Invoker)
class TaskScheduler:
    def __init__(self):
        self.tasks = []

    def schedule(self, delay, command):
        # threading.Timer calls the command's execute() after `delay` seconds
        timer = Timer(delay, command.execute)
        self.tasks.append(timer)
        timer.start()
        return timer

    def cancel(self, timer):
        # Cancel a scheduled task before its timer fires
        timer.cancel()
        self.tasks.remove(timer)

# Client Code
scheduler = TaskScheduler()

# Schedule a database backup after 10 seconds
cmd2 = BackupDatabaseCommand()
scheduler.schedule(10, cmd2)
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects. It
ensures that all commands will have an execute() method.
📌 Scheduler (Invoker) ( TaskScheduler class): This class manages the scheduling and execution of tasks. It has:
- A tasks list to store scheduled tasks.
- A schedule() method to schedule a command for execution after a certain delay.
- A cancel() method to cancel a scheduled task.

📌 Client Code: This is where everything is tied together.
- A TaskScheduler object is created.
- Commands are created and scheduled for execution using the scheduler.
The Timer class from the threading module is used to simulate the scheduling. When a
command is scheduled, a new timer is created with the specified delay and the command's
execute() method as the callback. When the timer expires, the command is executed.
This design allows for easy scheduling and execution of tasks. Each task is encapsulated as a
command, and the scheduler manages the execution of these commands based on the specified
delays.
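The cancel() path can be exercised with a minimal self-contained sketch. The schedule()/cancel() signatures below are one plausible reading of the scheduler described above (they are not guaranteed to match the book's original), built directly on threading.Timer:

```python
from threading import Timer

# Minimal command with the same execute() shape as above
class PrintMessageCommand:
    def __init__(self, message):
        self.message = message

    def execute(self):
        print(self.message)

class TaskScheduler:
    def __init__(self):
        self.tasks = []

    def schedule(self, delay, command):
        # The Timer thread will call command.execute after `delay` seconds
        timer = Timer(delay, command.execute)
        self.tasks.append(timer)
        timer.start()
        return timer

    def cancel(self, timer):
        timer.cancel()  # stops the timer if it has not fired yet
        self.tasks.remove(timer)

scheduler = TaskScheduler()
task = scheduler.schedule(60, PrintMessageCommand("You should never see this"))
scheduler.cancel(task)  # cancelled long before the 60-second delay expires
```

Because the command was cancelled before its delay expired, its execute() never runs and nothing is printed.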
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class DrawCircleCommand(Command):
    def __init__(self, canvas, position):
        self.canvas = canvas
        self.position = position

    def execute(self):
        self.canvas.draw("Circle", self.position)

class DrawSquareCommand(Command):
    def __init__(self, canvas, position):
        self.canvas = canvas
        self.position = position

    def execute(self):
        self.canvas.draw("Square", self.position)

class ChangeColorCommand(Command):
    def __init__(self, canvas, color):
        self.canvas = canvas
        self.color = color

    def execute(self):
        self.canvas.change_color(self.color)

# Receiver
class Canvas:
    def __init__(self):
        self.color = "White"

    def draw(self, shape, position):
        # Simulating drawing on the canvas
        print(f"Drawing a {self.color} {shape} at {position}")

    def change_color(self, color):
        self.color = color
        print(f"Canvas color changed to {color}")

# Macro & Invoker
class Macro:
    def __init__(self):
        self.commands = []

    def add_command(self, command):
        self.commands.append(command)

    def run(self):
        # Play back the recorded commands in order
        for command in self.commands:
            command.execute()

# Client Code
canvas = Canvas()
macro = Macro()

# "Record" a series of actions, then play them back
macro.add_command(ChangeColorCommand(canvas, "Red"))
macro.add_command(DrawCircleCommand(canvas, (10, 20)))
macro.add_command(DrawSquareCommand(canvas, (30, 40)))
macro.run()
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.
📌 Receiver ( Canvas class): This simulates a canvas where shapes can be drawn and colors can
be changed. It has methods to draw shapes ( draw() ) and change colors ( change_color() ).
📌 Macro & Invoker ( Macro class): This class represents a macro that can record and play back a series of commands. It has:
- A commands list to store recorded commands.
- An add_command() method to add a command to the macro.
- A run() method to play back the recorded commands in order.

📌 Client Code: This is where everything is tied together.
- A Canvas object is created.
- A Macro object is created.
- Commands are created and added to the macro to simulate recording a series of actions.
- The macro is then run to play back the recorded actions.
This design allows for easy recording and playback of macros. Each action is encapsulated as a
command, and the macro maintains a list of these commands. When the macro is run, it simply
plays back the commands in the order they were recorded.
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class PlaceOrderCommand(Command):
    def __init__(self, order, product, quantity):
        self.order = order
        self.product = product
        self.quantity = quantity

    def execute(self):
        self.order.place(self.product, self.quantity)

class ModifyOrderCommand(Command):
    def __init__(self, order, product, new_quantity):
        self.order = order
        self.product = product
        self.new_quantity = new_quantity

    def execute(self):
        self.order.modify(self.product, self.new_quantity)

class CancelOrderCommand(Command):
    def __init__(self, order):
        self.order = order

    def execute(self):
        self.order.cancel()

# Receiver
class Order:
    def __init__(self):
        self.items = {}

    def place(self, product, quantity):
        self.items[product] = quantity
        print(f"Order placed: {quantity} x {product}")

    def modify(self, product, new_quantity):
        self.items[product] = new_quantity
        print(f"Order modified: {product} quantity is now {new_quantity}")

    def cancel(self):
        self.items.clear()
        print("Order has been canceled.")

# Invoker
class ECommercePlatform:
    def __init__(self):
        self.command_history = []  # kept for logging and analytics

    def process_command(self, command):
        command.execute()
        self.command_history.append(command)

# Client Code
order1 = Order()
platform = ECommercePlatform()

# Place an order
cmd1 = PlaceOrderCommand(order1, "Laptop", 1)
platform.process_command(cmd1)
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.
📌 Receiver ( Order class): This represents an e-commerce order. It has methods to place an
order ( place() ), modify an existing order ( modify() ), and cancel an order ( cancel() ).
📌 Client Code: This is where everything is tied together.
- An Order object is created.
- An ECommercePlatform object is created.
- Commands are created and processed using the platform, simulating placing, modifying, and canceling an order.
This design allows for easy tracking of orders and their modifications. Each action on an order is
encapsulated as a command, and the e-commerce platform maintains a list of these commands.
This can be useful for logging, analytics, and potentially implementing undo functionalities in the
future.
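One way the undo idea mentioned above could be realized is sketched below. This is a hypothetical extension, not the book's code: each command also carries an inverse undo() method, and the platform keeps a history of executed commands:

```python
from abc import ABC, abstractmethod

class UndoableCommand(ABC):
    @abstractmethod
    def execute(self):
        pass

    @abstractmethod
    def undo(self):
        pass

class PlaceItemCommand(UndoableCommand):
    """Places a product on an order; undo removes it again."""
    def __init__(self, order, product, quantity):
        self.order = order
        self.product = product
        self.quantity = quantity

    def execute(self):
        self.order.items[self.product] = self.quantity

    def undo(self):
        self.order.items.pop(self.product, None)

class SimpleOrder:
    def __init__(self):
        self.items = {}

class UndoablePlatform:
    """Invoker that records executed commands so the last one can be undone."""
    def __init__(self):
        self.history = []

    def process_command(self, command):
        command.execute()
        self.history.append(command)

    def undo_last(self):
        if self.history:
            self.history.pop().undo()

order = SimpleOrder()
platform = UndoablePlatform()
platform.process_command(PlaceItemCommand(order, "Laptop", 1))
platform.undo_last()
print(order.items)  # {} -- the placement has been reversed
```

The key design choice is that each command knows how to reverse its own effect, so the invoker can stay completely generic.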
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class PreparePizzaCommand(Command):
    def __init__(self, chef):
        self.chef = chef

    def execute(self):
        self.chef.prepare_pizza()

class PreparePastaCommand(Command):
    def __init__(self, chef):
        self.chef = chef

    def execute(self):
        self.chef.prepare_pasta()

class PrepareSaladCommand(Command):
    def __init__(self, chef):
        self.chef = chef

    def execute(self):
        self.chef.prepare_salad()

# Receiver
class Chef:
    def prepare_pizza(self):
        print("Chef is preparing pizza...")

    def prepare_pasta(self):
        print("Chef is preparing pasta...")

    def prepare_salad(self):
        print("Chef is preparing salad...")

# Invoker
class Kitchen:
    def __init__(self):
        self.queue = []

    def add_order(self, order):
        self.queue.append(order)

    def process_orders(self):
        while self.queue:
            order = self.queue.pop(0)
            order.execute()

# Client Code
chef_john = Chef()
kitchen = Kitchen()

kitchen.add_order(PreparePizzaCommand(chef_john))
kitchen.add_order(PreparePastaCommand(chef_john))
kitchen.add_order(PrepareSaladCommand(chef_john))

kitchen.process_orders()
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.
📌 Concrete Commands: The PreparePizzaCommand , PreparePastaCommand , and
PrepareSaladCommand classes implement the Command interface. They encapsulate specific
actions like preparing pizza, pasta, or salad.
📌 Receiver ( Chef class): This represents a chef in the kitchen. The chef has methods to prepare
different types of dishes: prepare_pizza() , prepare_pasta() , and prepare_salad() .
📌 Kitchen Interface (Invoker) ( Kitchen class): This class represents the kitchen where orders
are processed. It has: - A queue list to store orders (commands) that need to be processed. - An
add_order() method to add an order (command) to the queue. - A process_orders() method
to process (execute) all orders in the queue.
📌 Client Code: This is where everything is tied together. - A Chef object is created. - A Kitchen
object is created. - Orders (commands) are added to the kitchen queue, simulating customers
placing orders. - The kitchen processes (executes) the orders, simulating the chef preparing the
dishes.
This design allows for easy management of orders in the kitchen. Each type of dish is
encapsulated as a command, and the kitchen maintains a queue of these commands. As orders
come in, they're added to the queue, and the kitchen processes them in the order they were
received. This can be useful for tracking, logging, and ensuring that dishes are prepared in the
correct sequence.
import os
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class CreateFileCommand(Command):
    def __init__(self, filepath, content=None):
        self.filepath = filepath
        self.content = content

    def execute(self):
        with open(self.filepath, 'w') as file:
            if self.content:
                file.write(self.content)
        print(f"File '{self.filepath}' created.")

class ReadFileCommand(Command):
    def __init__(self, filepath):
        self.filepath = filepath

    def execute(self):
        with open(self.filepath, 'r') as file:
            print(file.read())

class RenameFileCommand(Command):
    def __init__(self, old_filepath, new_filepath):
        self.old_filepath = old_filepath
        self.new_filepath = new_filepath

    def execute(self):
        os.rename(self.old_filepath, self.new_filepath)
        print(f"File '{self.old_filepath}' renamed to '{self.new_filepath}'.")

class DeleteFileCommand(Command):
    def __init__(self, filepath):
        self.filepath = filepath

    def execute(self):
        os.remove(self.filepath)
        print(f"File '{self.filepath}' deleted.")

# Client Code
create_cmd = CreateFileCommand('sample.txt', 'Hello, Command Pattern!')
create_cmd.execute()

read_cmd = ReadFileCommand('sample.txt')
read_cmd.execute()

rename_cmd = RenameFileCommand('sample.txt', 'new_sample.txt')
rename_cmd.execute()

delete_cmd = DeleteFileCommand('new_sample.txt')
delete_cmd.execute()
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.
📌 Client Code: This is where everything is tied together. Commands are created and executed,
simulating the operations of creating, reading, renaming, and deleting a file.
📌 Benefits of this approach:

1. Decoupling: The file operations (commands) are decoupled from the code that invokes
them. This means that the client code doesn't need to know the specifics of how each
operation is implemented.

2. Flexibility: With this design, it's easy to add features like undo/redo, logging, or macro
recording. For instance, you could maintain a history of commands and provide an undo
feature by reversing each command's action.

3. Consistency: By using commands, you ensure that all file operations are executed in a
consistent manner, following the same pattern.
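The logging idea, for instance, could be sketched with a thin invoker that records the name of each command before running it. The class names below are hypothetical, and a fake in-memory store is used so the sketch doesn't touch the real filesystem:

```python
from abc import ABC, abstractmethod

class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

class FakeFileStore:
    """In-memory stand-in for the filesystem."""
    def __init__(self):
        self.files = set()

    def add(self, path):
        self.files.add(path)

class TouchCommand(Command):
    """Stand-in file operation: 'creates' a file in the in-memory store."""
    def __init__(self, store, filepath):
        self.store = store
        self.filepath = filepath

    def execute(self):
        self.store.add(self.filepath)

class LoggingInvoker:
    """Runs commands while keeping a uniform audit log of what was executed."""
    def __init__(self):
        self.log = []

    def run(self, command):
        self.log.append(type(command).__name__)
        command.execute()

store = FakeFileStore()
invoker = LoggingInvoker()
invoker.run(TouchCommand(store, "a.txt"))
invoker.run(TouchCommand(store, "b.txt"))
print(invoker.log)  # ['TouchCommand', 'TouchCommand']
```

Because every operation passes through one `run()` method, the logging (or analytics, or auditing) logic lives in exactly one place.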
Overall, the Command Design Pattern provides a structured and scalable way to handle various
operations, making the system easier to maintain and extend.
from abc import ABC, abstractmethod

# Command Interface
class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

# Concrete Commands
class CutCommand(Command):
    def __init__(self, video_editor, start_time, end_time):
        self.video_editor = video_editor
        self.start_time = start_time
        self.end_time = end_time

    def execute(self):
        self.video_editor.cut(self.start_time, self.end_time)

class AddEffectCommand(Command):
    def __init__(self, video_editor, effect_name):
        self.video_editor = video_editor
        self.effect_name = effect_name

    def execute(self):
        self.video_editor.add_effect(self.effect_name)

class AdjustBrightnessCommand(Command):
    def __init__(self, video_editor, level):
        self.video_editor = video_editor
        self.level = level

    def execute(self):
        self.video_editor.adjust_brightness(self.level)

# Receiver
class VideoEditor:
    def cut(self, start_time, end_time):
        print(f"Cutting video from {start_time} to {end_time}.")

    def add_effect(self, effect_name):
        print(f"Adding effect: {effect_name}.")

    def adjust_brightness(self, level):
        print(f"Adjusting brightness to level {level}.")

# Invoker
class EditingSuite:
    def __init__(self):
        self.history = []

    def execute_command(self, command):
        command.execute()
        self.history.append(command)

# Client Code
video = VideoEditor()
suite = EditingSuite()

suite.execute_command(CutCommand(video, "00:01:00", "00:02:00"))
suite.execute_command(AddEffectCommand(video, "Sepia"))
suite.execute_command(AdjustBrightnessCommand(video, 7))
Explanation:
📌 Command Interface ( Command class): This is the blueprint for all command objects, ensuring
that all commands will have an execute() method.
📌 Concrete Commands: - CutCommand : This command cuts a segment from the video. -
AddEffectCommand : This command adds a specific effect to the video. -
AdjustBrightnessCommand : This command adjusts the brightness of the video.
📌 Receiver ( VideoEditor class): This represents the video editing software. It has methods to
perform various editing actions on a video.
📌 Editing Suite (Invoker) ( EditingSuite class): This class represents the suite where editing
commands are executed. It has: - A history list to store executed commands. - An
execute_command() method to execute a command and add it to the history.
📌 Client Code: This is where everything is tied together. Commands are created and executed,
simulating the operations of cutting a segment, adding an effect, and adjusting brightness.
📌 Benefits of this approach:

1. Undo/Redo: With the history list in the EditingSuite , it's straightforward to implement
undo and redo functionalities by reversing or re-executing commands.

2. Batch Processing: Commands can be grouped together to apply multiple editing operations
at once, allowing for batch processing of videos.

3. Consistency: By using commands, you ensure that all editing operations are executed in a
consistent manner, following the same pattern.
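A common way to realize undo/redo on top of such a history is the two-stack scheme sketched below. The names are illustrative, and the sketch assumes each command also knows how to reverse itself via an undo() method:

```python
class ReversibleCommand:
    """Minimal reversible command: applies or removes one named effect."""
    def __init__(self, effects, name):
        self.effects = effects
        self.name = name

    def execute(self):
        self.effects.append(self.name)

    def undo(self):
        self.effects.remove(self.name)

class UndoRedoSuite:
    """Invoker using the classic two-stack undo/redo scheme."""
    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def execute_command(self, command):
        command.execute()
        self.undo_stack.append(command)
        self.redo_stack.clear()  # a new action invalidates the redo chain

    def undo(self):
        if self.undo_stack:
            command = self.undo_stack.pop()
            command.undo()
            self.redo_stack.append(command)

    def redo(self):
        if self.redo_stack:
            command = self.redo_stack.pop()
            command.execute()
            self.undo_stack.append(command)

effects = []
suite = UndoRedoSuite()
suite.execute_command(ReversibleCommand(effects, "Sepia"))
suite.undo()   # effects is [] again
suite.redo()   # effects is ['Sepia'] once more
print(effects)  # ['Sepia']
```

Clearing the redo stack on every fresh command mirrors how most editors behave: once you act after undoing, the undone future is discarded.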
Overall, the Command Design Pattern provides a structured approach to handle various video
editing operations, making the software more flexible and user-friendly.
- Facade Design Pattern in Python
📌 The facade design pattern helps us to hide the internal complexity of our systems and expose
only what is necessary to the client through a simplified interface. In essence, a facade is an
abstraction layer implemented over an existing complex system. The main goal is to simplify the
client's interaction with a complex system by providing a higher-level interface that makes the
subsystem easier to use.
There are three participants involved in this pattern:

Facade class --- Implements the simplified interface that will be used by the client class, and
delegates the work to the services implemented in the system classes.

System class --- There may be multiple system classes, each serving a specific purpose.

Client class --- Uses the facade class to access the functionality of the system. Accessing the
system classes directly could be hard, so the client uses the facade class instead.

📌 Use Cases:

1. When you have a complex system with multiple modules and you want to provide a simple
interface to the client.

2. When you want to layer your subsystems and ensure that each layer communicates
with only a few interfaces.
📌 Why it's important in Python: Python, being a high-level language, often deals with
abstracting complexities. Libraries and frameworks in Python often use the facade pattern to
provide a more Pythonic and user-friendly API to the users, while hiding the intricate details and
complexities.
Let's see an example WITHOUT and then WITH the "Facade
design pattern in Python"
class CPU:
    def freeze(self):
        print("CPU is frozen")

    def jump(self, position):
        print(f"CPU jumping to {position}")

    def execute(self):
        print("CPU is executing")

class Memory:
    def load(self, position, data):
        print(f"Loading data {data} at position {position}")

class HardDrive:
    def read(self, lba, size):
        return f"Reading {size} bytes from LBA {lba}"
If a client wants to start the computer, they would need to interact with all these subsystems in a
specific order:
cpu = CPU()
memory = Memory()
hard_drive = HardDrive()
cpu.freeze()
memory.load("0x00", hard_drive.read("0x00", "512"))
cpu.jump("0x00")
cpu.execute()
📌 The above approach has the following issues: - The client needs to interact with multiple
subsystems directly. - The order of operations is crucial, and the client needs to be aware of it. - If
any subsystem changes its interface or behavior, the client code will need to be updated.
class ComputerFacade:
    def __init__(self):
        self.cpu = CPU()
        self.memory = Memory()
        self.hard_drive = HardDrive()

    def start(self):
        self.cpu.freeze()
        self.memory.load("0x00", self.hard_drive.read("0x00", "512"))
        self.cpu.jump("0x00")
        self.cpu.execute()
Now, the client can simply interact with the ComputerFacade to start the computer:
computer = ComputerFacade()
computer.start()
📌 Benefits of using the Facade design pattern: - The client interacts with a single, simplified
interface ( ComputerFacade ) rather than multiple subsystems. - The internal workings of the
subsystems are abstracted away from the client. - It's easier to maintain and modify the system
without affecting the client code.
📌 In summary, the Facade design pattern provides a unified interface to a set of interfaces in a
subsystem. It defines a higher-level interface that makes the subsystem easier to use, promoting
decoupling and cleaner code.
Let's delve deeper into how the refactored code with the
Facade design pattern addresses the issues of the original
code.
📌 Issue 1: Direct Interaction with Multiple Subsystems
In the original code, the client had to interact directly with multiple subsystems ( CPU , Memory ,
HardDrive ). This means that the client needed to have knowledge about the intricacies and
operations of each subsystem.
Solution with Facade Pattern: The ComputerFacade class encapsulates the interactions with the
subsystems. The client only interacts with the ComputerFacade , which internally manages the
interactions with the subsystems. This reduces the client's dependency on individual subsystems
and abstracts away the complexity.
📌 Issue 2: The Order of Operations Was Crucial
In the original code, the subsystem methods had to be called in a specific order, and the client
needed to know that order.

Solution with Facade Pattern: The ComputerFacade class ensures that the methods are called in
the correct order within its start method. The client doesn't need to worry about the order; they
just call the start method on the facade. This encapsulation ensures that the operations are
always executed in the correct sequence.
📌 Issue 3: Subsystem Changes Forced Client Updates
In the original code, if any subsystem changed its interface or behavior, the client code had to be
updated.

Solution with Facade Pattern: With the facade in place, any changes to the subsystems can be
managed within the facade itself. The client remains unaffected as it only interacts with the
facade's interface. This promotes loose coupling, where changes in one part of the system don't
ripple through and affect other parts.
📌 Additional benefits of the refactored design:

1. Flexibility: In the future, if we want to add more operations or change the way the computer
starts, we can do so within the facade without affecting the client code.

2. Maintainability: With the separation of concerns, it's easier to maintain and modify the
system. If a subsystem needs an upgrade or modification, it can be done without touching
the client code.
In essence, the Facade design pattern in the refactored code provides a shield to the client from
the complexities of the subsystems, ensuring a smooth and simplified interaction.
📌 Real-life Use-Case: Imagine you're building a home automation system. This system has
multiple subsystems like lighting, security, heating, etc. Each of these subsystems can have its own
set of methods and complexities. Instead of letting the client deal with each subsystem separately,
you can provide a facade that offers simplified methods to perform common tasks.
class LightingSystem:
    def turn_on(self):
        print("Lights turned on")

    def turn_off(self):
        print("Lights turned off")

class SecuritySystem:
    def activate(self):
        print("Security system activated")

    def deactivate(self):
        print("Security system deactivated")

class HeatingSystem:
    def set_temperature(self, temp):
        print(f"Temperature set to {temp}°C")

class HomeAutomationFacade:
    def __init__(self):
        self.lighting = LightingSystem()
        self.security = SecuritySystem()
        self.heating = HeatingSystem()

    def leave_home(self):
        self.lighting.turn_off()
        self.security.activate()
        self.heating.set_temperature(18)  # Set to energy-saving mode

    def arrive_home(self):
        self.lighting.turn_on()
        self.security.deactivate()
        self.heating.set_temperature(22)  # Comfortable temperature

# Client code
home_system = HomeAutomationFacade()
home_system.leave_home()
home_system.arrive_home()
📌 Under-the-Hood: The facade pattern doesn't change the subsystems; it only provides a
simplified view of them. This is beneficial because it promotes decoupling. The client is decoupled
from the subsystems, meaning changes in the subsystems won't affect the client as long as the
facade's interface remains consistent. This is a principle in software design known as the "Law of
Demeter" or the "principle of least knowledge", which promotes minimal knowledge of an object
about other connected objects.
In Python, the facade pattern is often seen in libraries and frameworks where the internal
workings are abstracted away, and only a simple, intuitive, and Pythonic API is exposed to the end-
users. This ensures that users can quickly and effectively use the tool without needing to
understand its complexities.
Let's break down the provided code example and see how it
aligns with the principles and requirements of the facade
pattern.
📌 Unified Interface: The primary goal of the facade pattern is to provide a unified interface to a
set of interfaces in a subsystem. In the provided example, the HomeAutomationFacade class
serves as this unified interface. It wraps around multiple subsystems ( LightingSystem ,
SecuritySystem , and HeatingSystem ) and offers a simpler, more intuitive set of methods
( leave_home and arrive_home ) for the client to use.
class HomeAutomationFacade:
    def __init__(self):
        self.lighting = LightingSystem()
        self.security = SecuritySystem()
        self.heating = HeatingSystem()

    def leave_home(self):
        self.lighting.turn_off()
        self.security.activate()
        self.heating.set_temperature(18)  # Set to energy-saving mode

    def arrive_home(self):
        self.lighting.turn_on()
        self.security.deactivate()
        self.heating.set_temperature(22)  # Comfortable temperature

class LightingSystem:
    def turn_on(self):
        print("Lights turned on")

    def turn_off(self):
        print("Lights turned off")

class SecuritySystem:
    def activate(self):
        print("Security system activated")

    def deactivate(self):
        print("Security system deactivated")

class HeatingSystem:
    def set_temperature(self, temp):
        print(f"Temperature set to {temp}°C")
📌 Decoupling: One of the benefits of the facade pattern is the decoupling of the client from the
complex subsystems. In our example, the client interacts only with the HomeAutomationFacade
and remains unaware of the individual subsystems. This means that if there are changes or
updates to the subsystems (e.g., adding new methods or changing internal logic), the client code
remains unaffected as long as the facade's interface ( leave_home and arrive_home methods)
remains consistent.
📌 Flexibility and Ease of Use: The facade pattern makes it easier for clients to use the system.
Instead of calling multiple methods from different subsystems, the client has a single point of
interaction with the system through the facade. In our example, the client doesn't need to
remember the sequence of operations or which subsystem does what. They just call leave_home
when leaving and arrive_home when arriving.
📌 Maintainability: If in the future, we decide to add a new subsystem (e.g., an air conditioning
system) or modify the behavior of an existing one, we can do so within the facade without
affecting the client code. This encapsulation ensures that our system remains maintainable and
scalable.
📌 Consistency: The facade ensures that every time a client calls a method like leave_home , the
same sequence of actions is executed consistently. This prevents potential errors or
inconsistencies that might arise if the client had to manually manage each subsystem.
In summary, the provided code example encapsulates the complexities of multiple subsystems
within a unified interface, adhering to the principles of the facade pattern. It offers a simplified
and consistent interaction for the client, promotes decoupling, and ensures maintainability and
flexibility for future changes.
# pylint: disable=too-few-public-methods
# "The Facade pattern concept"

class SubSystemClassA:
    "A hypothetically complicated class"

    @staticmethod
    def method():
        "A hypothetically complicated method"
        return "A"

class SubSystemClassB:
    "A hypothetically complicated class"

    @staticmethod
    def method(value):
        "A hypothetically complicated method"
        return value

class SubSystemClassC:
    "A hypothetically complicated class"

    @staticmethod
    def method(value):
        "A hypothetically complicated method"
        return value

class Facade:
    "A simplified facade offering the services of subsystems"

    @staticmethod
    def sub_system_class_a():
        "Use the subsystems method"
        return SubSystemClassA().method()

    @staticmethod
    def sub_system_class_b(value):
        "Use the subsystems method"
        return SubSystemClassB().method(value)

    @staticmethod
    def sub_system_class_c(value):
        "Use the subsystems method"
        return SubSystemClassC().method(value)

# The Client
# call potentially complicated subsystems directly
print(SubSystemClassA.method())
print(SubSystemClassB.method("B"))
print(SubSystemClassC.method({"C": [1, 2, 3]}))

# or use the simplified interface of the facade
print(Facade().sub_system_class_a())
print(Facade().sub_system_class_b("B"))
print(Facade().sub_system_class_c({"C": [1, 2, 3]}))

""" OUTPUTS
A
B
{'C': [1, 2, 3]}
A
B
{'C': [1, 2, 3]}
"""
Alright, let's dissect the provided code step by step to understand its implementation of the facade
design pattern.
1. SubSystemClassA has a static method method that returns the string "A".
2. SubSystemClassB has a static method method that takes a value and returns it.
3. SubSystemClassC also has a static method method that takes a value and returns it.
📌 Façade Class: The Facade class is where the facade pattern is implemented. This class
provides a simplified interface to the methods of the sub-systems. Instead of clients having to
interact with each sub-system directly, they can use the methods provided by the Facade class.
📌 Client Interaction: The client (or the user of these classes) has two ways to interact with the
sub-systems:
1. Direct Interaction: The client can call the methods of the sub-systems directly. This is
demonstrated in the lines:
print(SubSystemClassA.method())
print(SubSystemClassB.method("B"))
print(SubSystemClassC.method({"C": [1, 2, 3]}))
Here, the client is directly accessing each sub-system and its methods.
2. Through the Façade: The client can use the Facade class to interact with the sub-systems.
This is demonstrated in the lines:
print(Facade().sub_system_class_a())
print(Facade().sub_system_class_b("B"))
print(Facade().sub_system_class_c({"C": [1, 2, 3]}))
Here, the client is using the simplified interface provided by the Facade class to achieve the
same results as the direct interaction.
📌 Outputs: The outputs of both direct interaction and interaction through the facade are the
same, as shown in the provided output. This demonstrates that the facade provides the same
functionality as the direct interaction but offers a more simplified and unified interface.
📌 Because a static method doesn't depend on instance attributes, it can be called on the class
itself, without creating an instance. For example, if you have a class MyClass with a static method
my_method , you can call it like this: MyClass.my_method() .
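A quick demonstration of that calling convention:

```python
class MyClass:
    @staticmethod
    def my_method():
        return "called without an instance"

# Both forms work; no instance state is involved either way.
print(MyClass.my_method())    # called on the class itself
print(MyClass().my_method())  # also allowed on an instance
```

Both calls return the same string, because a static method never receives self and therefore cannot depend on which (if any) instance it was called through.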
In the provided code, @staticmethod is used to define methods that don't require access to
instance-specific data or methods. This means that these methods can be called on the class itself,
without creating an instance of the class.
class SubSystemClassA:
    "A hypothetically complicated class"

    @staticmethod
    def method():
        "A hypothetically complicated method"
        return "A"

class SubSystemClassB:
    "A hypothetically complicated class"

    @staticmethod
    def method(value):
        "A hypothetically complicated method"
        return value
A key reason for using @staticmethod in this context is simplicity: by making these methods
static, you can call them directly on the class without creating an instance. This makes the client
code simpler and more intuitive. For example, the client can call SubSystemClassA.method()
directly without first creating an instance of SubSystemClassA .
However, it's worth noting that in the Facade methods, instances of the subsystem classes are
being created to call their methods ( SubSystemClassA().method() ). This is redundant since the
subsystem methods are static and can be called directly on the class. The more streamlined
approach would be SubSystemClassA.method() without creating an instance.
In summary, @staticmethod is used in this code to define methods that don't rely on instance-
specific data or methods, allowing for direct method calls on the class itself.
📌 Conclusion: The provided code is a clear implementation of the facade design pattern. The
Facade class abstracts the complexities and specifics of the sub-systems, offering a unified and
simplified interface for the client. The client can choose to interact directly with the sub-systems or
use the facade, but the facade's main advantage is that it provides a consistent and
straightforward way to access the functionalities of the sub-systems.
📌 In object-oriented programming, "state" usually refers to the data stored in object attributes.
For instance, if you have a Car class, the "state" might include attributes like color , make , and
speed . Methods that change these attributes or behave differently based on these attributes are
considered "stateful."
📌 By making them static, the code signals to the reader that these methods are utility-like
methods that don't rely on the state of an instance. This can make the code easier to understand
and reason about.
Example 2 - Real life use case of Design Pattern in Python
Let's consider a real-life scenario involving a multimedia system in a smart home. This system can
control the TV, the audio system, and even streaming services. Each of these components can
have its own set of operations and complexities. We'll use the facade pattern to simplify the
interaction with this multimedia system.
# Subsystems
class Television:
    def turn_on(self):
        print("TV turned on")

    def turn_off(self):
        print("TV turned off")

    def set_channel(self, channel):
        print(f"TV channel set to {channel}")

class AudioSystem:
    def turn_on(self):
        print("Audio system turned on")

    def turn_off(self):
        print("Audio system turned off")

    def set_volume(self, level):
        print(f"Volume set to {level}")

class StreamingService:
    def login(self, username, password):
        print(f"Logged in as {username}")

    def search_movie(self, title):
        print(f"Searching for '{title}'")

    def play_movie(self, title):
        print(f"Playing '{title}'")

    def stop_movie(self):
        print("Stopping current movie")

# Facade
class MultimediaSystemFacade:
    def __init__(self):
        self.tv = Television()
        self.audio = AudioSystem()
        self.stream = StreamingService()

    def watch_movie(self, username, password, movie_title):
        print("Setting up to watch a movie...")
        self.tv.turn_on()
        self.audio.turn_on()
        self.stream.login(username, password)
        self.stream.search_movie(movie_title)
        self.stream.play_movie(movie_title)

    def end_movie(self):
        print("Shutting down after watching movie...")
        self.stream.stop_movie()
        self.tv.turn_off()
        self.audio.turn_off()

# Client code
multimedia_system = MultimediaSystemFacade()
multimedia_system.watch_movie("john_doe", "password123", "Inception")
print("\nMovie ended or interrupted by user.\n")
multimedia_system.end_movie()
📌 Explanation:
1. Subsystems:
Television : Represents the TV with operations to turn it on/off and set a channel.
AudioSystem : Represents the audio or sound system with operations to turn it on/off
and set the volume.
StreamingService : Represents a streaming platform (like Netflix) where you can log in,
search for a movie, and play it.
2. Facade - MultimediaSystemFacade:
This class encapsulates the complexities of the three subsystems. It provides two main
methods: watch_movie and end_movie .
watch_movie : Sets up everything for watching a movie. It turns on the TV and audio
system, logs into the streaming service, searches for the desired movie, and plays it.
end_movie : Used when the movie ends or is interrupted. It stops the movie, turns off
the TV, and shuts down the audio system.
3. Client Code:
The client wants to watch a movie. Instead of interacting with each subsystem
separately (TV, audio system, streaming service), the client uses the
MultimediaSystemFacade to watch and end a movie. This simplifies the process and
ensures that all steps are executed in the correct order.
This example demonstrates how the facade pattern can be used to simplify a complex process by
providing a unified interface that abstracts the underlying complexities.
Example 3 - Real life use case of Design Pattern in Python
One of the most well-known Python projects on GitHub is the requests library, which provides
methods to send HTTP requests. While requests doesn't strictly implement the Facade pattern in
the classical sense, it does simplify the process of making HTTP requests in Python by abstracting
away the complexities of lower-level libraries like http.client or urllib .
1. Without requests , making an HTTP GET request might look something like this using
http.client :
import http.client
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")
response = conn.getresponse()
data = response.read()
print(data)
conn.close()
2. With requests , the same request becomes:

import requests

response = requests.get("https://www.example.com/")
print(response.text)
In the above example, the requests.get() method acts as a facade that hides the underlying
complexity of establishing a connection, sending the request, and retrieving the response. The
user doesn't need to know about the details of http.client or any other underlying library --
they just use the simplified interface provided by requests .
While this isn't a textbook example of the Facade pattern, it demonstrates the core principle:
providing a simpler, unified interface to a more complex underlying system.
https://github.com/psf/requests
Once you're there, you can navigate through the source code to see how the library abstracts
away the complexities of making HTTP requests in Python. The main logic is contained within the
requests folder in the repository. The api.py file, in particular, provides the high-level functions
like get() , post() , etc., that most users interact with.
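To mirror that principle in miniature (purely illustrative; this is not how requests is actually implemented), a toy facade over http.client might look like this. The function name simple_get is an invented example:

```python
import http.client
from urllib.parse import urlparse

def simple_get(url):
    """Toy facade: hides connection setup, the request, and cleanup."""
    parts = urlparse(url)
    conn = http.client.HTTPSConnection(parts.netloc)
    try:
        conn.request("GET", parts.path or "/")
        response = conn.getresponse()
        return response.read().decode("utf-8", errors="replace")
    finally:
        conn.close()

# Usage (requires network access):
# body = simple_get("https://www.example.com/")
```

The caller sees one function with one argument; everything about connections, paths, response objects, and closing sockets is hidden behind it, which is exactly the facade idea.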
- Flyweight Design Pattern in Python

The flyweight pattern addresses performance challenges in object-oriented systems due to the
cost of object instantiation. These challenges often arise in resource-constrained environments
like smartphones or in systems with a vast number of concurrent objects and users. This pattern
promotes memory efficiency by maximizing resource sharing among similar objects.
Creating a new object requires additional memory allocation. While virtual memory theoretically
offers limitless memory, practical constraints exist. If a system's physical memory is fully utilized, it
swaps data with secondary storage, typically an HDD, leading to performance degradation.
Although SSDs outperform HDDs, they aren't universally adopted and won't fully replace HDDs in
the near future. Performance isn't solely about memory. For instance, graphics applications, such
as video games, need to swiftly display 3D content like dense forests or populated urban scenes.
Without data sharing, rendering each 3D object separately would be inefficient. Therefore, instead
of relying on hardware upgrades, software engineers should employ strategies like the flyweight
pattern to optimize memory and boost performance by facilitating data sharing among alike
objects.
Instead of creating thousands of objects that each duplicate common attributes, which consumes
a large amount of memory or other resources, you can modify your classes so that instances hold
a reference to a single shared object containing those common attributes.
The best example to describe this is a document containing many words and sentences and made
up of many letters. Rather than storing a new object for each individual letter describing its font,
position, colour, padding and many other potential things. You can store just a lookup id of a
character in a collection of some sort and then dynamically create the object with its proper
formatting etc., only as you need to.
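That letter/document idea can be sketched as a tiny flyweight factory. The names here are hypothetical; each glyph is keyed by character and font, and the position (extrinsic state) is supplied by the caller rather than stored in the shared object:

```python
class Glyph:
    """Intrinsic state: the shared, immutable description of a character."""
    def __init__(self, char, font):
        self.char = char
        self.font = font

class GlyphFactory:
    """Hands out one shared Glyph per (char, font) key."""
    _cache = {}

    @classmethod
    def get(cls, char, font):
        key = (char, font)
        if key not in cls._cache:
            cls._cache[key] = Glyph(char, font)
        return cls._cache[key]

# Extrinsic state (the position) lives in the document structure,
# not in the shared glyph.
document = [(GlyphFactory.get(c, "Times 12"), pos)
            for pos, c in enumerate("hello")]

l1 = GlyphFactory.get("l", "Times 12")
l2 = GlyphFactory.get("l", "Times 12")
print(l1 is l2)  # True -- both 'l's share one object
```

However many times 'l' appears in the document, only one Glyph object for it exists per font; the factory is the single place that enforces the sharing.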
The Flyweight pattern is all about sharing to save resources, especially in contexts where objects
have a lot of shared data. Here are some real-world examples:
1. Text Editor:
Shared State (Intrinsic): Each character's glyph representation. For instance, the letter
'A' in font 'Times New Roman' at size '12' will always look the same.
Unique State (Extrinsic): The position of each character in the document, its specific
color if highlighted, etc.
2. Train Simulation:
Shared State (Intrinsic): The blueprint of a train model, like the seating arrangement,
number of coaches, etc.
Unique State (Extrinsic): The passengers in each seat, the destination of each train
instance, current speed, etc.
3. Game World:
Shared State (Intrinsic): The models for buildings, trees, creatures, and other static
elements that are common throughout the world.
Unique State (Extrinsic): The position, rotation, and scale of each instance of these
models in the world.
4. Web Pages:
Shared State (Intrinsic): Common static assets like logos, standard icons, common
scripts, or stylesheets.
Unique State (Extrinsic): The context in which these assets are loaded, like the specific
user, the page they're on, etc.
5. Airport Check-in Kiosks:
Shared State (Intrinsic): The software and UI/UX design common to all kiosks.
Unique State (Extrinsic): The current user's flight details, their personal information,
the specific ads or promotions they might see.
6. Digital Art Software:
Shared State (Intrinsic): Common brushes, patterns, and textures that artists can use.
Unique State (Extrinsic): The specific artwork, the layers, the modifications made using
the brushes, etc.
7. E-commerce Platforms:
Shared State (Intrinsic): Product templates, especially for products that have common
designs but different customizations (e.g., T-shirts with different prints).
Unique State (Extrinsic): The specific customizations, the user who ordered, the
quantity, etc.
In all these examples, the Flyweight pattern can be employed to ensure that the shared data
(intrinsic state) is stored once and reused, while the unique data (extrinsic state) is managed
separately. This approach can lead to significant memory savings, especially in systems where the
number of objects can be very large.
Components:
Flyweight: This is the shared object. It contains the shared state (intrinsic state) and methods
to manipulate this state.
ConcreteFlyweight: This is a subclass of Flyweight and includes the specific shared objects.
FlyweightFactory: This is responsible for creating and managing the Flyweight objects. It
ensures that flyweights are shared properly.
Intrinsic vs. Extrinsic State: The state of a flyweight object can be divided into two:
Intrinsic State: This is the shared state, stored in the Flyweight object. It's independent of
the Flyweight's context, meaning it doesn't change and can be shared across multiple
contexts.
Extrinsic State: This is the non-shared state, which is stored or computed by client objects.
The client objects pass this state to the Flyweight when they invoke its methods.
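To make these component roles concrete, here is a minimal, generic sketch (the class and attribute names such as `Flyweight.operation` and `"icon.png"` are illustrative, not from the examples that follow):

```python
class Flyweight:
    """Holds only the shared (intrinsic) state."""
    def __init__(self, shared):
        self.shared = shared  # Intrinsic state, stored once

    def operation(self, context):
        # Extrinsic state (context) is passed in by the client on each call
        return f"{self.shared} used in {context}"

class FlyweightFactory:
    """Creates flyweights and guarantees they are shared."""
    _pool = {}

    @classmethod
    def get(cls, shared):
        if shared not in cls._pool:
            cls._pool[shared] = Flyweight(shared)
        return cls._pool[shared]

fw1 = FlyweightFactory.get("icon.png")
fw2 = FlyweightFactory.get("icon.png")
print(fw1 is fw2)                 # True: one shared object
print(fw1.operation("homepage"))  # icon.png used in homepage
```

The factory is the only place flyweights are created, which is what guarantees the sharing.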
A perfect example of the Flyweight Pattern is the Python intern() function. It's a builtin in Python 2
which was moved into the sys module in Python 3. When you pass it a string, it returns an exactly
equal string. Its advantage is that it saves space: no matter how many different string objects you
pass it for a particular value like 'abccdz', it returns the same 'abccdz' object each time.
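A quick demonstration with sys.intern in Python 3 (two equal strings built at runtime would normally be distinct objects; interning returns the single canonical object for that value):

```python
import sys

# Build the same string value twice at runtime, then intern both
a = sys.intern("abccdz" * 2)
b = sys.intern("abccdz" * 2)

print(a == b)  # True: equal values
print(a is b)  # True: the very same object, so the storage is shared
```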
📌 Issues: 1. Even if two trees have the same attributes, they are stored as separate objects in
memory. 2. This approach is memory-inefficient, especially when dealing with thousands or
millions of objects.
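To see the problem, here is a sketch of the naive approach, where every tree object duplicates its own copy of the shared attributes (a hypothetical `NaiveTree` class for illustration):

```python
class NaiveTree:
    def __init__(self, x, y, color, texture, height):
        self.x, self.y = x, y
        # Every instance carries its own copy of identical data:
        self.color, self.texture, self.height = color, texture, height

forest = [NaiveTree(i, i, "Green", "Rough", 10) for i in range(1000)]

# 1000 separate objects; none of the identical attribute data is shared
print(len(forest))             # 1000
print(forest[0] is forest[1])  # False
```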
class TreeType:
    _instances = {}

    def __new__(cls, color, texture, height):
        # Return the existing instance for this attribute combination, if any
        key = (color, texture, height)
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]

    def __init__(self, color, texture, height):
        self.color = color      # Intrinsic state
        self.texture = texture  # Intrinsic state
        self.height = height    # Intrinsic state

    def display(self, x, y):
        print(f"{self.color} {self.texture} tree ({self.height}m) at ({x}, {y})")

class Tree:
    def __init__(self, x, y, tree_type):
        self.x = x                  # Extrinsic state
        self.y = y                  # Extrinsic state
        self.tree_type = tree_type  # Shared flyweight

    def display(self):
        self.tree_type.display(self.x, self.y)
📌 Advantages: 1. We've separated the intrinsic and extrinsic states. The intrinsic state
( TreeType ) is shared among all trees of the same type. 2. Memory usage is significantly reduced
since we're sharing the intrinsic state among multiple objects. 3. The TreeType class uses the
Singleton pattern to ensure that only one instance of a particular tree type exists.
By implementing the Flyweight pattern, we've optimized our game's memory usage without
compromising the functionality.
Let's delve into the details of how the refactored code with
the Flyweight design pattern addresses the issues of the
original code.
📌 Shared Intrinsic State: In the refactored code, we introduced a new class called TreeType .
This class represents the intrinsic state (shared attributes) of the trees, which includes color ,
texture , and height . By doing this, we ensure that for every unique combination of these
attributes, only one instance of TreeType is created and stored in memory.
📌 Singleton Pattern in TreeType : The TreeType class uses a dictionary called _instances to
keep track of the created instances based on the intrinsic attributes. The __new__ method checks
if an instance with the given attributes already exists. If it does, it returns the existing instance;
otherwise, it creates a new one. This ensures that we don't create multiple instances for trees with
the same attributes, thus saving memory.
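The effect of this __new__ -based caching can be verified directly. In this minimal sketch, two requests for the same attribute combination yield the identical object:

```python
class TreeType:
    _instances = {}

    def __new__(cls, color, texture, height):
        key = (color, texture, height)
        if key not in cls._instances:
            cls._instances[key] = super().__new__(cls)
        return cls._instances[key]

t1 = TreeType("Green", "Rough", 10)
t2 = TreeType("Green", "Rough", 10)
t3 = TreeType("Brown", "Smooth", 5)

print(t1 is t2)  # True: same attribute combination, same object
print(t1 is t3)  # False: a different combination gets its own instance
```

Note that because __new__ is overridden here, Python allows the extra constructor arguments even without a custom __init__.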
📌 Separation of Extrinsic State: The Tree class now only holds the extrinsic state, which is the
x and y position of the tree. This separation allows us to have multiple Tree objects with
different positions but share the same TreeType instance if they have the same intrinsic
attributes. This approach drastically reduces the memory footprint, especially when dealing with a
large number of trees with similar attributes.
📌 Memory Efficiency: In the original code, if we had 1000 trees with the same attributes, we
would have 1000 separate objects, each storing its own color , texture , and height . In the
refactored code, these 1000 trees would share a single TreeType instance for their intrinsic
attributes, while only their positions (extrinsic state) would be stored separately. This results in a
significant reduction in memory usage.
📌 Scalability: The Flyweight pattern's memory-saving benefits become even more pronounced
as the number of objects increases. If our game were to have millions of trees, the original
approach would be highly inefficient, leading to excessive memory consumption. With the
Flyweight pattern, the memory usage would grow linearly with the number of unique tree types
(intrinsic states) rather than the total number of trees.
📌 Maintainability: By separating the intrinsic and extrinsic states, the code becomes more
modular and easier to maintain. If we need to add more attributes or methods related to the
intrinsic state of the trees, we can do so in the TreeType class without affecting the Tree class,
which deals with the extrinsic state.
In summary, the refactored code with the Flyweight design pattern efficiently addresses the
memory inefficiency issues of the original code by sharing the intrinsic state among similar
objects, leading to significant memory savings, improved scalability, and better maintainability.
Imagine a word processor application. Each character in a document could be an object. If you
consider the number of characters in a large document, the number of objects can be massive.
However, there are only a limited number of unique characters. So, instead of creating a new
object for every character in the document, you can use the Flyweight pattern to create an object
for each unique character and share it every time that character appears in the document.
Python Implementation:
class Character:
    def __init__(self, char):
        self.char = char  # Intrinsic state

    def render(self, font):
        # The font is extrinsic state, supplied by the client on each call
        print(f"Rendering character '{self.char}' in {font}")

class CharacterFactory:
    _characters = dict()

    def get_character(self, char):
        # Reuse the existing Character object for this char, if any
        if char not in self._characters:
            self._characters[char] = Character(char)
        return self._characters[char]

# Client code
factory = CharacterFactory()
char_a1 = factory.get_character('A')
char_a1.render('Font1')
char_a2 = factory.get_character('A')
char_a2.render('Font2')
In the above code, even though we requested the character 'A' twice, only one object was created
and shared between the two requests.
Let's dissect the provided code example to understand how it adheres to the principles and
requirements of the Flyweight design pattern:
1. Shared State (Intrinsic State): In the code, the Character class represents the Flyweight. The
character itself ( self.char ) is the intrinsic state. This state is shared among all instances of the
same character.
class Character:
    def __init__(self, char):
        self.char = char  # Intrinsic state
2. Non-shared State (Extrinsic State): The render method of the Character class takes a font
parameter. This font is an example of the extrinsic state. It's not stored within the Flyweight object
but is instead passed by the client when needed.
3. Flyweight Factory: The CharacterFactory class acts as the Flyweight Factory. It's responsible
for creating and managing Flyweight objects. When a client requests a character, the factory first
checks if it already has that character. If it does, it returns the existing object; otherwise, it creates
a new one. This ensures that for each unique character, only one object instance is ever created
and maintained.
class CharacterFactory:
    _characters = dict()

    def get_character(self, char):
        if char not in self._characters:
            self._characters[char] = Character(char)
        return self._characters[char]
4. Object Sharing: In the client code, even though we request the character 'A' twice, the factory
ensures that only one object is created for 'A'. This object is then shared for both requests,
adhering to the principle of object sharing in the Flyweight pattern.
char_a1 = factory.get_character('A')
char_a1.render('Font1')
char_a2 = factory.get_character('A')
char_a2.render('Font2')
5. Separation of Intrinsic and Extrinsic State: The design ensures that the intrinsic state (the
character itself) is kept within the Flyweight, while the extrinsic state (the font) is kept outside and
passed in when needed. This separation allows the intrinsic state to be shared while still providing
flexibility in how the object is used.
6. Memory Efficiency: By ensuring that only one object is created for each unique character, the
design minimizes memory usage. If you were to scale this example to represent a document with
thousands or millions of characters, the memory savings would be significant.
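This saving is easy to quantify. In the sketch below (the same factory idea, reproduced so the snippet is self-contained), a 10,000-character document needs only 10 Character objects:

```python
class Character:
    def __init__(self, char):
        self.char = char  # Intrinsic state

class CharacterFactory:
    _characters = {}

    def get_character(self, char):
        if char not in self._characters:
            self._characters[char] = Character(char)
        return self._characters[char]

factory = CharacterFactory()
document = "flyweight " * 1000  # 10,000 characters in total
positions = [factory.get_character(c) for c in document]

print(len(positions))                     # 10000 positions in the document
print(len(CharacterFactory._characters))  # 10 unique Character objects
```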
In conclusion, the provided code example adheres to the principles of the Flyweight design
pattern by ensuring shared use of objects with intrinsic states, managing object creation and
sharing through a factory, and efficiently handling extrinsic states outside the shared objects.
Example-2 - Real-life Use Case: Let's consider a graphics rendering system for a game or
simulation where we need to display trees in a forest. Each tree can have different types (e.g., oak,
pine, birch) and different states (e.g., position, size, age). However, the graphical representation
(texture, model) of each tree type is shared among all trees of the same type.
# Flyweight class
class TreeType:
    def __init__(self, name, texture, color):
        self.name = name        # Intrinsic state
        self.texture = texture  # Intrinsic state
        self.color = color      # Intrinsic state

    def render(self, x, y, size, age):
        # x, y, size, and age are extrinsic state passed in by the client
        print(f"Rendering {self.name} ({self.color}, {self.texture}) "
              f"at ({x}, {y}), size {size}, age {age}")

# Flyweight Factory
class TreeFactory:
    _tree_types = dict()

    @classmethod
    def get_tree_type(cls, name, texture, color):
        key = (name, texture, color)
        if key not in cls._tree_types:
            cls._tree_types[key] = TreeType(name, texture, color)
        return cls._tree_types[key]

# Client class
class Forest:
    def __init__(self):
        self.trees = []

    def plant_tree(self, x, y, name, texture, color, size, age):
        tree_type = TreeFactory.get_tree_type(name, texture, color)
        self.trees.append((tree_type, x, y, size, age))

    def render_forest(self):
        for tree_type, x, y, size, age in self.trees:
            tree_type.render(x, y, size, age)

# Client code
forest = Forest()

# Planting 3 trees of the same type but different sizes and ages
forest.plant_tree(1, 2, "Pine", "PineTexture", "Green", 5, 10)
forest.plant_tree(5, 7, "Pine", "PineTexture", "Green", 6, 12)
forest.plant_tree(8, 9, "Pine", "PineTexture", "Green", 7, 15)
forest.render_forest()
Explanation:
1. TreeType is the Flyweight class. The intrinsic states (name, texture, color) are the shared
attributes among trees of the same type.
2. TreeFactory is the Flyweight Factory. It ensures that for each unique tree type, only one
TreeType object is created.
3. Forest is the client class. It uses the TreeFactory to get tree types and stores them along with
their extrinsic states (position, size, age).
4. In the client code, even though we plant multiple trees of the same type, only one TreeType
object is created for each unique tree type. This ensures memory efficiency, especially if we
were to render thousands or millions of trees.
This design allows us to efficiently render a vast forest with various tree types, sizes, and ages
while minimizing memory usage by sharing the graphical representation of each tree type.
Advantages:
Memory Savings: This is the primary advantage. By sharing objects, you can save a
significant amount of memory, especially in applications where object instantiation is costly.
Faster Operations: Fewer objects mean fewer instances to manage and faster lookups.
Disadvantages:
Complexity: Introducing the Flyweight pattern can increase the complexity of the system,
especially when managing the shared and non-shared states.
State Management: The management of the extrinsic state can become cumbersome, as it's
maintained outside the Flyweight.
When to Use:
When you have a large number of similar objects, and the memory cost is a concern.
For Example-2 above, let's break down the provided code
example in relation to the principles and requirements of
the Flyweight design pattern:
1. Shared State (Intrinsic State): The TreeType class in the code represents the Flyweight. The
attributes name , texture , and color are the intrinsic states. These attributes are shared among
all trees of the same type, texture, and color.
class TreeType:
    def __init__(self, name, texture, color):
        self.name = name        # Intrinsic state
        self.texture = texture  # Intrinsic state
        self.color = color      # Intrinsic state
2. Non-shared State (Extrinsic State): The render method of the TreeType class takes x , y ,
size , and age as parameters. These represent the extrinsic states. They are unique to each tree
instance and are not stored within the Flyweight object but are instead passed by the client when
needed.
3. Flyweight Factory: The TreeFactory class acts as the Flyweight Factory. Its role is to manage
the creation and retrieval of TreeType objects. When a client requests a tree type, the factory first
checks if it already has that specific combination of name, texture, and color. If it does, it returns
the existing object; otherwise, it creates a new one. This ensures that for each unique
combination, only one object is ever created and maintained.
class TreeFactory:
    _tree_types = dict()

    @classmethod
    def get_tree_type(cls, name, texture, color):
        key = (name, texture, color)
        if key not in cls._tree_types:
            cls._tree_types[key] = TreeType(name, texture, color)
        return cls._tree_types[key]
4. Object Sharing: In the client code, when we plant multiple trees of the same type (e.g., "Pine"
with "PineTexture" and "Green" color), the factory ensures that only one TreeType object is
created for that combination. This object is then shared for all trees of that type, adhering to the
principle of object sharing in the Flyweight pattern.
All the above trees use the same TreeType object for the combination of "Pine", "PineTexture",
and "Green".
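This sharing can be checked directly with Python's `is` operator; here is a self-contained sketch of the same factory logic:

```python
class TreeType:
    def __init__(self, name, texture, color):
        self.name, self.texture, self.color = name, texture, color

class TreeFactory:
    _tree_types = {}

    @classmethod
    def get_tree_type(cls, name, texture, color):
        key = (name, texture, color)
        if key not in cls._tree_types:
            cls._tree_types[key] = TreeType(name, texture, color)
        return cls._tree_types[key]

pine1 = TreeFactory.get_tree_type("Pine", "PineTexture", "Green")
pine2 = TreeFactory.get_tree_type("Pine", "PineTexture", "Green")
oak = TreeFactory.get_tree_type("Oak", "OakTexture", "Brown")

print(pine1 is pine2)  # True: one shared TreeType for all pines
print(pine1 is oak)    # False: a different combination
```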
5. Separation of Intrinsic and Extrinsic State: The design ensures that the intrinsic state (name,
texture, color) is kept within the Flyweight ( TreeType ), while the extrinsic state (x, y, size, age) is
kept outside and passed in when needed. This separation allows the intrinsic state to be shared
while still providing flexibility in how the object is used.
6. Memory Efficiency: By ensuring that only one object is created for each unique combination of
name, texture, and color, the design minimizes memory usage. This is especially beneficial if the
system has to manage a large forest with thousands or millions of trees.
# Flyweight class
class VehicleModel:
    def __init__(self, make, model, color):
        self.make = make    # Intrinsic state
        self.model = model  # Intrinsic state
        self.color = color  # Intrinsic state

    def display(self, reg_number, owner):
        print(f"Vehicle Make: {self.make}, Model: {self.model}, Color: {self.color}, "
              f"Registration Number: {reg_number}, Owner: {owner}")

# Flyweight Factory
class VehicleFactory:
    _vehicle_models = dict()

    @classmethod
    def get_vehicle_model(cls, make, model, color):
        key = (make, model, color)
        if key not in cls._vehicle_models:
            cls._vehicle_models[key] = VehicleModel(make, model, color)
        return cls._vehicle_models[key]

# Client class
class VehicleRegistry:
    def __init__(self):
        self.vehicles = []

    def register_vehicle(self, make, model, color, reg_number, owner):
        vehicle_model = VehicleFactory.get_vehicle_model(make, model, color)
        self.vehicles.append((vehicle_model, reg_number, owner))

    def display_registry(self):
        for vehicle_model, reg_number, owner in self.vehicles:
            vehicle_model.display(reg_number, owner)

# Client code
registry = VehicleRegistry()

# Registering two vehicles of the same make, model, and color
# but different owners and registration numbers
registry.register_vehicle("Toyota", "Corolla", "White", "XYZ-1234", "John Doe")
registry.register_vehicle("Toyota", "Corolla", "White", "ABC-5678", "Jane Smith")
registry.display_registry()
Explanation:
1. VehicleModel is the Flyweight class. The intrinsic states are the make, model, and color of
the vehicle. These attributes are shared among vehicles of the same make, model, and color.
2. VehicleFactory is the Flyweight Factory. It ensures that for each unique combination of
make, model, and color, only one VehicleModel object is created.
3. VehicleRegistry is the client class. It uses the VehicleFactory to get vehicle models and stores
them along with their unique attributes (registration number and owner).
4. In the client code, even though we register two vehicles of the same make, model, and color,
only one VehicleModel object is created for that combination. This ensures memory
efficiency.
This design allows us to efficiently manage a large vehicle registry while minimizing memory usage
by sharing the common attributes of vehicles.
Let's break down this vehicle registry example against the same principles:
1. Shared State (Intrinsic State): The VehicleModel class represents the Flyweight. The attributes make , model , and color are the intrinsic states, shared among all vehicles of the same make, model, and color.
class VehicleModel:
    def __init__(self, make, model, color):
        self.make = make    # Intrinsic state
        self.model = model  # Intrinsic state
        self.color = color  # Intrinsic state
2. Non-shared State (Extrinsic State): The display method of the VehicleModel class takes
reg_number and owner as parameters. These represent the extrinsic states. They are unique to
each vehicle and are not stored within the Flyweight object but are instead passed by the client
when needed.
def display(self, reg_number, owner):
    print(f"Vehicle Make: {self.make}, Model: {self.model}, Color: {self.color}, "
          f"Registration Number: {reg_number}, Owner: {owner}")
3. Flyweight Factory: The VehicleFactory class acts as the Flyweight Factory. Its role is to
manage the creation and retrieval of VehicleModel objects. When a client requests a vehicle
model, the factory first checks if it already has that specific combination of make, model, and
color. If it does, it returns the existing object; otherwise, it creates a new one. This ensures that for
each unique combination, only one object is ever created and maintained.
class VehicleFactory:
    _vehicle_models = dict()

    @classmethod
    def get_vehicle_model(cls, make, model, color):
        key = (make, model, color)
        if key not in cls._vehicle_models:
            cls._vehicle_models[key] = VehicleModel(make, model, color)
        return cls._vehicle_models[key]
4. Object Sharing: In the client code, even though we register two vehicles of the same make,
model, and color, the factory ensures that only one VehicleModel object is created for that
combination. This object is then shared for both registrations, adhering to the principle of object
sharing in the Flyweight pattern.
Both registrations use the same VehicleModel object for the make, model, and color
combination of "Toyota", "Corolla", and "White".
5. Separation of Intrinsic and Extrinsic State: The design ensures that the intrinsic state (make,
model, color) is kept within the Flyweight, while the extrinsic state (registration number, owner) is
kept outside and passed in when needed. This separation allows the intrinsic state to be shared
while still providing flexibility in how the object is used.
6. Memory Efficiency: By ensuring that only one object is created for each unique combination of
make, model, and color, the design minimizes memory usage. This is especially beneficial if the
system has to manage a large number of vehicles.
In conclusion, the provided code example adheres to the principles of the Flyweight design
pattern by ensuring shared use of objects with intrinsic states, managing object creation and
sharing through a factory, and efficiently handling extrinsic states outside the shared objects.
🐍🚀 Object pool Pattern in Python 🐍🚀
A pattern that builds on Singleton is the Object Pool pattern. In this pattern, instead of being
restricted to one single object, you can use an object from a pool of objects. The pool size is set
depending on the use case. The Object Pool pattern is commonly seen in applications that have
multiple incoming requests and need to communicate with the database quickly (e.g., backend
apps, stream processing). Having a pool of DB connections allows incoming requests to
communicate with the DB without having to create a new connection (which takes longer) or having to
wait for a singleton object to finish serving other requests. However, note that the connections
must be returned to their initial state after use and before returning to the pool.
📌 The Object Pool Pattern is a creational design pattern that allows objects to be reused rather
than created and destroyed on demand. This is particularly useful when the instantiation of an
object is more expensive in terms of resources or time.
📌 The "pool" in the Object Pool Design Pattern refers to a collection of pre-instantiated objects
that are ready to be used. The idea is to have these objects available so that they can be quickly
borrowed and returned, avoiding the overhead of creating and destroying them repeatedly.
📌 The primary advantage of this pattern is performance optimization. By reusing objects that
have already been created, you save the overhead of re-instantiating them. This is especially
beneficial in scenarios where the cost of initializing an instance is high, the rate of instantiation of
a class is high, the instances are only needed for short periods of time, and instances are only
needed for specific and deterministic times.
📌 The Object Pool Pattern is often used in real-world scenarios like: - Database connection pools:
Creating a new database connection every time one is needed can be time-consuming. Instead, a
pool of connections is maintained. When a connection is needed, one is taken from the pool, and
when it's done, it's returned to the pool. - Thread pools: Threads are expensive to start and stop. A
thread pool is a collection of worker threads that efficiently execute asynchronous callbacks on
behalf of the application. - Memory allocation: In some systems, it's more efficient to allocate a
chunk of memory at once and then divvy it up among many objects, rather than allocating
memory for each object individually.
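The thread-pool case needs no custom code in Python: the standard library's concurrent.futures.ThreadPoolExecutor maintains exactly such a pool of reusable worker threads:

```python
from concurrent.futures import ThreadPoolExecutor

# Three worker threads are created once and reused for all five tasks,
# instead of starting and stopping a thread per task.
with ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(lambda n: n * n, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```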
📌 One thing to remember is that when an object is returned to the pool, it should be reset to its
initial state, so it's ready to be used again without any lingering state from its previous use.
Let's see an example WITHOUT and then WITH the "Object
Pool Pattern in Python"
1. Code without the Object Pool Pattern
import time

class SimpleDatabaseConnection:
    def __init__(self):
        self.id = id(self)
        print(f"Created new connection with id: {self.id}")

    def execute_query(self, query):
        # Hypothetical query method, to show each connection being used once
        print(f"Connection {self.id} executing: {query}")

def main_without_pool():
    # A brand-new connection is created for every request
    connections = [SimpleDatabaseConnection() for _ in range(5)]
    for i, conn in enumerate(connections):
        conn.execute_query(f"SELECT * FROM table_{i}")
📌 When you run main_without_pool() , you'll notice that for every query, a new connection is
created. This is inefficient, especially if creating a connection is resource-intensive or time-
consuming.
2. Code with the Object Pool Pattern
class ConnectionPool:
    def __init__(self, max_size=5):
        self._connections = [SimpleDatabaseConnection() for _ in range(max_size)]
        self._in_use = []

    def get_connection(self):
        if not self._connections:
            print("All connections are in use. Waiting...")
            time.sleep(2)  # Simulating waiting time
            return self.get_connection()
        conn = self._connections.pop()
        self._in_use.append(conn)
        return conn

    def release_connection(self, conn):
        # Return the connection to the pool so it can be reused
        self._in_use.remove(conn)
        self._connections.append(conn)
Now, let's modify our main function to use the connection pool:
def main_with_pool():
    pool = ConnectionPool(max_size=5)
    conn = pool.get_connection()
    # ... use the connection ...
    pool.release_connection(conn)  # Hand the connection back for reuse
📌 If all connections are in use and another connection is requested, the system will wait (in our
case, 2 seconds) and then try again.
📌 After a connection is done being used, it's important to release it back to the pool using
release_connection . This ensures that connections are reused, avoiding the overhead of
creating a new connection every time.
📌 The Object Pool Pattern, as demonstrated, helps in efficiently managing and reusing objects
(like database connections) that are expensive to create. This is especially beneficial in scenarios
where the rate of object creation is high, and objects are only needed for short periods.
By implementing the Object Pool Pattern, we've optimized our system to handle multiple requests
efficiently, reusing existing resources instead of continuously creating new ones.
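One refinement worth noting (not shown in the example above): wrapping borrow-and-release in a context manager guarantees the object is returned to the pool even if the code using it raises an exception. A minimal sketch, with a stand-in Connection class:

```python
import queue
from contextlib import contextmanager

class Connection:
    """Stand-in for a costly-to-create resource."""
    pass

class ConnectionPool:
    def __init__(self, max_size=2):
        self._available = queue.Queue(maxsize=max_size)
        for _ in range(max_size):
            self._available.put(Connection())

    @contextmanager
    def connection(self):
        conn = self._available.get()
        try:
            yield conn
        finally:
            self._available.put(conn)  # Always returned, even on error

pool = ConnectionPool()
with pool.connection() as conn:
    print(f"Using {conn}")
print(pool._available.qsize())  # 2: the connection is back in the pool
```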
In the refactored code with the Object Pool Pattern, a set number of connections are pre-
initialized and stored in the pool. When a connection is needed, it's simply retrieved from the pool,
eliminating the need to establish a new connection every time. This drastically reduces the
resource initialization overhead.
📌 Resource Reusability: The original code lacked a mechanism to reuse existing connections.
Once a connection was used, it was discarded. This not only led to the aforementioned
initialization overhead but also to potential resource wastage.
With the Object Pool Pattern, after a connection is used, it's returned to the pool, making it
available for subsequent requests. This reusability ensures that the system doesn't waste
resources by continuously creating and discarding them.
📌 Resource Limiting: Without the Object Pool Pattern, there's no limit to the number of
connections that can be created. In scenarios with a high number of incoming requests, this could
lead to resource exhaustion, potentially crashing the system or degrading its performance.
In the refactored code, the pool has a max_size , which limits the number of active connections. If
all connections are in use, the system will wait for a connection to be released back to the pool.
This mechanism prevents resource exhaustion and ensures that the system remains stable under
high loads.
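The same limiting behaviour can also be expressed with queue.Queue 's blocking get , which supports a timeout instead of a sleep-and-retry loop. A small sketch, using strings as stand-ins for connection objects:

```python
import queue

pool = queue.Queue(maxsize=2)  # a pool limited to two connections
pool.put("conn-1")
pool.put("conn-2")

borrowed = [pool.get(), pool.get()]  # both connections are now in use

timed_out = False
try:
    pool.get(timeout=0.1)  # pool is empty: blocks briefly, then raises
except queue.Empty:
    timed_out = True
    print("No connection available within the timeout")

print(borrowed)  # ['conn-1', 'conn-2']
```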
With the Object Pool Pattern, since connections are pre-initialized and reused, the system's
performance becomes more predictable. The time taken to retrieve a connection from the pool is
consistent, leading to more stable and predictable response times.
📌 Resource Cleanup and Maintenance: In the original code, since connections were discarded
after use, there was no mechanism to perform cleanup or maintenance on them.
In the refactored code, before a connection is returned to the pool, it can be reset to its initial
state, ensuring that any lingering data or states from previous operations don't interfere with
subsequent operations. This ensures the integrity and reliability of the connections in the pool.
In conclusion, the Object Pool Pattern in the refactored code addresses several critical issues
present in the original code. It optimizes resource usage, improves performance, and ensures the
system's stability and reliability, especially under high loads.
import queue

class DatabaseConnection:
    def __init__(self):
        # Simulate a costly database connection setup
        print("Setting up DB Connection...")

    def execute_query(self, query):
        print(f"Executing: {query}")

class DatabaseConnectionPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(DatabaseConnection())

    def get_connection(self):
        connection = self._available.get()
        self._in_use.add(connection)
        return connection

    def release_connection(self, connection):
        self._in_use.remove(connection)
        connection = None  # "Reset" the connection (for demonstration purposes)
        self._available.put(DatabaseConnection())  # Creates a brand-new connection

# Usage:
pool = DatabaseConnectionPool(3)
conn = pool.get_connection()
conn.execute_query("SELECT * FROM users")
pool.release_connection(conn)
📌 What this code does:
- We have a DatabaseConnection class that simulates setting up a connection and executing a query.
- When a connection is requested via get_connection , it's taken from the queue and added to the set of in-use connections.
- After the connection is done being used, it's returned to the pool via release_connection , where it's removed from the in-use set, reset (for demonstration purposes), and a new connection is added to the available queue.
📌 The primary advantage of this approach is that we don't have to bear the cost of setting up a
new database connection every time we need one. Instead, we reuse the connections from the
pool, which can lead to significant performance improvements in real-world scenarios where
database operations are frequent.
In the above code implementing the Object Pool Design Pattern for DatabaseConnectionPool, where does the concept of a "pool" come in?
In the provided code, the "pool" of objects (in this case, DatabaseConnection objects) is
represented by the combination of the self._available queue and the self._in_use set
within the DatabaseConnectionPool class. Together, these two data structures manage the entire
pool of database connections.
Here's a breakdown:
1. The Queue ( self._available ): This queue holds the DatabaseConnection objects that are
currently not in use and are available to be borrowed. When the DatabaseConnectionPool is
initialized with a given size (e.g., 3), it pre-creates that many DatabaseConnection objects
and places them into this queue.
2. The Set ( self._in_use ): This set holds the DatabaseConnection objects that have been
borrowed from the pool and are currently being used. When a connection is "gotten" from
the pool using the get_connection method, it's removed from the self._available queue
and added to this self._in_use set.
The fundamental principle of the Object Pool Design Pattern is indeed being followed here:
Borrowing an Object: When you want a database connection, you call get_connection .
This method takes a DatabaseConnection object from the self._available queue (if one
is available) and adds it to the self._in_use set, indicating that the connection is now in
use.
Returning an Object: After you're done using a database connection, you call
release_connection to return it to the pool. The method removes the connection from the
self._in_use set, resets it (in this case, by setting it to None for demonstration purposes),
and then creates a new DatabaseConnection object to add back to the self._available
queue.
However, there's a slight inefficiency in the release_connection method: instead of reusing the
existing connection, it's discarding it and creating a new one. A more efficient approach would be
to reset the state of the existing connection (if necessary) and then simply return it to the
self._available queue without creating a new one.
import queue

class DatabaseConnection:
    def __init__(self):
        # Simulate a costly database connection setup
        print("Setting up DB Connection...")

    def reset(self):
        # If there are any states or configurations to reset
        # in the connection, do it here.
        pass

class DatabaseConnectionPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(DatabaseConnection())

    def get_connection(self):
        connection = self._available.get()
        self._in_use.add(connection)
        return connection

    def release_connection(self, connection):
        self._in_use.remove(connection)
        connection.reset()  # Ensure the connection is in a clean state
        self._available.put(connection)  # Reuse the existing object

# Usage:
pool = DatabaseConnectionPool(3)
In this improved version:
- We added a reset method to the DatabaseConnection class. This method can be used to reset any state or configurations of the connection if needed.
- In the release_connection method of the DatabaseConnectionPool class, we now call the reset method on the connection to ensure it's in a clean state. Then, we directly return the existing connection to the self._available queue without creating a new one. This makes the object pool more efficient by reusing the existing objects.
In the context of the Object Pool Pattern, when an object is returned to the pool after use, it's
crucial to ensure that any state or configurations associated with that object are reset to their
default or initial values. This ensures that the next time the object is borrowed from the pool, it
doesn't carry over any residual state from its previous use.
For the DatabaseConnection class, the reset method is a placeholder to demonstrate this
concept. In a real-world scenario, a database connection might have various states or
configurations, such as:
1. Transactions: If a transaction was started but not committed or rolled back, it would need to
be terminated to ensure the connection doesn't remain in a transactional state.
2. Session Variables: Some databases allow setting session-specific variables. These would
need to be reset to their defaults.
3. Cursors: If the connection opened any cursors, they might need to be closed.
4. Timeouts or Locks: Any locks or timeouts set by the connection should be cleared.
The reset method would handle all these tasks. In our simplified example, the method is empty
because we're not simulating these complexities. However, in a real-world application, the reset
method would contain the necessary logic to bring the connection back to a clean, default state.
Here's a hypothetical example of what the reset method might look like for a more complex
DatabaseConnection :
def reset(self):
    # End any open transactions
    if self.transaction_open:
        self.rollback_transaction()
    # Reset any session-specific variables to their defaults
    self.reset_session_variables()
    # Close any cursors the connection opened
    for cursor in self.open_cursors:
        cursor.close()
    # Clear any locks or timeouts set by the connection
    self.clear_locks()
This is just a conceptual representation. The actual implementation would depend on the specifics
of the database and the library being used.
import queue

class RenderedObject:
    def __init__(self, object_type):
        self.object_type = object_type
        self.position = (0, 0)
        print(f"Rendering a new {self.object_type}...")

    def set_position(self, x, y):
        self.position = (x, y)

    def render(self):
        print(f"Rendering {self.object_type} at position {self.position}")

class RenderedObjectPool:
    def __init__(self, object_type, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(RenderedObject(object_type))

    def get_object(self):
        rendered_object = self._available.get()
        self._in_use.add(rendered_object)
        return rendered_object

    def release_object(self, rendered_object):
        self._in_use.remove(rendered_object)
        rendered_object.set_position(0, 0)  # Reset to the default position
        self._available.put(rendered_object)

# Usage:
character_pool = RenderedObjectPool("Character", 5)
vehicle_pool = RenderedObjectPool("Vehicle", 3)

character1 = character_pool.get_object()
character1.set_position(10, 20)
character1.render()
character_pool.release_object(character1)

vehicle1 = vehicle_pool.get_object()
vehicle1.set_position(50, 60)
vehicle1.render()
vehicle_pool.release_object(vehicle1)
📌 What this code does:
- We have a RenderedObject class that represents an entity in our game or simulation. It can be positioned and rendered.
- The RenderedObjectPool class manages a pool of RenderedObject objects. It uses a queue to keep track of available objects and a set to keep track of objects currently in use.
- When an object is requested via get_object , it's taken from the queue and added to the set of in-use objects.
- After the object is done being used (e.g., after it's rendered), it's returned to the pool via release_object . Here, we reset its position to a default value, remove it from the in-use set, and add it back to the available queue.
- In the usage example, we create pools for characters and vehicles. We then simulate getting a character and a vehicle, positioning them, rendering them, and then returning them to their respective pools.
📌 This approach is beneficial in gaming or simulation scenarios where there are frequent render
operations. By reusing the rendered objects, we can avoid the overhead of creating and
destroying them repeatedly, leading to smoother rendering and better performance.
import queue
import requests

class WebSession:
    def __init__(self):
        self.session = requests.Session()
        print("Establishing a new web session...")

    def fetch(self, url):
        return self.session.get(url).text

class WebSessionPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(WebSession())

    def get_session(self):
        web_session = self._available.get()
        self._in_use.add(web_session)
        return web_session

    def release_session(self, web_session):
        self._in_use.remove(web_session)
        self._available.put(web_session)

# Usage:
pool = WebSessionPool(3)
session1 = pool.get_session()
data1 = session1.fetch("https://example.com/page1")
print("Fetched data from page1")
pool.release_session(session1)

session2 = pool.get_session()
data2 = session2.fetch("https://example.com/page2")
print("Fetched data from page2")
pool.release_session(session2)
📌 What this code does: - We have a WebSession class that wraps around the
requests.Session() . This allows us to maintain a persistent connection to a website and reuse
the connection for multiple requests. - The WebSessionPool class manages a pool of WebSession
objects. It uses a queue to keep track of available sessions and a set to keep track of sessions
currently in use. - When a session is requested via get_session , it's taken from the queue and
added to the set of in-use sessions. - After the session is done being used (e.g., after fetching
data), it's returned to the pool via release_session . In this case, we don't need to reset any state;
we simply release the session. - In the usage example, we create a pool of 3 sessions. We then
simulate fetching data from two different URLs using two different sessions from the pool. After
fetching, we return the sessions to the pool.
📌 This approach is beneficial in web scraping scenarios where there are frequent fetch
operations. By reusing the web sessions, we can avoid the overhead of establishing a new
connection for each request, leading to faster data retrieval and reduced resource usage.
import queue
import time

class VideoProcessor:
    def __init__(self):
        print("Initializing a new video processor...")

    def process(self, video_data, operation):
        time.sleep(0.1)  # simulate a processing delay
        return f"{video_data} processed with {operation}"

class VideoProcessorPool:
    def __init__(self, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(VideoProcessor())

    def get_processor(self):
        processor = self._available.get()
        self._in_use.add(processor)
        return processor

    def release_processor(self, processor):
        self._in_use.remove(processor)
        self._available.put(processor)

# Usage:
pool = VideoProcessorPool(3)
processor1 = pool.get_processor()
print(processor1.process("video1.mp4", "sepia filter"))
pool.release_processor(processor1)
processor2 = pool.get_processor()
print(processor2.process("video2.mp4", "blur filter"))
pool.release_processor(processor2)
📌 What this code does: - We have a VideoProcessor class that simulates the processing of a
video. It takes in video data and an operation (like applying a filter) and returns the processed
result. - The VideoProcessorPool class manages a pool of VideoProcessor objects. It uses a
queue to keep track of available processors and a set to keep track of processors currently in use.
- When a processor is requested via get_processor , it's taken from the queue and added to the
set of in-use processors. - After the processor is done being used (e.g., after processing a video),
it's returned to the pool via release_processor . - In the usage example, we create a pool of 3
processors. We then simulate processing two different videos using two different processors from
the pool. After processing, we return the processors to the pool.
📌 This approach is beneficial in video processing scenarios where there are frequent processing
tasks. By reusing the video processors, we can avoid the overhead of initializing a new processor
for each task, leading to faster processing times and optimized resource usage.
As I also mentioned earlier, the fundamental principle of the Object Pool Design Pattern is that, instead of being limited to one single object, you can use an object from a pool of objects.
The "pool" in the Object Pool Design Pattern refers to a collection of pre-instantiated objects that
are ready to be used. The idea is to have these objects available so that they can be quickly
borrowed and returned, avoiding the overhead of creating and destroying them repeatedly.
In the VideoProcessorPool example, the pool is represented by two primary data structures: the
queue ( self._available ) and the set ( self._in_use ).
1. The Queue ( self._available ): This is a collection (specifically a Python queue.Queue ) of
VideoProcessor objects that are ready to be used. Objects are taken from this queue when
they are borrowed and put back when they are returned.
2. The Set ( self._in_use ): This is a collection (specifically a Python set ) of VideoProcessor
objects that are currently being used. When you "get" a processor from the available queue,
it's added to this set to indicate that it's in use. When you're done with the processor and
"release" it back to the pool, it's removed from this set and added back to the available
queue.
Together, these two collections form the "pool" of objects in the Object Pool Design Pattern. The
queue represents the part of the pool with objects ready to be used, and the set represents the
part of the pool with objects currently in use.
In the context of the Object Pool Design Pattern: - Borrowing an Object: This is done by getting
an object from the self._available queue and adding it to the self._in_use set. - Returning
an Object: This is done by removing the object from the self._in_use set and putting it back
into the self._available queue.
The fundamental principle you mentioned is indeed followed in this design. The pool consists of
multiple VideoProcessor objects, and you can borrow and return any of these objects to and
from the pool.
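The borrow/return cycle described above can be sketched generically. The ObjectPool class and its borrow / release names below are my own shorthand, not taken from the examples above:

```python
import queue

class ObjectPool:
    """Generic sketch of the pattern: an 'available' queue plus an 'in use' set."""
    def __init__(self, factory, size):
        self._available = queue.Queue(maxsize=size)
        self._in_use = set()
        for _ in range(size):
            self._available.put(factory())

    def borrow(self):
        obj = self._available.get()   # take a ready object from the queue
        self._in_use.add(obj)         # mark it as in use
        return obj

    def release(self, obj):
        self._in_use.remove(obj)      # no longer in use
        self._available.put(obj)      # back to the available queue

pool = ObjectPool(object, 2)
first = pool.borrow()
pool.release(first)
print(pool._available.qsize(), len(pool._in_use))  # 2 0
```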
📌 The observer pattern is a behavioral design pattern that establishes a one-to-many
dependency between objects. When one object (the subject) changes state, all its dependents
(observers) are notified and updated automatically. This pattern is particularly useful when you
want to decouple the core functionality of your code from the parts that react to changes.
A similar implementation of this design pattern is seen in generating feeds on your social
platforms - the Pub/Sub (Publisher/Subscriber) Model/Pattern. When a content publisher
publishes their posts, the subscribers get notified of the content.
- In the Observer pattern, the events are passed in a synchronous manner from the Subjects to the Observers, whereas in Pub/Sub patterns, the events are passed asynchronously.
- In the Observer pattern, both the Subjects and Observers reside in the same application locality, whereas they can reside in different localities in the Pub/Sub pattern.
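The distinction can be sketched with a minimal, single-process broker queue (illustrative only; real Pub/Sub systems deliver across processes or machines). The key point is that the publisher never calls subscribers directly; it only writes to the broker, and delivery happens later:

```python
import queue

# The broker queue decouples publisher and subscribers in time:
# the publisher only enqueues events and returns immediately
broker = queue.Queue()

def publish(event):
    broker.put(event)

def deliver(subscribers):
    # Delivery happens later (and could run on another thread or machine),
    # which is the decoupling that distinguishes Pub/Sub from Observer
    while not broker.empty():
        event = broker.get()
        for subscriber in subscribers:
            subscriber(event)

received = []
publish("new post from publisher")
deliver([lambda event: received.append(event)])
print(received)  # ['new post from publisher']
```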
A typical place to use the observer pattern is between your application and presentation layers.
Your application is the manager of the data and is the single source of truth, and when the data
changes, it can update all of the subscribers, that could be part of multiple presentation layers. For
example, the score was changed in a televised cricket game, so all the web browser clients, mobile
phone applications, leaderboard display on the ground and television graphics overlay, can all
now have the updated information synchronized.
📌 Use Cases: - GUI elements: When a button is clicked (subject), several actions might need to be
triggered in the application (observers). - Stock market: When the price of a stock changes
(subject), multiple investors or tools might need to be informed (observers). - Sensor systems:
When a sensor detects a change (subject), multiple systems or alarms might need to be triggered
(observers).
📌 The Python interpreter itself doesn't specifically implement the observer pattern, but it
provides all the necessary tools to do so. The dynamic nature of Python, with its first-class
functions and ability to add and remove attributes from objects at runtime, makes it particularly
well-suited for implementing patterns like this.
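As a quick illustration of that point, here is a minimal observer mechanism built purely from first-class functions. The Signal class is a hypothetical helper of my own, not a standard library API:

```python
class Signal:
    """Observer wiring via first-class functions: any callable can subscribe."""
    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)

    def disconnect(self, callback):
        self._callbacks.remove(callback)

    def emit(self, *args):
        # Notify every subscribed callable with the new state
        for callback in self._callbacks:
            callback(*args)

events = []
temperature_changed = Signal()
temperature_changed.connect(lambda t: events.append(f"display: {t}"))
temperature_changed.emit(25)
print(events)  # ['display: 25']
```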
class Blog:
    def __init__(self):
        self.articles = []
        self.followers = []

    def add_article(self, article):
        self.articles.append(article)
        # The Blog itself decides how every follower is notified
        for follower in self.followers:
            print(f"Notifying {follower.name} about: {article}")

class Follower:
    def __init__(self, name):
        self.name = name
📌 The above code directly couples the Blog and Follower classes. If we want to change the
way followers are notified, we'd have to modify the Blog class.
📌 This approach violates the Single Responsibility Principle. The Blog class is responsible for
both managing articles and notifying followers.
Refactored Code With Observer Design Pattern
To implement the Observer pattern, we'll introduce a Subject interface for the Blog and an
Observer interface for the Follower . This decouples the classes and allows for more flexibility:
from abc import ABC, abstractmethod

class Subject(ABC):
    @abstractmethod
    def attach(self, observer): pass
    @abstractmethod
    def detach(self, observer): pass
    @abstractmethod
    def notify(self): pass

class Observer(ABC):
    @abstractmethod
    def update(self, article): pass

class Blog(Subject):
    def __init__(self):
        self.articles = []
        self.observers = []
    def attach(self, observer):
        self.observers.append(observer)
    def detach(self, observer):
        self.observers.remove(observer)
    def add_article(self, article):
        self.articles.append(article)
        self.notify()
    def notify(self):
        for observer in self.observers:
            observer.update(self.articles[-1])

class Follower(Observer):
    def __init__(self, name):
        self.name = name
    def update(self, article):
        print(f"{self.name} received a new article: {article}")
📌 We've introduced the Subject and Observer interfaces. This allows us to easily add new
types of subjects or observers without modifying existing code.
📌 The Blog class now only notifies observers. It doesn't need to know how they handle the
updates. This decouples the classes and adheres to the Single Responsibility Principle.
Original Issue: In the initial code, the Blog class was directly responsible for updating the
followers. This means that any change in the way followers are notified or any addition of new
types of followers would require modifications to the Blog class.
Solution with Observer Pattern: By introducing the Subject and Observer interfaces, we've
separated the responsibilities. The Blog class (which implements the Subject interface) is only
responsible for maintaining a list of observers and notifying them. It doesn't need to know how
these observers handle the updates. The Follower class (which implements the Observer
interface) is solely responsible for how to react when notified.
Solution with Observer Pattern: With the Observer pattern in place, we can easily introduce new
types of observers (like EmailSubscriber or SMSNotifier ). Each of these new observers would
implement the Observer interface and define their own update method. The Blog class
remains unchanged, as it simply calls the update method on all attached observers.
Solution with Observer Pattern: The refactored code ensures that the Blog class is only responsible
for managing articles and maintaining a list of observers. The responsibility of reacting to new
articles is now with the Observer (e.g., Follower ). This separation of concerns means that
changes to how articles are managed won't affect the notification mechanism and vice versa.
📌 Ease of Extensibility
Original Issue: In the initial setup, adding new features or types of followers would require changes
to the core Blog class, making the system less maintainable and more prone to errors.
Solution with Observer Pattern: With the decoupling achieved through the Observer pattern, adding
new features becomes easier. For instance, if we want to introduce a feature where followers can
choose to be notified about specific categories of articles, we can do so by modifying only the
Observer classes without touching the Blog class.
In summary, the Observer Design Pattern in the refactored code provides a robust solution to the
issues present in the original code by ensuring decoupling, flexibility, adherence to design
principles, and ease of extensibility.
# the subject
class WeatherStation:
    def __init__(self):
        self._observers = []
        self._temperature = 0
    def register_observer(self, observer):
        self._observers.append(observer)
    def set_temperature(self, temperature):
        self._temperature = temperature
        self.notify_observers()
    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._temperature)

# represent observers
class CurrentConditionsDisplay:
    def update(self, temperature):
        print(f"Current conditions: {temperature} degrees Celsius")

# represent observers
class StatisticsDisplay:
    def __init__(self):
        self._max_temp = float('-inf')
        self._min_temp = float('inf')
    def update(self, temperature):
        self._max_temp = max(self._max_temp, temperature)
        self._min_temp = min(self._min_temp, temperature)
        print(f"Statistics: max {self._max_temp}, min {self._min_temp}")

# represent observers
class ForecastDisplay:
    def update(self, temperature):
        # Just a dummy forecast based on current temperature
        forecast = "sunny" if temperature > 20 else "rainy"
        print(f"Forecast: {forecast}")
Usage:
station = WeatherStation()
current_display = CurrentConditionsDisplay()
statistics_display = StatisticsDisplay()
forecast_display = ForecastDisplay()
station.register_observer(current_display)
station.register_observer(statistics_display)
station.register_observer(forecast_display)
station.set_temperature(25)
station.set_temperature(18)
In the usage example, we create a weather station and three display elements. We register
the displays as observers to the weather station. When we set a new temperature on the
weather station, all registered displays get updated automatically.
📌 The observer pattern, as shown, allows you to add new types of display elements in the future
without modifying the WeatherStation class. This decoupling is the core advantage of the
pattern. If you wanted to add a new type of display, you'd simply create a new class that
implements the update method and register an instance of it with the weather station.
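For example, a hypothetical new display could be added like this (sketched with a stripped-down WeatherStation so the example is self-contained):

```python
class WeatherStation:
    """Minimal subject sketch: register observers, notify them on change."""
    def __init__(self):
        self._observers = []

    def register_observer(self, observer):
        self._observers.append(observer)

    def set_temperature(self, temperature):
        for observer in self._observers:
            observer.update(temperature)

class HeatAlertDisplay:
    """Hypothetical new display: implementing update() is all it takes."""
    def __init__(self):
        self.alerts = []

    def update(self, temperature):
        if temperature > 30:
            self.alerts.append(temperature)
            print(f"Heat alert: {temperature} degrees Celsius!")

station = WeatherStation()
alert_display = HeatAlertDisplay()
station.register_observer(alert_display)
station.set_temperature(35)  # triggers an alert
station.set_temperature(20)  # below the threshold, no alert
```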
# the subject
class WeatherStation:
    def __init__(self):
        self._observers = []
        self._temperature = 0

    def set_temperature(self, temperature):
        self._temperature = temperature
        self.notify_observers()

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._temperature)
📌 Decoupling the Subject and Observers: The observer pattern emphasizes decoupling. In the
code, the WeatherStation doesn't need to know the specifics of what each observer does. It only
knows that they have an update method. This is a clear separation of concerns. The observers
can change their internal implementation without affecting the WeatherStation , and vice versa.
📌 Notification of State Changes: When the state of the subject changes (in this case, when the
temperature of the WeatherStation is set), all its observers are notified. This is achieved through
the notify_observers method in the WeatherStation class, which is called inside the
set_temperature method. Each observer's update method is then called with the new
temperature.
📌 Observers Define Their Reactions: Each observer decides how to react when notified of a
change. This is evident in the different implementations of the update method. The
CurrentConditionsDisplay simply prints the current temperature, the StatisticsDisplay
calculates and displays min/max temperatures, and the ForecastDisplay provides a rudimentary
forecast based on the temperature. This flexibility is a hallmark of the observer pattern, allowing
each observer to define its behavior upon receiving an update.
class CurrentConditionsDisplay:
def update(self, temperature):
print(f"Current conditions: {temperature} degrees Celsius")
class StatisticsDisplay:
    def __init__(self):
        self._max_temp = float('-inf')
        self._min_temp = float('inf')

    def update(self, temperature):
        self._max_temp = max(self._max_temp, temperature)
        self._min_temp = min(self._min_temp, temperature)
        print(f"Statistics: max {self._max_temp}, min {self._min_temp}")
class ForecastDisplay:
def update(self, temperature):
# Just a dummy forecast based on current temperature
forecast = "sunny" if temperature > 20 else "rainy"
print(f"Forecast: {forecast}")
📌 Consistent Interface for Observers: All observers implement a consistent interface, which in
this case is the update method. This ensures that the WeatherStation can notify any observer
without knowing its specific type or implementation details. This is why we can easily add new
types of displays or observers in the future, as long as they implement the update method.
In summary, the provided code adheres to the principles and requirements of the observer
pattern by establishing a one-to-many dependency, decoupling the subject from its observers,
allowing dynamic addition/removal of observers, notifying observers of state changes, letting
observers define their reactions, and maintaining a consistent interface for all observers.
In a real-time auction system, when a bid is placed on an item, multiple entities might be
interested:
class Auction:
    def __init__(self, item_name):
        self._observers = []
        self._highest_bid = 0
        self._highest_bidder = None
        self._item_name = item_name

    def register_observer(self, observer):
        self._observers.append(observer)

    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._highest_bid, self._highest_bidder, self._item_name)

    def place_bid(self, bid_amount, bidder_name):
        # Observers are only notified when a new highest bid arrives
        if bid_amount > self._highest_bid:
            self._highest_bid = bid_amount
            self._highest_bidder = bidder_name
            self.notify_observers()

class Bidder:
    def __init__(self, name):
        self._name = name

    def update(self, highest_bid, highest_bidder, item_name):
        if highest_bidder == self._name:
            print(f"{self._name}: I'm currently the highest bidder for {item_name}!")
        else:
            print(f"{self._name}: I've been outbid on {item_name}.")

class Auctioneer:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"New highest bid for {item_name}! It's {highest_bid} by {highest_bidder}.")

class DisplayBoard:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"--- Display Board ---\nCurrent highest bid for {item_name}: {highest_bid}\nBidder: {highest_bidder}\n----------------------")
Usage:
alice = Bidder("Alice")
bob = Bidder("Bob")
charlie = Bidder("Charlie")
auctioneer = Auctioneer()
display_board = DisplayBoard()
auction = Auction("Rare Painting")
auction.register_observer(alice)
auction.register_observer(bob)
auction.register_observer(charlie)
auction.register_observer(auctioneer)
auction.register_observer(display_board)
auction.place_bid(1000, "Alice")
auction.place_bid(1200, "Bob")
auction.place_bid(1100, "Charlie")  # This won't update observers since it's not the highest bid
auction.place_bid(1300, "Alice")
📌 Explanation:
The Auction class represents the subject. Whenever a new highest bid is placed, it notifies
all observers.
Bidder , Auctioneer , and DisplayBoard are observers. They each have their own update
method to react to changes in the auction.
In the usage example, we create an auction for a "Rare Painting". We then register three
bidders, an auctioneer, and a display board as observers. As bids are placed, the observers
are notified and react accordingly. If a bid isn't the highest, the observers aren't notified.
Let's see how the above code example adheres to the principles
and requirements of the observer pattern design in Python
📌 One-to-Many Dependency: The Auction class (the subject) has a one-to-many relationship
with its observers ( Bidder , Auctioneer , DisplayBoard ). This relationship is maintained in the
_observers list within the Auction class. When a new bid is placed, multiple entities (observers)
are informed about the change.
class Auction:
def __init__(self, item_name):
self._observers = []
self._highest_bid = 0
self._highest_bidder = None
self._item_name = item_name
    def notify_observers(self):
        for observer in self._observers:
            observer.update(self._highest_bid, self._highest_bidder, self._item_name)
📌 Decoupling the Subject and Observers: The Auction class is decoupled from its observers.
It doesn't need to know the specifics of each observer's behavior. It only knows that they have an
update method. This separation ensures that the Auction class can function independently of
the specific observers attached to it. For instance, the Auction class doesn't need to know how
the Bidder class decides to inform its user, or how the DisplayBoard presents the information.
📌 Notification of State Changes: When a new highest bid is placed on an item in the Auction ,
all its observers are notified. This is done through the notify_observers method, which is called
within the place_bid method. Each observer's update method is then invoked with the details of
the highest bid.
📌 Observers Define Their Reactions: Each observer has its own unique reaction to the state
change in the Auction . For example: - The Bidder checks if they are still the highest bidder and
informs them accordingly. - The Auctioneer announces the new highest bid. - The DisplayBoard
updates its display to show the current highest bid and bidder.
class Bidder:
    def __init__(self, name):
        self._name = name

    def update(self, highest_bid, highest_bidder, item_name):
        if highest_bidder == self._name:
            print(f"{self._name}: I'm currently the highest bidder for {item_name}!")
        else:
            print(f"{self._name}: I've been outbid on {item_name}.")

class Auctioneer:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"New highest bid for {item_name}! It's {highest_bid} by {highest_bidder}.")

class DisplayBoard:
    def update(self, highest_bid, highest_bidder, item_name):
        print(f"--- Display Board ---\nCurrent highest bid for {item_name}: {highest_bid}\nBidder: {highest_bidder}\n----------------------")
This flexibility in reaction is a core feature of the observer pattern, allowing each observer to
define its behavior upon receiving an update.
📌 Consistent Interface for Observers: All observers implement a consistent interface, the
update method. This ensures that the Auction can notify any observer without knowing its
specific type. Whether it's a Bidder , an Auctioneer , or a DisplayBoard , the Auction class can
communicate with them using the same method.
In conclusion, the auction system code adheres to the principles and requirements of the
observer pattern by establishing a one-to-many dependency, decoupling the subject from its
observers, allowing for dynamic management of observers, notifying observers of state changes,
letting observers define their reactions, and maintaining a consistent interface for all observers.
🐍🚀 Adapter Design Pattern. 🐍🚀
Structural design patterns propose ways of composing objects to create new functionality. One
of these patterns, which I will cover in this section, is the adapter pattern.
The adapter pattern is one of THE MOST pervasive structural design patterns, and it helps us
make two incompatible interfaces compatible. What does that really mean? If we have an old
component and we want to use it in a new system, or a new component that we want to use in an
old system, the two can rarely communicate without requiring code changes. But changing the
code is not always possible, either because we don't have access to it or because it is impractical.
In such cases, we can write an extra layer that makes all the required modifications for enabling
communication between the two interfaces. This layer is called an adapter.
📌 Use Cases: 1. Legacy Code Integration: When integrating legacy systems with newer systems,
the old code might not fit the new system's expectations. Instead of rewriting the legacy code, an
adapter can be used to bridge the gap. 2. Third-party Libraries: Sometimes, you might want to
integrate a third-party library into your application. If the library's interface doesn't match your
application's expectations, an adapter can help. 3. API Changes: If an API you rely on changes its
interface, instead of changing every place in your code where you use this API, you can write an
adapter to adapt the new API to the old one.
Code without Adapter Pattern:
class OldPrinter:
    def print_simple_text(self, text):
        print(text)

class NewSystem:
    def __init__(self, printer):
        self.printer = printer

    def print_advanced_text(self, text):
        # Expects the printer to support the new interface
        self.printer.print_advanced_text(text)

# Usage
printer = OldPrinter()
system = NewSystem(printer)
system.print_advanced_text("Hello, World!")  # AttributeError!
📌 The above code will throw an AttributeError because OldPrinter does not have a method
called print_advanced_text that the NewSystem expects. This is the incompatibility issue we're
facing.
class OldPrinter:
    def print_simple_text(self, text):
        print(text)

class PrinterAdapter:
    def __init__(self, old_printer):
        self.old_printer = old_printer

    def print_advanced_text(self, text):
        # Translate the call the new system makes into the old interface
        self.old_printer.print_simple_text(text)

class NewSystem:
    def __init__(self, printer):
        self.printer = printer

    def print_advanced_text(self, text):
        self.printer.print_advanced_text(text)

# Usage
old_printer = OldPrinter()
adapter = PrinterAdapter(old_printer)
system = NewSystem(adapter)
system.print_advanced_text("Hello, World!")  # prints: Hello, World!
📌 In the refactored code, we introduced the PrinterAdapter class. This class takes an instance
of OldPrinter and adapts its interface to match what NewSystem expects.
📌 By using the Adapter pattern, we've ensured that the two incompatible interfaces ( OldPrinter
and NewSystem ) can work together. This promotes code reusability and keeps the system
modular.
📌 Adapter Pattern Solution: The Adapter pattern's primary goal is to bridge the gap between
two incompatible interfaces. In our scenario, the incompatibility is between the NewSystem and
the OldPrinter due to the method name mismatch.
class PrinterAdapter:
    def __init__(self, old_printer):
        self.old_printer = old_printer

    def print_advanced_text(self, text):
        self.old_printer.print_simple_text(text)
📌 Usage with NewSystem: When we want to use OldPrinter with NewSystem , we don't pass
the OldPrinter instance directly. Instead, we wrap it inside the PrinterAdapter and pass the
adapter instance to NewSystem .
old_printer = OldPrinter()
adapter = PrinterAdapter(old_printer)
system = NewSystem(adapter)
📌 Benefits: 1. Code Reusability: We didn't have to modify the original OldPrinter class. This
means if there are other systems or parts of the codebase that rely on OldPrinter , they remain
unaffected. 2. Modularity: By introducing the adapter, we've kept the concerns separated. The
OldPrinter remains focused on its printing logic, the NewSystem on its advanced printing, and
the PrinterAdapter on bridging the gap between the two. 3. Flexibility: In the future, if there are
more printers with different interfaces, we can simply create new adapters for them without
changing the existing system.
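To illustrate the flexibility point, here is a hedged sketch of a hypothetical LaserPrinter with yet another interface; only a new adapter is written, and nothing else in the system changes:

```python
class LaserPrinter:
    """Hypothetical third printer with its own, different interface."""
    def output(self, text):
        formatted = f"[laser] {text}"
        print(formatted)
        return formatted

class LaserPrinterAdapter:
    """New adapter for the new printer; the existing system stays untouched."""
    def __init__(self, printer):
        self.printer = printer

    def print_advanced_text(self, text):
        # Translate the method the system calls into the printer's own method
        return self.printer.output(text)

adapter = LaserPrinterAdapter(LaserPrinter())
result = adapter.print_advanced_text("Hello, World!")  # prints: [laser] Hello, World!
```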
📌 Conclusion: The Adapter pattern allowed us to integrate the OldPrinter with the NewSystem
seamlessly. It provided a solution that is modular, reusable, and flexible, addressing the primary
issue of method name mismatch without altering the original components.
Let's consider a scenario where you have an old system that uses a Book class to display book
details. Now, you have a new system that uses an EBook class, but you want to use the old
system's display method without changing its code.
# Old System
class Book:
def __init__(self, title, author):
self.title = title
self.author = author
def display(self):
return f"Title: {self.title}, Author: {self.author}"
# New System
class EBook:
def __init__(self, title, author, format):
self.title = title
self.author = author
self.format = format
def get_details(self):
return self.title, self.author, self.format
# Adapter
class EBookAdapter:
def __init__(self, ebook):
self.ebook = ebook
def display(self):
title, author, _ = self.ebook.get_details()
return f"Title: {title}, Author: {author}"
# Usage
ebook = EBook("Digital Fortress", "Dan Brown", "PDF")
ebook_adapter = EBookAdapter(ebook)
print(ebook_adapter.display())  # Output: Title: Digital Fortress, Author: Dan Brown
1. The Book class is from the old system. It has a method display which returns the title and
author of the book.
2. The EBook class is from the new system. It has an additional attribute format and a method
get_details which returns the title, author, and format of the ebook.
3. The EBookAdapter class is our adapter. It takes an instance of EBook and provides a
display method similar to the Book class. Inside this method, it calls the get_details
method of EBook and formats the data to match the old system's expectations.
4. In the usage section, we create an instance of EBook , wrap it with EBookAdapter , and then
call the display method. This allows us to use the new EBook class with the old system's
display functionality.
📌 Under-the-hood:
When we talk about the Adapter Pattern, it's essential to understand that it's not about changing
the behavior of objects but about providing a consistent interface. The adapter acts as a wrapper
that translates method calls. In our example, the EBookAdapter doesn't change the behavior of
the EBook class. Instead, it provides a method ( display ) that the old system expects, and
internally, it calls the appropriate method ( get_details ) of the EBook class.
In terms of performance, the overhead introduced by the adapter is generally negligible. However,
the primary benefit is the maintainability and scalability of the codebase. By using adapters, you
can easily integrate new components or replace old ones without affecting the rest of the system.
class EBookAdapter:
def __init__(self, ebook):
self.ebook = ebook
The __init__ method of the EBookAdapter class accepts an argument named ebook . This
argument is expected to be an instance of the EBook class. We can infer this based on the
subsequent usage of the ebook object within the adapter:
def display(self):
title, author, _ = self.ebook.get_details()
return f"Title: {title}, Author: {author}"
Here, the display method of the EBookAdapter class calls the get_details method on the
ebook object. Since the get_details method is a method of the EBook class, it implies that the
ebook object is expected to be an instance of the EBook class.
📌 To make the relationship more explicit and to ensure type safety, one could have used
Python's type hints:
class EBookAdapter:
def __init__(self, ebook: EBook):
self.ebook = ebook
With this type hint, it's clear that the ebook parameter should be an instance of the EBook class.
# Old System
class Book:
def __init__(self, title, author):
self.title = title
self.author = author
def display(self):
return f"Title: {self.title}, Author: {self.author}"
# New System
class EBook:
def __init__(self, title, author, format):
self.title = title
self.author = author
self.format = format
def get_details(self):
return self.title, self.author, self.format
# Adapter
class EBookAdapter:
def __init__(self, ebook):
self.ebook = ebook
def display(self):
title, author, _ = self.ebook.get_details()
return f"Title: {title}, Author: {author}"
# Usage
ebook = EBook("Digital Fortress", "Dan Brown", "PDF")
ebook_adapter = EBookAdapter(ebook)
print(ebook_adapter.display())  # Output: Title: Digital Fortress, Author: Dan Brown
📌 Differing Interfaces: 1. The Book class (old system) has a method display that returns a
string representation of the book's title and author. 2. The EBook class (new system) does not
have a display method. Instead, it has a get_details method that returns a tuple containing
the title, author, and format of the ebook.
📌 And that's the whole reason for having the Adapter class: The primary role of the
EBookAdapter is to make the EBook class compatible with the old system's expectations. The old
system expects a display method that returns a string representation of the book's details.
📌 Using get_details : 1. To achieve this compatibility, the EBookAdapter class introduces its
own display method. 2. Inside this display method, it needs to fetch the title and author of the
EBook instance to format them in the desired string representation. 3. The get_details method
of the EBook class provides this information. By calling self.ebook.get_details() , the adapter
fetches the title, author, and format of the ebook. 4. The line title, author, _ =
self.ebook.get_details() unpacks the returned tuple. The underscore ( _ ) is a conventional
placeholder for values we don't need (in this case, the format).
class Club:
def __init__(self, name):
self.name = name
def __str__(self):
return f'the club {self.name}'
def organize_event(self):
return 'brings in artist to perform'
Now, let's say you bring in two interesting classes: Musician and Dancer from an external library,
and you want these new classes to work seamlessly with the existing Club classes.
class Musician:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the musician {self.name}'

    def play(self):
        return 'plays music'
class Dancer:
def __init__(self, name):
self.name = name
def __str__(self):
return f'the dancer {self.name}'
def dance(self):
return 'does a dance performance'
But here's the catch: you cannot make a lot of changes to either your old Club class or the new
Musician and Dancer classes from the external library.
The external Musician and Dancer classes have a play() or dance() method respectively.
The client code, i.e. the Club class, only has an organize_event() method. It has no idea about
play() or dance() (on the respective classes from the external library).
We create a generic Adapter class that allows us to adapt several objects with different interfaces
into one unified interface. The obj argument of the __init__() method is the object that we want to
adapt, and adapted_methods is a dictionary containing key/value pairs matching the method the
client calls and the method that should be called.
Take a good look, and then I will go through each step of the code below.
def __str__(self):
return f'the musician {self.name}'
def play(self):
return 'plays music'
class Dancer:
def __init__(self, name):
self.name = name
def __str__(self):
return f'the dancer {self.name}'
def dance(self):
return 'does a dance performance'
################################################
# Below is my old Club Class in my main file.
# The whole purpose of this code is to create
# an Adapter class so that this old Club class
# can interact with new classes coming from
# the external.py file
################################################
class Club:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f'the club {self.name}'

    def organize_event(self):
        return 'brings in artist to perform'

class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)

def main():
    objects = [Club('Jazz Cafe'), Musician('Roy Ayers'), Dancer('Shane Sparks')]
    for obj in objects:
        if hasattr(obj, 'play'):
            adapted_methods = dict(organize_event=obj.play)
            obj = Adapter(obj, adapted_methods)
        elif hasattr(obj, 'dance'):
            adapted_methods = dict(organize_event=obj.dance)
            obj = Adapter(obj, adapted_methods)
        print(f'{obj} {obj.organize_event()}')

if __name__ == "__main__":
    main()
the club Jazz Cafe brings in artist to perform
the musician Roy Ayers plays music
the dancer Shane Sparks does a dance performance
Alright, let's dissect the provided code and understand its intricacies.
📌 The code begins by defining two classes, Musician and Dancer , both of which are assumed
to be part of an external library. Each of these classes has its own unique method ( play for
Musician and dance for Dancer ) to represent the action they perform.
📌 The Club class represents a venue that can organize events. Its primary method is
organize_event , which signifies hiring an artist for a performance.
📌 The challenge here is that while the Club class uses the method organize_event to signify a
performance, the external classes ( Musician and Dancer ) use different methods ( play and
dance respectively). This discrepancy in method naming is where the need for an adapter arises.
📌 The Adapter class is designed to bridge this gap. It takes in an object and a dictionary of
adapted methods. The magic happens in this line: self.__dict__.update(adapted_methods) .
This line dynamically updates the instance dictionary with the adapted methods, essentially
allowing us to "rename" or "alias" methods.
📌 Under-the-hood: The __dict__ attribute of a Python object is a dictionary that contains the
object's instance variables and values. By updating this dictionary directly, we can dynamically
add, modify, or alias attributes and methods of the object.
📌 In the main function, a list of objects ( objects ) is created, containing an instance of the Club ,
Musician , and Dancer classes. The goal is to loop through each object and call the
organize_event method.
📌 In the main() function, for the Musician and Dancer objects, the adapted_methods
dictionary is created to map the organize_event method to the respective play or dance
method:
📌 As we iterate over each object in the list, the code checks if the object has either a play or
dance method. If it does, the code prepares a dictionary ( adapted_methods ) that maps the
organize_event method to either the play or dance method of the object.
📌 Once the mapping dictionary is prepared, the object is wrapped (or adapted) using the
Adapter class. This effectively gives the object an organize_event method that points to its
inherent play or dance method.
📌 Finally, for each object (whether it's the original Club object or the adapted Musician and
Dancer objects), the organize_event method is called, and the result is printed.
📌 In essence, the Adapter pattern here allows the client code to interact with the Musician and
Dancer classes using the same interface ( organize_event ) as it does with the Club class, even
though the external classes have different method names. This ensures a consistent client
interface regardless of the underlying class implementations.
Let's delve deeper into the Adapter class and its utilization within the main() method to
understand how it's implementing the Adapter pattern.
In the main() function, for the Musician and Dancer objects, the adapted_methods dictionary
is created to map the organize_event method to the respective play or dance method:
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)

# inside main(), after adapting each object:
print(f'{obj} {obj.organize_event()}')
📌 Adapter Class Overview: The Adapter class is designed to take an object and a dictionary of
methods that need to be adapted. The primary goal of this class is to allow the object to be used
in a different context than it was originally intended for, without modifying the object's original
code.
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)
📌 Key Points: 1. The Adapter class constructor ( __init__ ) accepts two parameters: obj (the
object to be adapted) and adapted_methods (a dictionary of methods to adapt). 2. The line
self.__dict__.update(adapted_methods) is crucial. It dynamically updates the instance's
dictionary with the adapted methods. This allows the Adapter to "alias" or "rename" methods of
the obj .
📌 Under-the-hood: The __dict__ attribute is a dictionary representation of an object's
namespace. By updating this dictionary, you can dynamically add or modify attributes and
methods of the object.
📌 Adapter in Action within main(): Let's break down the main() method step-by-step to see
how the Adapter pattern is applied:
1. We build a list of objects containing a Club instance along with the Musician and Dancer
instances, and iterate over it.
2. Depending on which method the object has ( play for Musician or dance for Dancer ), we
create a dictionary ( adapted_methods ) that maps the organize_event method to the
respective method of the object:
if hasattr(obj, 'play'):
    adapted_methods = dict(organize_event=obj.play)
elif hasattr(obj, 'dance'):
    adapted_methods = dict(organize_event=obj.dance)
3. Now, we use the Adapter class to wrap the object, effectively adapting it:
obj = Adapter(obj, adapted_methods)
At this point, the obj (whether it's a Musician or Dancer ) has been adapted to have an
organize_event method. This method will internally call the object's play or dance method,
respectively.
4. Finally, we call the organize_event method on the object (whether it's the original Club or
the adapted Musician and Dancer ):
print(f'{obj} {obj.organize_event()}')
📌 Summary: The Adapter pattern's essence here is to allow the client code (in this case, the
main() method) to interact with the Musician and Dancer classes using the same method
name ( organize_event ) as it does with the Club class. This is achieved without modifying the
original classes but by wrapping them with the Adapter class that provides the desired interface.
How exactly does the self.__dict__.update(adapted_methods)
line dynamically update the instance dictionary with the
adapted methods, essentially allowing us to "rename" or
"alias" methods?
And how does it facilitate the Adapter pattern here?
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)

# inside main(), after adapting each object:
print(f'{obj} {obj.organize_event()}')
📌 Understanding __dict__ : Every instance of a Python class has a __dict__ attribute, which
is a dictionary containing the instance's attributes and their values. This dictionary is mutable,
which means you can add, modify, or delete attributes dynamically.
For example:
class Sample:
    def __init__(self, x):
        self.x = x

s = Sample(10)
print(s.__dict__)  # Output: {'x': 10}
📌 Using update with __dict__ : The update method of dictionaries allows you to merge one
dictionary into another. When you use update on self.__dict__ , you're essentially adding (or
updating) attributes to the instance dynamically.
📌 In the Context of the Adapter Class: The Adapter class is designed to take an object ( obj )
and a dictionary of methods ( adapted_methods ) that need to be adapted. The goal is to allow the
object to have a method (or methods) that it didn't originally have or to "rename" its existing
methods.
📌 Breaking Down the main() Function: In the main() function, for the Musician and Dancer
objects, the adapted_methods dictionary is created to map the organize_event method to the
respective play or dance method:
if hasattr(obj, 'play'):
    adapted_methods = dict(organize_event=obj.play)
elif hasattr(obj, 'dance'):
    adapted_methods = dict(organize_event=obj.dance)
At this point, due to the self.__dict__.update(adapted_methods) line in the Adapter class, the
obj (whether it's a Musician or Dancer ) now has an organize_event method. This method will
internally call the object's play or dance method, respectively.
Now, let's dive into the adapter pattern in the context of the PyTorch codebase.
📌 One of the areas where PyTorch uses the adapter pattern is in its interaction with the
underlying tensor libraries like ATen . PyTorch's tensor operations might be built on top of ATen ,
but the interface that PyTorch provides to its users is different from the raw ATen interface. This
is especially true when considering backward compatibility or when introducing new features.
ATen is fundamentally a tensor library, on top of which almost all other Python and C++ interfaces
in PyTorch are built. It provides a core Tensor class, on which many hundreds of operations are
defined.
The ATen tensor library backing PyTorch is a simple tensor library that exposes the Tensor
operations in Torch directly in C++14. ATen's API is auto-generated from the same declarations
PyTorch uses, so the two APIs will track each other over time.
Tensor types are resolved dynamically, such that the API is generic and does not include
templates. That is, there is one Tensor type. It can hold a CPU or CUDA Tensor, and the tensor may
have Doubles, Float, Ints, etc. This design makes it easy to write generic code without templating
everything.
class PyTorchTensorAdapter:
    def __init__(self, aten_tensor):
        self.aten_tensor = aten_tensor
📌 The benefit of this approach is that PyTorch can maintain its own interface without being
tightly coupled to the underlying ATen library. If ATen changes its method names or signatures in
the future, only the adapter needs to be updated. This ensures that the main PyTorch codebase
remains unaffected by such changes.
📌 Now, let's discuss the underlying principle here. The adapter pattern is essentially about
abstraction. In software design, we often want to abstract away the details of one component
when interfacing with another. This is especially true in large systems or frameworks like PyTorch,
where multiple components (like tensors, autograd, optimizers, etc.) need to interact seamlessly.
By using adapters, we can ensure that each component maintains its own independent interface
while still being able to communicate with others. This not only makes the codebase more
modular but also easier to maintain and extend.
📌 Another area where the adapter pattern can be observed in PyTorch is in its integration with
other libraries, especially NumPy. PyTorch tensors and NumPy arrays share a lot of similarities,
but they are distinct entities with different underlying implementations. However, for ease of use
and to provide a seamless experience to users who are familiar with NumPy, PyTorch provides
utilities to convert between PyTorch tensors and NumPy arrays.
📌 Consider the methods numpy() and from_numpy() . The numpy() method converts a PyTorch
tensor to a NumPy array, while from_numpy() does the opposite. Here's a brief look at how this
might be implemented:
class TorchTensor:
    ...
    def numpy(self):
        # Convert the PyTorch tensor to a NumPy array
        return numpy_adapter.to_numpy(self)

    @staticmethod
    def from_numpy(numpy_array):
        # Convert a NumPy array to a PyTorch tensor
        return numpy_adapter.from_numpy(numpy_array)
In this hypothetical snippet, numpy_adapter acts as an adapter between PyTorch tensors and
NumPy arrays. It abstracts away the details of the conversion, allowing PyTorch tensors to
maintain their own interface while still being able to interact with NumPy arrays.
📌 The underlying principle here is interoperability. In the world of data science and machine
learning, there are many libraries, each with its strengths. Users often switch between libraries
depending on the task at hand. By providing adapters that allow for easy conversion between
different data structures, libraries like PyTorch ensure that users don't get locked into a particular
ecosystem and can leverage the best tools for the job.
In essence, the adapter pattern is pervasive in complex systems like PyTorch. It ensures that
different components, whether they are internal modules or external libraries, can work together
harmoniously without getting entangled in each other's specific implementations. This design
principle is crucial for the scalability, maintainability, and extensibility of such systems.
1. Original System: This system expects any model passed to it to have a predict() method.
2. PyTorch Model: PyTorch models use the forward() method for inference.
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)

3. The Adapter: An adapter class wraps the PyTorch model and exposes the predict() method
the original system expects, delegating internally to the model's forward pass:

class PyTorchModelAdapter:
    def __init__(self, pytorch_model):
        self.model = pytorch_model

    def predict(self, input_data):
        # Calling the model invokes its forward() method
        return self.model(input_data)
4. Using the Adapter: Now, you can use the adapter to integrate the PyTorch model into the
original system:

model = SimpleNN()
adapted_model = PyTorchModelAdapter(model)
# The original system can now call adapted_model.predict(input_tensor)
# exactly as it would for any model that natively offers predict().
📌 In this example, PyTorchModelAdapter acts as an adapter, bridging the gap between the
original system's expectations and the PyTorch model's native interface. The adapter wraps
around the PyTorch model and provides the predict() method expected by the system,
internally calling the model's forward() method.
📌 This approach ensures that neither the original system nor the PyTorch model needs to be
modified. The adapter pattern provides a layer of abstraction that allows the two to communicate
seamlessly.
🐍🚀 The factory method in Python is based on a single function that's written to handle our object creation task. 🐍🚀
We execute it, passing a parameter that provides information about what we want. As a result, the
object we wanted is created.
Interestingly, when we use the factory method, we don't need to know any details about how the
resulting object is implemented and where it is coming from.
The underlying idea here is that, when developing code, you may instantiate objects directly in
methods or in classes. While this is quite normal, you may want to add an extra abstraction
between the creation of the object and where it is used in your project.
You can use the Factory pattern to add that extra abstraction. Adding an extra abstraction will also
allow you to dynamically choose classes to instantiate based on some kind of logic.
Before the abstraction, your class or method would directly create a concrete class. After adding
the factory abstraction, the concrete class is created outside of the current class/method, typically
in a subclass or a dedicated factory.
Imagine an application for designing houses and the house has a chair already added on the floor
by default. By adding the factory pattern, you could give the option to the user to choose different
chairs, and how many at runtime. Instead of the chair being hard coded into the project when it
started, the user now has the option to choose.
Adding this extra abstraction also means that the complications of instantiating extra objects can
now be hidden from the class or method that is using it.
This separation also makes your code easier to read and document.
Product Interface: The interface describing the attributes and methods that the Factory will
require in order to create the final product/object.
Creator: The Factory class. Declares the Factory method that will return the object requested
from it.
Concrete Product: The object returned from the Factory. The object implements the Product
interface.
Let's see an example WITHOUT and then WITH the "Factory design pattern in Python"
class PDFDocument:
    def print(self):
        print("Printing PDF document...")

class WordDocument:
    def print(self):
        print("Printing Word document...")

class ExcelDocument:
    def print(self):
        print("Printing Excel document...")

def print_document(document_type):
    if document_type == "pdf":
        doc = PDFDocument()
    elif document_type == "word":
        doc = WordDocument()
    elif document_type == "excel":
        doc = ExcelDocument()
    else:
        raise ValueError("Unknown document type")
    doc.print()
📌 The creation logic of documents is mixed with the printing logic in the print_document
function. This makes the function less cohesive and harder to maintain.
from abc import ABC, abstractmethod

# Product Interface
class Document(ABC):
    @abstractmethod
    def print(self):
        pass

# Concrete Products
class PDFDocument(Document):
    def print(self):
        print("Printing PDF document...")

class WordDocument(Document):
    def print(self):
        print("Printing Word document...")

class ExcelDocument(Document):
    def print(self):
        print("Printing Excel document...")

# Creator
class DocumentFactory:
    @staticmethod
    def create_document(document_type):
        if document_type == "pdf":
            return PDFDocument()
        elif document_type == "word":
            return WordDocument()
        elif document_type == "excel":
            return ExcelDocument()
        else:
            raise ValueError("Unknown document type")

# Concrete Creator
def print_document(document_type):
    doc = DocumentFactory.create_document(document_type)
    doc.print()
📌 If we want to add a new document type, we only need to modify the DocumentFactory class.
This makes our code more maintainable and adheres to the Open/Closed Principle.
📌 The Document class (Product Interface) ensures that all document types (Concrete Products)
have a print method. This provides a consistent interface for the client code (Concrete Creator).
📌 The DocumentFactory class (Creator) abstracts away the creation logic, allowing the client
code to remain unchanged even if the underlying creation logic changes.
In conclusion, by using the Factory design pattern, we've made our code more modular,
maintainable, and extensible.
Let's delve deeper into how the refactored code with the Factory design pattern addresses the issues of the original code.
Original Issues:
1. Modification Required for New Document Types: In the original code, if we wanted to
introduce a new document type, we had to modify the print_document function. This is
problematic because it violates the Open/Closed Principle.
2. Mixed Responsibilities: The print_document function in the original code was responsible
for both creating the document object and printing it. This mixing of responsibilities makes
the function less cohesive and harder to maintain.
📌 Adherence to the Open/Closed Principle: With the Factory pattern in place, if we want to
introduce a new document type, we only need to make changes to the DocumentFactory class.
The print_document function remains untouched. This means our system is now more
extensible, as it's open to extension (adding new document types) but closed for modification (no
need to modify existing functions).
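As a self-contained illustration of that extension (MarkdownDocument is a hypothetical new type, and the method here returns a string rather than printing, purely so the behavior is easy to verify):

```python
from abc import ABC, abstractmethod

class Document(ABC):
    @abstractmethod
    def render(self):
        pass

class PDFDocument(Document):
    def render(self):
        return "Printing PDF document..."

class MarkdownDocument(Document):          # the new concrete product
    def render(self):
        return "Printing Markdown document..."

class DocumentFactory:
    @staticmethod
    def create_document(document_type):
        if document_type == "pdf":
            return PDFDocument()
        elif document_type == "markdown":  # the only factory change needed
            return MarkdownDocument()
        else:
            raise ValueError("Unknown document type")

print(DocumentFactory.create_document("markdown").render())
```

Any client built against Document and DocumentFactory picks up the new type with zero changes of its own.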
📌 Consistent Interface for Document Types: The introduction of the Document class (Product
Interface) ensures that all document types (Concrete Products) implement the print method.
This provides a consistent interface for the client code, ensuring that any document type returned
by the factory can be printed without issues. This reduces the risk of runtime errors and makes
the system more robust.
📌 Abstraction of Creation Logic: The DocumentFactory class abstracts away the creation logic
of document objects. This abstraction means that the client code (in this case, the
print_document function) doesn't need to know the specifics of how each document type is
instantiated. This encapsulation of creation logic makes the system more flexible. For instance, if
the instantiation process for a particular document type changes in the future, we only need to
update the DocumentFactory class without affecting the client code.
📌 Centralized Creation Logic: By centralizing the creation logic within the DocumentFactory
class, we ensure that there's a single point of truth for object creation. This centralized approach
reduces the risk of inconsistencies and errors in the system. If there's a change in how a
document type should be instantiated, we only need to update it in one place.
In summary, the Factory design pattern provides a structured way to handle object creation. By
abstracting and centralizing this process, the pattern ensures that our code remains modular,
maintainable, and extensible. The refactored code with the Factory pattern effectively addresses
the issues present in the original code, making it more robust and future-proof.
📌 The primary advantage of the factory method is abstraction. It abstracts the process of object
creation and allows the client code to be decoupled from the specific classes that are instantiated.
This means that if you want to change the object being created, you only need to modify the
factory method, not all the places in your code where the object is used.
📌 Use Cases: 1. When the exact type of the object isn't known until runtime. For instance, a GUI
library might have a button factory. Depending on the operating system, it might create a
WindowsButton, MacButton, or LinuxButton. 2. When the creation process is more complex than
just "newing" up an object. For example, if there's a need to pull from a pool of objects instead of
creating a new one (object pooling). 3. When you want to keep track of the number of objects
created, or when you want to limit the number of instances of a particular class.
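Use case 3 can be sketched with a hypothetical connection factory that caps how many objects it hands out (ConnectionFactory and Connection are illustrative names, not a real library API):

```python
class Connection:
    """Stand-in for an expensive resource we want to ration."""
    pass

class ConnectionFactory:
    """Factory that tracks creations and enforces an instance limit."""
    def __init__(self, max_instances):
        self.max_instances = max_instances
        self.created = 0

    def create(self):
        if self.created >= self.max_instances:
            raise RuntimeError('connection limit reached')
        self.created += 1
        return Connection()

factory = ConnectionFactory(max_instances=2)
factory.create()
factory.create()
try:
    factory.create()
except RuntimeError as e:
    print(e)   # connection limit reached
```

Because every creation funnels through create(), the count is enforced in one place; direct constructor calls scattered through the codebase could never guarantee that.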
class Animal:
    def speak(self):
        pass

class Dog(Animal):
    def speak(self):
        return "Woof!"

class Cat(Animal):
    def speak(self):
        return "Meow!"

class Fish(Animal):
    def speak(self):
        return "..."

def animal_factory(animal_type):
    if animal_type == "Dog":
        return Dog()
    elif animal_type == "Cat":
        return Cat()
    elif animal_type == "Fish":
        return Fish()
    else:
        raise ValueError(f"Unknown animal type: {animal_type}")

# Usage
animal = animal_factory("Dog")
print(animal.speak())  # Outputs: Woof!
The client code (the usage part) doesn't need to know about the specific animal classes. It just
calls the factory method and gets an animal object. This decouples the object creation from
the client code.
📌 Under the hood: - The factory method pattern leverages polymorphism. The client code
interacts with the base class ( Animal in our case), but the actual object returned is one of its
subclasses. - This pattern promotes the open/closed principle. If you want to add a new animal
type in the future, you can just add a new subclass and modify the factory method. The existing
client code doesn't need to change.
📌 In conclusion, the factory method pattern is a powerful tool for abstracting object creation. It
promotes code reusability, decoupling, and scalability. By understanding and leveraging this
pattern, you can write more maintainable and flexible code.
import abc

class Shape(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def calculate_area(self):
        pass

    @abc.abstractmethod
    def calculate_perimeter(self):
        pass
Now, WITHOUT a factory function we need to create several concrete, more specific shapes:
class Rectangle(Shape):
    def __init__(self, height, width):
        self.height = height
        self.width = width

    def calculate_area(self):
        return self.height * self.width

    def calculate_perimeter(self):
        return 2 * (self.height + self.width)

class Square(Shape):
    def __init__(self, width):
        self.width = width

    def calculate_area(self):
        return self.width ** 2

    def calculate_perimeter(self):
        return 4 * self.width

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return 3.14 * self.radius * self.radius

    def calculate_perimeter(self):
        return 2 * 3.14 * self.radius
So far, we have created an abstract class and extended it to suit different shapes that will be
available in our library.
BUT there is an issue now: in order to create the different shape objects, clients have to know the
names and details of our shapes and perform the creation themselves.
The Factory Method design pattern will help us abstract the available shapes from the client,
i.e. the client does not have to know all the shapes available, but rather only create what they
need during runtime. It will also allow us to centralize and encapsulate the object creation.
Let us achieve this by creating a ShapeFactory that will be used to create the specific shape classes
based on the client's input:
class ShapeFactory:
    def create_shape(self, name):
        if name == 'circle':
            radius = input("Enter the radius of the circle: ")
            return Circle(float(radius))
        elif name == 'square':
            width = input("Enter the width of the square: ")
            return Square(float(width))
        # ... similar branches for 'rectangle' and any future shapes
This is our interface for creation. We don't call the constructors of concrete classes, we call the
Factory and ask it to create a shape.
Our ShapeFactory works by receiving information about a shape such as a name and the required
dimensions. Our factory method create_shape() will then be used to create and return ready
objects of the desired shapes.
The client doesn't have to know anything about the object creation or specifics. Using the factory
object, they can create objects with minimal knowledge of how they work:
def shapes_client():
    shape_factory = ShapeFactory()
    shape_name = input("Enter the name of the shape: ")
    shape = shape_factory.create_shape(shape_name)
The above example is a classic demonstration of the Factory Method pattern in action. Let's delve
into the details of how the Factory Method aids in this scenario:
📌 Abstraction of Object Creation: The Factory Method pattern abstracts the process of object
creation from the client. In the example, the client doesn't directly instantiate the Circle ,
Rectangle , or Square classes. Instead, the client interacts with the ShapeFactory to request a
shape. The factory then takes care of the creation details.
📌 Centralization of Object Creation: All the logic related to object creation is centralized in the
ShapeFactory . This means that if there's a change in how a shape is created or if a new shape is
added, only the factory needs to be updated. The client code remains unaffected. This
centralization promotes maintainability.
📌 Encapsulation: The Factory Method pattern encapsulates the creation logic. In the example,
the client doesn't need to know the constructors of the concrete shape classes or their specific
parameters. The factory encapsulates these details, asking the client only for the necessary
information through user input.
📌 Flexibility: The Factory Method pattern provides flexibility in terms of object creation. If, in the
future, a new shape like Triangle is introduced, the ShapeFactory can be easily extended to
support it without affecting existing client code.
📌 Consistent Interface: The client interacts with a consistent interface, i.e., the create_shape
method of the ShapeFactory . This method provides a unified way to create any shape. The client
doesn't need to remember different constructors or initialization parameters for different shapes.
📌 Decoupling: The Factory Method pattern decouples the client from the concrete classes. In the
example, the shapes_client function doesn't have any direct dependencies on Circle ,
Rectangle , or Square . It only depends on the abstract Shape class and the ShapeFactory . This
decoupling means that the concrete shape classes can be modified, replaced, or extended without
affecting the client code.
📌 Dynamic Runtime Creation: The Factory Method pattern allows for dynamic object creation
at runtime based on user input or other conditions. In the example, the shape to be created is
determined by the user's input during runtime. The factory then dynamically creates the
appropriate shape object.
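If you want the same runtime selection without interactive input(), a variant of the factory (a sketch, not the book's code) can take the dimensions as keyword arguments, which also makes it trivially testable:

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return 3.14 * self.radius * self.radius

class Square:
    def __init__(self, width):
        self.width = width

    def calculate_area(self):
        return self.width ** 2

class ShapeFactory:
    def create_shape(self, name, **dims):
        # Dimensions arrive as keyword arguments rather than via input(),
        # so the factory can be driven programmatically (e.g. from a config
        # file or a web request) as well as from a prompt.
        shapes = {'circle': Circle, 'square': Square}
        if name not in shapes:
            raise ValueError(f'Unknown shape: {name}')
        return shapes[name](**dims)

shape = ShapeFactory().create_shape('square', width=3)
print(shape.calculate_area())   # 9
```

The selection logic is identical; only the source of the construction parameters has changed.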
Example - 3
📌 Revisiting the main concept of the Factory Design Pattern: it provides an interface for creating
objects in a superclass, but allows subclasses (or a dedicated factory) to alter the type of objects
that will be created. In simpler terms, it's a way to create objects without specifying the exact class
of object that will be created. The main goal of the Factory Pattern is to decouple the creation of
objects from the client that needs them.
Consider that we're developing a ticketing system where we want our users to generate various
ticket types. As we can't predict the exact ticket types a user might want, we need a flexible
solution. The factory method provides us with a standardized interface for ticket creation. At this
initial stage, we're only supporting two ticket types: incident and problem. However, we plan to
introduce more types later. The beauty of the factory method is that it lets us add new specific
classes swiftly without altering the user's existing code.
from abc import ABC, abstractmethod

class Ticket(ABC):
    @abstractmethod
    def ticket_type():
        pass

class IncidentTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class ProblemTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class ServiceRequest(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }
        assert t_type in tickets, f'Unknown ticket type: {t_type}'
        return tickets[t_type]

def client_code(ticket_type):
    factory = TicketFactory()
    ticket = factory.create_ticket(ticket_type)
    print(ticket.ticket_type())

if __name__ == '__main__':
    client_code('incident')
    client_code('problem')
    client_code('servicerequest')
IncidentTicket has been created
ProblemTicket has been created
ServiceRequest has been created
📌 The abstract base class Ticket is defined using Python's ABC (Abstract Base Class) module.
This class has an abstract method ticket_type() . The use of the @abstractmethod decorator
indicates that any subclass of Ticket must provide an implementation for this method. This
ensures that all ticket types will have a consistent interface.
class Ticket(ABC):
    @abstractmethod
    def ticket_type():
        pass
Let's delve deeper into the concept of Abstract Base Classes (ABCs) in Python and how they work.
📌 Abstract Base Classes (ABCs): ABCs are a mechanism in Python for defining abstract classes
where you can't create an instance of the class itself, but you can create instances of its
subclasses. The primary purpose of ABCs is to define a set of common methods that must be
implemented by any of its subclasses. This ensures a consistent interface across all subclasses.
📌 The ABC Module: Python provides the abc module to facilitate the creation of abstract base
classes. The key components of this module are the ABC class and the abstractmethod
decorator.
📌 Defining an Abstract Base Class: To define an abstract base class, you subclass from ABC . In
the provided code, Ticket is defined as an abstract base class by inheriting from ABC :
class Ticket(ABC):
...
📌 Abstract Methods: An abstract method is a method that is declared in the abstract base class
but doesn't have any implementation. It's a way of saying, "Any class that inherits from this ABC
must provide an implementation for this method." In Python, you declare an abstract method
using the @abstractmethod decorator.
In the provided code, ticket_type is defined as an abstract method within the Ticket class:
@abstractmethod
def ticket_type():
    pass
The pass statement is a placeholder, indicating that there's no implementation for this method in
the Ticket class.
📌 Subclassing an ABC: When you create a subclass of an ABC, you are contractually obligated to
provide implementations for all of its abstract methods. If you don't, Python will raise a
TypeError when you try to create an instance of the subclass.
class IncidentTicket(Ticket):
    def ticket_type():
        return f'{__class__.__name__} has been created'
📌 Instantiation: You cannot create an instance of an abstract base class. If you try to do so,
Python will raise a TypeError . However, you can create instances of its subclasses, provided they
implement all the abstract methods.
For example, in the provided code, you can't create an instance of Ticket directly, but you can
create instances of IncidentTicket , ProblemTicket , or ServiceRequest .
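Both failure modes can be demonstrated in a few lines (BrokenTicket is a hypothetical subclass, and ticket_type here takes the conventional self parameter):

```python
from abc import ABC, abstractmethod

class Ticket(ABC):
    @abstractmethod
    def ticket_type(self):
        pass

class BrokenTicket(Ticket):
    # Deliberately does NOT implement ticket_type
    pass

for cls in (Ticket, BrokenTicket):
    try:
        cls()
    except TypeError:
        # Raised both for the ABC itself and for a subclass that
        # fails to implement every abstract method.
        print(f'{cls.__name__}: cannot instantiate')
```

Only once a subclass implements every abstract method does instantiation succeed, which is exactly the contract the ABC enforces.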
📌 Why Use ABCs?: ABCs are a powerful tool for ensuring that a set of related classes adhere to a
particular interface. By defining an ABC, you're setting a clear contract: "Any subclass of this ABC
must implement these methods." This can make your code more robust and maintainable, as it
ensures consistency across related classes.
In the context of the provided code, using an ABC ensures that any new ticket type added in the
future will have the ticket_type method, maintaining consistency across all ticket types.
📌 The TicketFactory class is the heart of the Factory Design Pattern in this code. It has a static
method create_ticket(t_type) . Static methods, denoted by the @staticmethod decorator,
belong to the class and not any specific instance. This means they can be called on the class itself,
without creating an instance.
📌 Inside the create_ticket method, a dictionary named tickets is defined. This dictionary
maps string keys (representing ticket types) to their corresponding classes. This dictionary acts as
a registry of supported ticket types.
📌 The assert statement checks if the provided t_type exists in the tickets dictionary. If the
ticket type is not supported, it raises an AssertionError with a custom message. This is a simple
way to validate input and ensure that only supported ticket types are processed. However, in a
more robust implementation, one might use exception handling with try and except blocks.
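A minimal sketch of that try/except alternative might look like this (raising ValueError is one reasonable choice, not the book's code):

```python
class IncidentTicket:
    def ticket_type(self):
        return 'IncidentTicket has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {'incident': IncidentTicket}
        try:
            return tickets[t_type]
        except KeyError:
            # Raise a clearer, domain-specific error instead of a bare KeyError
            raise ValueError(f"Ticket type '{t_type}' is not supported") from None

print(TicketFactory.create_ticket('incident')().ticket_type())
```

Unlike assert, this check is not stripped when Python runs with the -O optimization flag, which is why exceptions are preferred for validating external input.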
📌 If the ticket type is valid, the method returns the corresponding class from the tickets
dictionary. Note that it returns the class itself, not an instance of the class. This is because the
client code might want to further customize the object or use class methods before instantiation.
📌 The client_code function demonstrates how to use the factory. It creates an instance of the
TicketFactory (though, technically, since create_ticket is a static method, this instantiation is
not necessary). It then calls the create_ticket method with a ticket type string, gets the
corresponding class back, and calls its ticket_type() method directly on the class (since the
methods in this listing take no self parameter, no instance needs to be created).
📌 The if __name__ == '__main__': block is a common Python idiom to ensure that the code is
only executed when the script is run directly, and not when it's imported as a module. In this
block, the client_code function is called three times with different ticket types to demonstrate
the functionality of the factory.
In summary, this code provides a clean and scalable implementation of the Factory Design Pattern
in Python. It ensures that the creation of ticket objects is decoupled from the client code, allowing
for easy addition of new ticket types in the future without affecting existing code.
class Ticket(ABC):
@abstractmethod
def ticket_type():
pass
class IncidentTicket(Ticket):
def ticket_type():
return f'{__class__.__name__} has been created'
class ProblemTicket(Ticket):
def ticket_type():
return f'{__class__.__name__} has been created'
class ServiceRequest(Ticket):
def ticket_type():
return f'{__class__.__name__} has been created'
class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }
        assert t_type in tickets, f"Ticket type '{t_type}' is not supported"
        # Return the class itself; the client decides what to do with it
        return tickets[t_type]
def client_code(ticket_type):
factory = TicketFactory()
ticket = factory.create_ticket(ticket_type)
print(ticket.ticket_type())
if __name__ == '__main__':
client_code('incident')
client_code('problem')
client_code('servicerequest')
📌 Decoupling Object Creation from its Use: The Factory pattern decouples the creation of
objects from the parts of the code that use these objects. This means that the client_code
function doesn't need to know about the specific classes ( IncidentTicket , ProblemTicket ,
ServiceRequest ). It only interacts with the TicketFactory .
Example: In the future, if a new ticket type, say FeedbackTicket , is introduced, you only need to
modify the TicketFactory by adding an entry in the tickets dictionary. The client_code
remains unchanged, demonstrating the decoupling.
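A minimal sketch of that extension (the FeedbackTicket class here is illustrative; self parameters are added so the factory can return instances):

```python
class IncidentTicket:
    def ticket_type(self):
        return 'IncidentTicket has been created'

# New ticket type: the only factory change needed is one registry entry
class FeedbackTicket:
    def ticket_type(self):
        return 'FeedbackTicket has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {'incident': IncidentTicket, 'feedback': FeedbackTicket}
        return tickets[t_type]()

def client_code(t_type):
    # The client code is unchanged and works with the new type
    print(TicketFactory.create_ticket(t_type).ticket_type())

client_code('feedback')  # FeedbackTicket has been created
```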
📌 Centralized Object Creation: All the logic related to creating ticket objects is centralized in the
TicketFactory . This makes the codebase easier to maintain and debug. If there's an issue with
object creation or if enhancements are needed, you only have to look in one place.
Example: Suppose you decide to log every ticket creation for auditing purposes. Instead of adding
logging code in each ticket class, you can simply add it once in the TicketFactory .
📌 Flexibility in Object Creation: The Factory provides flexibility in terms of how objects are
created. This is especially beneficial when object creation is complex or involves multiple steps.
Example: Imagine a scenario where creating a ServiceRequest ticket requires additional steps,
like fetching some data from a database or an API. You can easily implement these steps in the
TicketFactory without affecting other ticket types or the client code.
📌 Consistent Error Handling: By centralizing object creation, you can also centralize error
handling. In the provided code, the assert statement checks if a given ticket type is supported.
This ensures that errors related to unsupported ticket types are handled consistently.
Example: If a developer mistakenly tries to create a ticket type called 'urgentissue', the Factory will
raise an error with the message "Ticket type 'urgentissue' is not supported". This consistent error
handling can be especially useful for debugging and user feedback.
📌 Scalability: The Factory pattern makes the system more scalable. As the system grows and
more ticket types are introduced, the Factory can easily accommodate these changes.
Example: In the future, if the system needs to support dozens of ticket types, the Factory can be
extended to fetch the supported ticket types from a configuration file or a database. This dynamic
approach would allow adding new ticket types without even touching the code.
In summary, the Factory implementation in the provided code offers benefits like decoupling,
centralized object creation and error handling, flexibility, and scalability. These benefits make the
system robust, maintainable, and ready for future enhancements.
Let's implement one of the scenarios above by extending the TicketFactory class.
Take one of the above scenarios - Imagine a scenario where creating a ServiceRequest ticket
requires additional steps, like fetching some data from a database or an API. You can easily
implement these steps in the TicketFactory without affecting other ticket types or the client
code.
For simplicity, I'll simulate the database/API fetch with a function. Here's how you can modify the
TicketFactory to accommodate this:
def fetch_additional_data():
    # Simulating a database/API fetch
    return 'external data'

class ServiceRequest(Ticket):
    def __init__(self, data):
        self.data = data

    def ticket_type(self):
        return f'{__class__.__name__} with {self.data} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }
        assert t_type in tickets, f"Ticket type '{t_type}' is not supported"
        if t_type == 'servicerequest':
            # Fetch the extra data and pass it when creating the object
            return ServiceRequest(fetch_additional_data())
        return tickets[t_type]
1. A helper function simulates fetching the additional data from a database or an API.
2. The ServiceRequest class now has an __init__ method that accepts data as an
argument. This data is used when returning the ticket type.
3. Inside the TicketFactory , we check if the t_type is 'servicerequest'. If it is, we fetch the
additional data and pass it when creating the ServiceRequest object.
With these modifications, the Factory handles the special requirements of creating a
ServiceRequest ticket without affecting other ticket types or the client code.
import logging
from abc import ABC, abstractmethod

logging.basicConfig(level=logging.INFO)

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        tickets = {
            'incident': IncidentTicket,
            'problem': ProblemTicket,
            'servicerequest': ServiceRequest
        }
        assert t_type in tickets, f"Ticket type '{t_type}' is not supported"
        # Centralized audit logging for every ticket creation
        logging.info(f"Creating ticket of type: {t_type}")
        return tickets[t_type]()
1. We import Python's built-in logging module at the top of the file.
2. Inside the TicketFactory , right before we return the ticket object, we add a logging
statement using logging.info() . This logs the creation of a ticket of a specific type.
With this modification, every time a ticket is created using the TicketFactory , a log message will
be generated, providing an audit trail of ticket creations. This centralized logging approach
ensures that you don't have to add logging statements in each individual ticket class, making the
code cleaner and more maintainable.
Let's demonstrate how the TicketFactory can be extended to fetch supported ticket types from
a configuration file. For simplicity, I'll use a JSON file as the configuration file, but in real-world
scenarios, this could be a database, an XML file, or any other data source.
{
"incident": "IncidentTicket",
"problem": "ProblemTicket",
"servicerequest": "ServiceRequest",
"feedback": "FeedbackTicket"
}
In this JSON file, we've added a new ticket type "feedback" mapped to a class "FeedbackTicket".
1. Python Code:
import json
from abc import ABC, abstractmethod

class FeedbackTicket(Ticket):
    def ticket_type(self):
        return f'{__class__.__name__} has been created'

class TicketFactory:
    @staticmethod
    def create_ticket(t_type):
        # Load ticket types from the configuration file
        with open('tickets_config.json', 'r') as file:
            tickets_config = json.load(file)
        # Dynamically map the class-name strings to the actual class objects
        tickets = {key: globals()[name] for key, name in tickets_config.items()}
        assert t_type in tickets, f"Ticket type '{t_type}' is not supported"
        return tickets[t_type]()
2. Inside the TicketFactory , we load the ticket types from the tickets_config.json file
using the json module.
3. We then dynamically map the ticket type strings from the configuration file to the actual class
objects using Python's globals() function. This function returns a dictionary of the current
global symbol table, allowing us to fetch class references by their string names.
4. The rest of the TicketFactory remains unchanged, as it uses the dynamically constructed
tickets dictionary to create the desired ticket object.
With this approach, adding a new ticket type is as simple as updating the tickets_config.json
file and adding the corresponding class in the Python code. The TicketFactory will automatically
support the new ticket type without any modifications.
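To make the dynamic lookup concrete, here is a minimal, self-contained sketch of the globals() mapping step; the inline JSON string stands in for the tickets_config.json file:

```python
import json

class IncidentTicket:
    def ticket_type(self):
        return 'IncidentTicket has been created'

# Inline JSON standing in for the contents of tickets_config.json
config_json = '{"incident": "IncidentTicket"}'
tickets_config = json.loads(config_json)

# globals() lets us resolve the class-name strings to the actual class objects
tickets = {key: globals()[name] for key, name in tickets_config.items()}
print(tickets['incident']().ticket_type())  # IncidentTicket has been created
```

Note that globals() only sees names defined in the current module; if the ticket classes live in another module, you would use getattr(module, name) instead.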
import sqlite3
import psycopg2  # third-party driver: pip install psycopg2
from abc import ABC, abstractmethod

class DBConnectionFactory(ABC):
    @abstractmethod
    def create_connection(self):
        pass

class SQLiteConnectionFactory(DBConnectionFactory):
    def create_connection(self):
        return sqlite3.connect('development.db')

class PostgreSQLConnectionFactory(DBConnectionFactory):
    def create_connection(self):
        return psycopg2.connect(database="production", user="user",
                                password="password", host="127.0.0.1", port="5432")

def get_factory(environment):
    if environment == "development":
        return SQLiteConnectionFactory()
    elif environment == "production":
        return PostgreSQLConnectionFactory()
    else:
        raise ValueError(f"Unknown environment: {environment}")

# Usage:
environment = "development"  # could come from an environment variable or config
factory = get_factory(environment)
connection = factory.create_connection()
# Now, you can use this connection to query the database, etc.
📌 Code Explanation:
1. We define an abstract base class DBConnectionFactory with an abstract method
create_connection . This sets the contract that any concrete factory we create must provide
a method to create a DB connection.
2. The concrete factories, SQLiteConnectionFactory and PostgreSQLConnectionFactory ,
each implement create_connection for their respective databases.
3. The get_factory function is a simple utility that returns the appropriate factory based on
the environment. This function can be expanded as you add more environments or
databases.
4. In the usage section, we determine the environment (this could be from an environment
variable, configuration file, etc.), get the appropriate factory, and then create the database
connection.
📌 Benefits:
1. The main application code doesn't need to know the specifics of how to connect to each
database. It just asks the factory for a connection.
2. If you ever need to change how the connection is made for a specific environment or if you
want to introduce a new type of database, you only need to modify or add a new factory. The
main application code remains untouched.
📌 Isolation of Concerns: This approach ensures that the logic for creating a database
connection is isolated from the rest of the application. This makes the codebase more
maintainable and reduces the risk of introducing bugs when making changes related to database
connections.
📌 Scalability: In the future, if you decide to introduce connection pooling, caching, or any other
enhancements, you can do so within the respective factory. For instance, if you decide to use a
connection pool for the PostgreSQL connections in production, you can integrate that logic within
the PostgreSQLConnectionFactory without affecting the SQLite connections or any other part of
the application.
📌 Under-the-hood: When you request a connection from a database, there's a lot happening
behind the scenes. The system needs to establish a TCP connection, authenticate, and set up the
session. This can be resource-intensive. By managing connections efficiently (like reusing them
from a pool), you can significantly improve the performance of your application. The factory
method pattern doesn't directly deal with these concerns, but by isolating the creation logic, it
provides a centralized place to manage them.
📌 Testing and Mocking: Another advantage of this approach is that it makes testing easier.
When writing unit tests, you can create a mock factory that produces mock database connections,
allowing you to test your data processing logic without actually hitting a real database.
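As a sketch of that idea (the MockConnection and count_rows names are illustrative, not from the code above):

```python
class DBConnectionFactory:  # simplified stand-in for the ABC defined earlier
    def create_connection(self):
        raise NotImplementedError

class MockConnection:
    def execute(self, query):
        # Canned result; no real database is touched
        return [("row1",)]

class MockConnectionFactory(DBConnectionFactory):
    def create_connection(self):
        return MockConnection()

def count_rows(factory, query):
    # Application logic under test: it only depends on the factory interface
    conn = factory.create_connection()
    return len(conn.execute(query))

assert count_rows(MockConnectionFactory(), "SELECT * FROM users") == 1
print("test passed")
```

Because count_rows receives the factory as a parameter, swapping the real PostgreSQL factory for the mock requires no changes to the logic being tested.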
Advanced Use-case:
Let's say you want to introduce connection pooling for the PostgreSQL database in the production
environment. Here's how you can modify the PostgreSQLConnectionFactory to use a connection
pool:
from psycopg2 import pool

class PostgreSQLConnectionFactory(DBConnectionFactory):
    def __init__(self):
        self.minconn = 5
        self.maxconn = 20
        self.connection_pool = None

    def create_connection(self):
        # Lazily initialize the pool on first use
        if not self.connection_pool:
            self.connection_pool = pool.SimpleConnectionPool(
                self.minconn, self.maxconn,
                database="production", user="user",
                password="password", host="127.0.0.1", port="5432")
        return self.connection_pool.getconn()

    def release_connection(self, connection):
        # Return the connection to the pool once you're done with it
        self.connection_pool.putconn(connection)
📌 Code Explanation:
1. We've added a connection pool to the PostgreSQLConnectionFactory . The
create_connection method now fetches a connection from the pool instead of creating a
new one every time.
2. The release_connection method is used to return a connection back to the pool once
you're done with it.
3. The connection pool is initialized lazily, i.e., it's created the first time you request a
connection. This ensures that if your application never needs a PostgreSQL connection, the
pool is never created.
📌 Usage Consideration: With connection pooling, it's crucial to remember to return the
connection to the pool once you're done with it. Otherwise, you'll exhaust the pool over time. This
is a responsibility that the main application code must bear, but the benefit is a more efficient use
of resources, especially under high load.
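The pool-exhaustion point can be illustrated with a toy pool (the ToyPool and ToyConnection classes below are purely illustrative, not part of psycopg2):

```python
class ToyConnection:
    """Stand-in for a real DB connection."""

class ToyPool:
    def __init__(self, size):
        self._free = [ToyConnection() for _ in range(size)]

    def getconn(self):
        if not self._free:
            raise RuntimeError("pool exhausted")
        return self._free.pop()

    def putconn(self, conn):
        self._free.append(conn)

pool = ToyPool(2)
conn = pool.getconn()
try:
    pass  # use the connection here
finally:
    pool.putconn(conn)  # skipping this would eventually exhaust the pool
print(len(pool._free))  # 2
```

The try/finally idiom guarantees the connection goes back to the pool even if the work in between raises an exception.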
In conclusion, using a factory method for database connections in batch data pipeline applications
provides flexibility, maintainability, and efficiency. It abstracts away the specifics of connecting to
different databases, allowing you to focus on the core logic of your application.
Proxy Design Pattern in Python is a structural design pattern that lets you provide a substitute or
placeholder for another object. A proxy controls access to the original object, allowing you to
perform something either before or after the request gets through to the original object.
📌 This means that instead of directly interacting with the original object, you interact with the
proxy, which then decides how and when to forward the request to the original object.
📌 Use Cases:
1. Lazy Initialization: When an object is heavy and consumes a lot of resources, you might not
want to create it unless it's really needed. The proxy can delay the instantiation of the original
object until it's absolutely necessary.
2. Access Control: If you want to restrict access to the original object based on certain
conditions, a proxy can be used. For instance, checking if a user has the necessary
permissions before allowing a certain operation.
3. Logging and Monitoring: Before or after forwarding a request to the original object, the
proxy can log details about the request, which can be useful for debugging or monitoring
purposes.
4. Performance Measurement: The proxy can record the time it takes to execute operations,
giving insights into performance bottlenecks.
Let's see an example WITHOUT and then WITH the "Proxy
Design Pattern in Python"
class Database:
    def __init__(self):
        self.data = {}

    def insert(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key, "Key not found")
Now, let's say we have a client code that interacts with this database:
db = Database()
db.insert("key1", "value1")
print(db.read("key1"))
📌 The above code works fine, but there are some issues:
📌 There's no control over who can access the database. Any part of the code can read or write to
the database directly.
📌 If we want to add some logging mechanism to log every read operation, we'd have to modify
the Database class, which violates the Open/Closed principle.
📌 If we want to add a caching mechanism or any other pre/post-processing, we'd again have to
modify the Database class.
class DatabaseProxy:
    def __init__(self, database):
        self.database = database
        self.access_count = 0

    def insert(self, key, value):
        self.database.insert(key, value)

    def read(self, key):
        # Simple logging mechanism: count and report every read
        self.access_count += 1
        print(f"Read operation #{self.access_count}")
        return self.database.read(key)
Now, the client code will interact with the DatabaseProxy instead of the Database directly:
db = Database()
proxy = DatabaseProxy(db)
proxy.insert("key1", "value1")
print(proxy.read("key1"))
print(proxy.read("key2"))
📌 With this approach, we've added a layer of control. The client code interacts with the proxy,
and the proxy decides how and when to forward the request to the original Database object.
📌 We've added a logging mechanism (the access count) without modifying the original Database
class.
📌 In the future, if we want to add more features like caching, we can easily do that in the
DatabaseProxy class without touching the Database class.
📌 The Database class remains unchanged, and we've adhered to the Open/Closed principle.
In conclusion, the Proxy Design Pattern provides a way to control access to an object by acting as
an intermediary. This pattern is especially useful when we want to add additional functionalities to
an object without modifying its structure.
Let's break down the refactored code with the Proxy Design
Pattern and see how it addresses the issues of the original
code.
Original Issues:
📌 No control over who can access the database.
📌 No logging mechanism without modifying the Database class.
class DatabaseProxy:
    def __init__(self, database):
        self.database = database
        self.access_count = 0

    def insert(self, key, value):
        self.database.insert(key, value)

    def read(self, key):
        # Simple logging mechanism: count and report every read
        self.access_count += 1
        print(f"Read operation #{self.access_count}")
        return self.database.read(key)
Detailed Explanation:
📌 Control Over Access: In the refactored code, the DatabaseProxy acts as an intermediary
between the client and the actual Database object. This means that any client wanting to interact
with the database will have to go through the proxy. By doing this, we can control, restrict, or
modify the access as needed. For instance, if we wanted to limit the number of reads to the
database, we could easily implement that logic within the proxy.
📌 Logging Mechanism: One of the issues with the original code was the inability to add a
logging mechanism without modifying the Database class. With the proxy in place, we've
introduced an access_count attribute that keeps track of the number of times the read method
is called. Every time a client tries to read from the database, the proxy increments this count and
prints it. This is a simple form of logging, and it's implemented without touching the original
Database class. If we wanted more advanced logging, such as timestamped logs or logs for
different types of operations, we could easily expand upon this within the proxy.
📌 Ease of Adding Pre/Post-Processing: The proxy pattern shines when we think about adding
additional functionalities around the main operation. For instance, if we wanted to introduce a
caching mechanism, we could implement it within the proxy. Before forwarding a read request to
the actual database, the proxy could check if the data is already in the cache. If it is, return the
cached data; if not, fetch from the database, store it in the cache, and then return it. This caching
logic can be added to the proxy without altering the Database class. Similarly, any other pre/post-
processing can be introduced in the proxy, ensuring the original class remains untouched.
In essence, the Proxy Design Pattern has provided a flexible and scalable structure. It allows for
the addition of functionalities and controls without modifying the core object, adhering to the
Open/Closed principle of software design.
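As a sketch of that caching idea (the CachingDatabaseProxy name and its cache policy are illustrative):

```python
class Database:
    def read(self, key):
        print(f"Hitting the database for {key}")
        return f"value-for-{key}"

class CachingDatabaseProxy:
    def __init__(self, database):
        self._db = database
        self._cache = {}

    def read(self, key):
        if key not in self._cache:  # only hit the database on a cache miss
            self._cache[key] = self._db.read(key)
        return self._cache[key]

proxy = CachingDatabaseProxy(Database())
print(proxy.read("key1"))  # first read hits the database
print(proxy.read("key1"))  # second read is served from the cache
```

The Database class is untouched; the caching policy lives entirely in the proxy, exactly as the Open/Closed principle suggests.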
📌 Let's consider a real-life use-case: Imagine you're building a system for a library. Books in the
library can be either physical or digital. Access to digital books requires a special membership. We
can use the Proxy Design Pattern to control access to digital books.
class Book:
    def __init__(self, title, content):
        self.title = title
        self.content = content

    def display(self):
        return self.content

class DigitalBookProxy:
    def __init__(self, book):
        self._book = book
        self._authenticated = False

    def authenticate(self, password):
        # Simulated check; a real system would verify credentials securely
        if password == "secret":
            self._authenticated = True
        return self._authenticated

    def display(self):
        if self._authenticated:
            return self._book.display()
        else:
            return "Access Denied! Authenticate first."

# Usage
book = Book("Digital Python", "This is the content of the digital book.")
proxy = DigitalBookProxy(book)
print(proxy.display())          # Access Denied! Authenticate first.
proxy.authenticate("secret")
print(proxy.display())          # This is the content of the digital book.
1. The Book class represents a book with a title and content, and a display method that
returns the content.
2. The DigitalBookProxy acts as a proxy for the Book class. It has an authenticate method
to check if the user has access to the digital book.
3. In the usage section, we create a digital book and a proxy for it. Without authentication, the
proxy denies access. Once authenticated with the correct password, the proxy grants access
to the book's content.
📌 Under the hood: - The proxy pattern here decouples the authentication logic from the Book
class. This means the Book class remains focused on its primary responsibility: representing a
book. - The proxy acts as an intermediary and adds an additional layer of control, in this case,
authentication. This separation of concerns ensures that each class adheres to the Single
Responsibility Principle, a key principle in object-oriented design.
In essence, the Proxy Design Pattern provides a way to add additional behaviors or controls to
object access without modifying the object's actual implementation. This makes the system more
modular and easier to maintain.
📌 Scenario: A video streaming platform where: 1. Videos might not be available in all regions. 2.
Users need to have an active subscription to view premium content. 3. We want to lazily load the
video only when it's actually requested to save bandwidth.
class Video:
def __init__(self, title, content):
self.title = title
self.content = content
def play(self):
return f"Playing {self.title}: {self.content}"
class VideoProxy:
def __init__(self, title):
self.title = title
self._video = None
self._region = "US"
self._premium_content = ["Exclusive Show", "VIP Movie"]
self._subscribed = False
def subscribe(self):
self._subscribed = True
def play(self):
if self.title in self._premium_content and not self._subscribed:
return "This is premium content. Please subscribe to view."
        if not self._video:
            # Simulating lazy loading. In a real-world scenario, this might
            # involve fetching the video from a server.
            content = f"Content of {self.title}"
            self._video = Video(self.title, content)
        return self._video.play()
# Usage
proxy = VideoProxy("Exclusive Show")
print(proxy.play()) # This is premium content. Please subscribe to view.
proxy.subscribe()
print(proxy.play()) # Playing Exclusive Show: Content of Exclusive Show.
📌 What the code does: 1. The Video class represents a video with a title and content. It has a
play method to simulate playing the video. 2. The VideoProxy class acts as a proxy for the
Video class. It tracks the user's region and handles subscription checks and lazy loading of the
video. 3. In the usage section, we demonstrate the proxy's behavior for premium content and
lazy loading.
📌 Under the hood: - The proxy pattern allows us to separate concerns. The Video class remains
simple and focused on representing a video. The complexities of region restrictions, subscription
checks, and lazy loading are handled by the proxy. - Lazy loading is implemented by only creating
an instance of the Video class when the play method is called. This simulates the behavior of
only fetching/loading the video when it's actually requested. - The proxy pattern provides
flexibility. If in the future, more rules or behaviors need to be added (e.g., age restrictions, content
warnings), they can be added to the proxy without altering the Video class.
This example showcases how the Proxy Design Pattern can be used to manage complexities in a
system, ensuring that each component remains focused on its primary responsibility.
📌 Scenario: A cloud storage system where: 1. Large files are lazily loaded to save bandwidth and
memory. 2. Users need authentication to download files. 3. There's a rate limit on how often a
user can download files. 4. All download requests are logged for audit purposes.
import time
class File:
def __init__(self, name, content):
self.name = name
self.content = content
def download(self):
return f"Downloading {self.name}: {self.content}"
class FileProxy:
    def __init__(self, name):
        self.name = name
        self._file = None
        self._last_access_time = None
        self._authenticated = False
        self._rate_limit_seconds = 10

    def authenticate(self):
        # Simulated authentication; a real system would verify credentials
        self._authenticated = True

    def download(self):
        current_time = time.time()
        if not self._authenticated:
            return "Authentication required to download the file."
        if (self._last_access_time is not None and
                current_time - self._last_access_time < self._rate_limit_seconds):
            return "Rate limit exceeded. Please wait before downloading again."
        if not self._file:
            # Simulating lazy loading. In a real-world scenario, this might
            # involve fetching the file from a server.
            content = f"Content of {self.name}"
            self._file = File(self.name, content)
        self._last_access_time = current_time
        self._log_request()
        return self._file.download()

    def _log_request(self):
        # Simulating logging. In a real-world scenario, this might involve
        # writing to a database or logging service.
        print(f"File {self.name} was downloaded at "
              f"{time.ctime(self._last_access_time)}")

# Usage
proxy = FileProxy("BigDataFile.txt")
print(proxy.download())  # Authentication required to download the file.
proxy.authenticate()
print(proxy.download())  # Downloading BigDataFile.txt: Content of BigDataFile.txt
time.sleep(5)
print(proxy.download())  # Rate limit exceeded. Please wait before downloading again.
📌 What the code does: 1. The File class represents a file with a name and content. It has a
download method to simulate downloading the file. 2. The FileProxy class acts as a proxy for
the File class. It handles authentication, rate limiting, lazy loading, and logging. 3. In the usage
section, we demonstrate the proxy's behavior for authentication, rate limiting, and logging.
📌 Under the hood: - The proxy pattern allows us to separate concerns. The File class remains
simple and focused on representing a file. The complexities of authentication, rate limiting, lazy
loading, and logging are handled by the proxy. - Lazy loading is implemented by only creating an
instance of the File class when the download method is called. This simulates the behavior of
only fetching/loading the file when it's actually requested. - The proxy pattern provides flexibility. If
in the future, more rules or behaviors need to be added (e.g., file sharing, encryption), they can be
added to the proxy without altering the File class.
This example demonstrates how the Proxy Design Pattern can be effectively used to manage
complexities in a cloud storage system, ensuring modularity and maintainability.
📌 Scenario: A smart home system where: 1. Devices need to be accessed securely. 2. The state
of devices (e.g., on/off, temperature) is cached to reduce unnecessary communication and save
energy. 3. All device control actions are logged for security and debugging purposes.
import time

class SmartDevice:
    def __init__(self, device_name):
        self.device_name = device_name
        self.state = "off"

    def toggle(self):
        self.state = "on" if self.state == "off" else "off"
        return f"{self.device_name} turned {self.state}"

class SmartDeviceProxy:
    def __init__(self, device_name):
        self.device_name = device_name
        self._device = None
        self._state_cache = "off"
        self._authenticated = False

    def toggle(self):
        if not self._authenticated:
            return "Authentication required to control the device."
        if not self._device:
            self._device = SmartDevice(self.device_name)
        result = self._device.toggle()
        self._state_cache = self._device.state  # cache the new state
        self._log_action()
        return result

    def _log_action(self):
        # Simulating logging. In a real-world scenario, this might involve
        # writing to a database or logging service.
        print(f"{self.device_name} was toggled to {self._state_cache} at "
              f"{time.ctime()}")

# Usage
proxy = SmartDeviceProxy("LivingRoomLight")
print(proxy.toggle())  # Authentication required to control the device.
📌 Under the hood: - The proxy pattern allows us to separate concerns. The SmartDevice class
remains simple and focused on representing a device. The complexities of authentication, state
caching, and logging are handled by the proxy. - State caching is implemented by maintaining a
_state_cache variable in the proxy. This simulates the behavior of reducing unnecessary
communication with the actual device if we already know its state. - The proxy pattern provides
flexibility. If in the future, more rules or behaviors need to be added (e.g., device scheduling,
energy-saving modes), they can be added to the proxy without altering the SmartDevice class.
This example illustrates how the Proxy Design Pattern can be effectively used in a smart home
context, ensuring security, efficiency, and maintainability.
Let's delve into a scenario involving an API server that processes requests for user data. In such
systems, it's crucial to monitor and log requests for performance analysis, debugging, and security
audits. The Proxy Design Pattern can be employed to seamlessly integrate this logging and
monitoring functionality.
📌 Scenario: An API server where: 1. User data is fetched based on user IDs. 2. Every request is
logged with its timestamp, user ID, and response time. 3. The system monitors and logs any
suspiciously frequent requests to prevent potential abuse.
import time

class APIServer:
    def __init__(self):
        # Simulating a small database of user data
        self._database = {
            "123": "Data for user 123",
            "456": "Data for user 456",
            "789": "Data for user 789"
        }

    def fetch_data(self, user_id):
        return self._database.get(user_id, "User not found.")

class APIServerProxy:
    def __init__(self):
        self._server = APIServer()
        self._request_timestamps = {}
        self._min_interval = 1  # seconds; faster repeat requests are suspicious

    def fetch_data(self, user_id):
        start_time = time.time()
        last_time = self._request_timestamps.get(user_id)
        if last_time is not None and start_time - last_time < self._min_interval:
            self._log_request(user_id, "Denied due to suspicious activity")
            return "Request denied."
        data = self._server.fetch_data(user_id)
        end_time = time.time()
        self._request_timestamps[user_id] = end_time
        self._log_request(user_id, f"Fetched in {end_time - start_time:.4f} seconds")
        return data

    def _log_request(self, user_id, message):
        # Simulating logging; a real system might write to a logging service
        print(f"[{time.ctime()}] User ID: {user_id} - {message}")

# Usage
proxy = APIServerProxy()
print(proxy.fetch_data("123"))  # [current time] User ID: 123 - Fetched in ... seconds. Data for user 123.
print(proxy.fetch_data("999"))  # [current time] User ID: 999 - Fetched in ... seconds. User not found.
print(proxy.fetch_data("123"))  # [current time] User ID: 123 - Denied due to suspicious activity. Request denied.
📌 What the code does: 1. The APIServer class simulates a simple API server with a small
database of user data. It has a fetch_data method to retrieve user data based on user IDs. 2. The
APIServerProxy class acts as a proxy for the APIServer class. It handles logging and monitoring
of requests. 3. In the usage section, we demonstrate the proxy's behavior for logging, monitoring,
and detecting suspiciously frequent requests.
📌 Under the hood: - The proxy pattern allows us to separate concerns. The APIServer class
remains simple and focused on serving user data. The complexities of logging and monitoring are
handled by the proxy. - The monitoring is implemented by maintaining a _request_timestamps
dictionary in the proxy. This dictionary tracks the last request time for each user ID, allowing the
system to detect and prevent potential abuse. - The proxy pattern provides flexibility. If in the
future, more rules or behaviors need to be added (e.g., IP-based rate limiting, error logging), they
can be added to the proxy without altering the APIServer class.
This example showcases how the Proxy Design Pattern can be effectively used in an API server
context, ensuring robust logging and monitoring capabilities.
Example 6 - Real life use case of Proxy Design Pattern in
Python for Performance Measurement
The proxy can record the time it takes to execute operations, giving insights into performance
bottlenecks.
Let's explore a scenario involving a complex mathematical computation system, such as one used
for scientific simulations or financial modeling. In such systems, understanding the performance
of various computations is crucial for optimization and resource allocation. The Proxy Design
Pattern can be employed to seamlessly integrate performance measurement.
import time
import math

class ComputationEngine:
    def heavy_calculation(self, x):
        # Simulating a heavy computation
        time.sleep(2)
        return math.exp(x)

    def moderate_calculation(self, x):
        # Simulating a moderate computation
        time.sleep(1)
        return math.sqrt(x)

class ComputationEngineProxy:
    def __init__(self):
        self._engine = ComputationEngine()

    def _measure(self, operation, func, *args):
        start = time.time()
        result = func(*args)
        duration = time.time() - start
        print(f"{operation} took {duration:.2f} seconds to complete.")
        if duration > 1.5:  # threshold chosen for illustration
            print(f"Warning: {operation} is taking longer than expected!")
        return result

    def heavy_calculation(self, x):
        return self._measure("heavy_calculation", self._engine.heavy_calculation, x)

    def moderate_calculation(self, x):
        return self._measure("moderate_calculation", self._engine.moderate_calculation, x)

# Usage
proxy = ComputationEngineProxy()
print(proxy.heavy_calculation(5))     # heavy_calculation took 2.00 seconds to complete.
print(proxy.moderate_calculation(3))  # moderate_calculation took 1.00 seconds to complete.
📌 What the code does: 1. The ComputationEngine class simulates a system that performs
complex mathematical operations. It has methods like heavy_calculation and
moderate_calculation to simulate computations of varying intensities. 2. The
ComputationEngineProxy class acts as a proxy for the ComputationEngine class. It measures
and logs the time taken for each computation. 3. In the usage section, we demonstrate the proxy's
behavior for performance measurement and warnings.
📌 Under the hood: - The proxy pattern allows us to separate concerns. The ComputationEngine
class remains focused on performing mathematical operations. The complexities of performance
measurement are handled by the proxy. - The performance measurement is implemented using
Python's built-in time module. The start and end times of each operation are recorded, and the
difference gives the duration. - The proxy pattern provides flexibility. If in the future, more rules or
behaviors need to be added (e.g., memory usage tracking, parallel computation), they can be
added to the proxy without altering the ComputationEngine class.
This example illustrates how the Proxy Design Pattern can be effectively used in a computation-
intensive context, ensuring robust performance measurement capabilities.
🐍🚀 Singleton Design Pattern in Python 🐍🚀
The singleton pattern offers a way to implement a class from which you can only create one
object, hence the name singleton.
What is interesting is that it is useful when we need to create one and only one object, for
example, to store and maintain a global state for our program. In Python, this pattern can be
implemented using some special built-in features. The singleton pattern restricts the instantiation
of a class to one object, which is useful when you need one object to coordinate actions for the
system. The basic idea is that only one instance of a particular class is created for the needs of the
program. To ensure that this works, we need mechanisms that prevent the instantiation of the
class more than once and also prevent cloning.
📌 The Singleton pattern is often used for logging, driver objects, caching, thread pools, and
database connections. For instance, if you have a configuration manager in a system, you might
want to ensure that there's only one instance of this manager so that you don't end up with
conflicting configurations.
📌 One of the reasons some consider Singleton as an anti-pattern is because it can introduce
global state into an application. Global state is often seen as undesirable because it can make the
system harder to reason about, and it can introduce subtle bugs if not managed carefully.
📌 In Python, the Singleton pattern can be implemented in several ways due to the language's
dynamic nature. Some of the common methods include: a module-level instance, a decorator,
overriding __new__ , and a metaclass.
For our discussion, let's focus on the metaclass approach, as it's one of the most Pythonic ways
to implement the Singleton pattern.
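Before the metaclass version, the decorator approach just listed can be sketched as follows. The `singleton` helper and the `AppState` class are illustrative, not from the text:

```python
def singleton(cls):
    # Decorator-based singleton: cache the first instance and
    # return that same instance on every subsequent call.
    instances = {}
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

@singleton
class AppState:
    def __init__(self):
        self.data = {}

a = AppState()
b = AppState()
print(a is b)  # True
```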
class ConfigurationManager:
    def __init__(self):
        self._config_values = {}

    def set(self, key, value):
        self._config_values[key] = value

    def get(self, key):
        return self._config_values.get(key)

# Usage
config1 = ConfigurationManager()
config1.set("api_key", "123456")

config2 = ConfigurationManager()
print(config2.get("api_key")) # This will print None, not "123456"
📌 Issues with this code: 1. Every call to ConfigurationManager() creates a brand-new, independent instance.
2. Each instance has its own state, so changes made in one instance are not reflected in others.
3. This can lead to inconsistent states across the application, especially if different parts of the
code are using different instances of the configuration manager.
class ConfigurationManager:
    _instance = None

    def __new__(cls):
        if not cls._instance:
            cls._instance = super(ConfigurationManager, cls).__new__(cls)
            cls._instance._config_values = {}
        return cls._instance

    def set(self, key, value):
        self._config_values[key] = value

    def get(self, key):
        return self._config_values.get(key)

# Usage
config1 = ConfigurationManager()
config1.set("api_key", "123456")

config2 = ConfigurationManager()
print(config2.get("api_key")) # This will now print "123456"
📌 How this fixes the issues: 1. The overridden __new__ method guarantees that only one instance is ever created; every call to ConfigurationManager() returns that same object.
2. All parts of the code that create an instance of ConfigurationManager will get the same
instance, ensuring that the state is consistent across the application.
3. This prevents the possibility of having different configurations in different parts of the code,
leading to more predictable behavior.
📌 Conclusion
The Singleton pattern is a powerful tool for ensuring that a class has only one instance and
provides a global point of access to that instance. This is especially useful in scenarios like
configuration management, logging, or any other task where it's crucial to maintain a consistent
state across the application.
Let's dive deep into the refactored code and understand how
the Singleton Design Pattern addresses the issues of the
original code.
📌 Singleton Mechanism
In the refactored code, the Singleton pattern is implemented using the __new__ method. The
__new__ method is responsible for creating and returning a new instance of a class. By overriding
this method, we can control the instantiation process of the class.
def __new__(cls):
if not cls._instance:
cls._instance = super(ConfigurationManager, cls).__new__(cls)
cls._instance._config_values = {}
return cls._instance
1. First, we check whether the class-level _instance variable already holds an instance.
2. If it doesn't exist, we create a new instance using the super() function and store it in the
_instance class variable.
3. On that first creation, we also initialize the _config_values dictionary on the new instance.
4. Finally, we return the _instance , whether it was just created or already existed.
📌 How the Singleton pattern solves the original issues:
1. Multiple Instances: In the original code, every call to ConfigurationManager() produced a
fresh object. With the overridden __new__ , only one instance is ever created, and every call
returns it.
2. Inconsistent State Across Instances: In the original code, since multiple instances could be
created, setting a value in one instance wouldn't reflect in another. With the Singleton
pattern, since there's only one instance, any change made to this instance is reflected
everywhere. This ensures a consistent state across the application. For example, setting the
"api_key" in config1 and then retrieving it from config2 gives the expected result because
both config1 and config2 are actually references to the same instance.
3. Preventing Unintended Behavior: Without the Singleton pattern, different parts of the code
could unintentionally work with different instances of the configuration manager, leading to
unpredictable behavior. With the Singleton pattern, all parts of the code that use
ConfigurationManager are guaranteed to work with the same instance, ensuring
predictable and consistent behavior.
📌 Additional Benefits
1. Memory Efficiency: Since only one instance of the class is created, memory usage is
optimized. This can be especially beneficial in larger applications where many parts of the
code might need access to the configuration manager.
2. Global Access Point: The Singleton instance acts as a global access point, ensuring that all
parts of the application can access the configuration without the need to pass the instance
around.
📌 Conclusion
The Singleton Design Pattern in the refactored code ensures that only one instance of the
ConfigurationManager class exists and provides a global point of access to this instance. This
design choice directly addresses the issues present in the original code, leading to a more
consistent, predictable, and efficient application behavior.
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class DatabaseConnection(metaclass=SingletonMeta):
    def __init__(self, connection_string):
        self.connection_string = connection_string
        self.connection = self._create_connection()

    def _create_connection(self):
        # Here, you'd typically establish a connection to the database.
        # For the sake of this example, we'll just simulate it.
        return f"Connected to {self.connection_string}"

    def query(self, sql):
        # Simulate executing a SQL query on the database
        return f"Executing '{sql}' on {self.connection_string}"

# Usage
db1 = DatabaseConnection("Server1")
db2 = DatabaseConnection("Server2")
print(db1 is db2)  # True
📌 In the above code, SingletonMeta is our metaclass that ensures only one instance of any
class that uses it as a metaclass is created. The DatabaseConnection class is a hypothetical class
representing a connection to a database. We've used the Singleton pattern here to ensure that we
have only one connection to the database, no matter how many times we try to instantiate the
DatabaseConnection class.
📌 The __call__ method in the metaclass is a special method that gets called when an object is
instantiated. Here, we're overriding it to check if an instance of the class already exists. If it does,
we return that instance; otherwise, we create a new one.
📌 The DatabaseConnection class has a method query that simulates executing a SQL query on
the database. When we create two objects, db1 and db2 , with different connection strings and
check if they're the same object using the is operator, it returns True . This confirms that our
Singleton implementation is working as expected.
In summary, the Singleton pattern can be a powerful tool when used judiciously. It's essential to
understand the implications of introducing global state and to weigh the pros and cons in the
context of the specific problem you're trying to solve.
class SingletonMeta(type):
_instances = {}
Here, SingletonMeta inherits from the built-in type class, making it a metaclass. The
_instances dictionary is a class-level attribute that will store instances of classes that use this
metaclass.
The __call__ method is a special method in Python. For classes, it's responsible for creating and
returning a new instance of the class. By overriding this method in our metaclass, we can control
the instantiation of classes that use SingletonMeta as their metaclass.
def __call__(cls, *args, **kwargs):
    if cls not in cls._instances:
Here, we're checking if the class ( cls ) already has an instance stored in the _instances
dictionary. If it doesn't, it means this is the first time we're trying to create an instance of this class.
instance = super().__call__(*args, **kwargs)
If the class doesn't have an existing instance, we create a new one. We use the super() function
to call the original __call__ method of the base type class, which will create and return a new
instance of the class.
cls._instances[cls] = instance
After creating the new instance, we store it in the _instances dictionary using the class ( cls ) as
the key. This ensures that the next time we try to create an instance of this class, we'll find it in the
_instances dictionary and return the existing instance instead of creating a new one.
return cls._instances[cls]
Finally, we return the instance of the class from the _instances dictionary. If this is the first time
we're creating an instance, it'll be the new instance we just created. If an instance already exists,
it'll be the existing instance.
To summarize, the SingletonMeta metaclass uses the _instances dictionary to keep track of
instances of classes that use it as their metaclass. By overriding the __call__ method, it ensures
that only one instance of each class is created and stored. Any subsequent attempts to create an
instance of the class will return the existing instance from the _instances dictionary.
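The whole mechanism just summarized can be condensed into one runnable sketch; the `AppConfig` class is an illustrative stand-in for any class that adopts the metaclass:

```python
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Create the instance only on the first call; return it thereafter
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class AppConfig(metaclass=SingletonMeta):
    def __init__(self, name):
        self.name = name

c1 = AppConfig("first")
c2 = AppConfig("second")  # arguments ignored; the existing instance is returned
print(c1 is c2)  # True
print(c2.name)   # first
```

Note that on the second call `__init__` never runs again, which is why `c2.name` is still `"first"`.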
The mechanism that produces these class objects is called a metaclass. The default metaclass in
Python is type .
A metaclass is a "class of a class" that defines how a class behaves. A metaclass allows you to
define properties or methods that are common to a group of classes.
type : This is the built-in metaclass in Python. It's responsible for taking the class body and
turning it into a class object. Whenever you use the class keyword, type is working behind
the scenes to create the class object, even for a simple class like MyClass below.
class MyClass:
pass
# This will print "<class 'type'>", because the type of MyClass is 'type'
print(type(MyClass))
You can also use type directly to create new class objects. The type function can be called with
three arguments: the name of the new class, a tuple containing the base classes (for inheritance),
and a dictionary containing attributes and methods for the class.
For example:
# This creates a new class named 'Foo' with a single attribute 'bar'
Foo = type('Foo', (), {'bar': True})

instance = Foo()
print(instance.bar) # This will print "True"
📌 Since everything in Python is an object, classes themselves are also objects. They are instances
of a higher-order class called a metaclass . This might sound a bit recursive, but think of
metaclasses as "classes of classes."
📌 The built-in type is the most commonly used metaclass. It's responsible for taking a class
definition and turning it into a class object. When you define a simple class like MyClass above,
behind the scenes, Python is using type to create the class object.
class MyClass:
pass
In the second example, we're using type directly to create a new class object. The first argument
is the name of the class, the second is a tuple containing base classes (for inheritance), and the
third is a dictionary containing class attributes.
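As a further illustration (not from the text), the second argument of `type` can also express inheritance; the `Base` and `Child` names are invented for this sketch:

```python
class Base:
    def greet(self):
        return "hello"

# Equivalent to writing: class Child(Base): x = 1
Child = type("Child", (Base,), {"x": 1})

c = Child()
print(c.greet(), c.x)  # hello 1
```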
📌 Custom Metaclasses:
If type is the default metaclass that creates class objects, then why would we ever need a custom
metaclass like SingletonMeta ? The answer is customization. By creating a custom metaclass, we
can customize class creation, modify class attributes, or, as in our Singleton example, control the
instantiation of the class.
When we say:
class SingletonMeta(type):
pass
We're defining a new metaclass SingletonMeta that inherits from type . By inheriting from
type , SingletonMeta gets all the basic mechanisms to be a metaclass but can also introduce
custom behavior, like ensuring a single instance.
In essence, metaclasses allow us to tap into the class creation process, providing a layer of meta-
programming in Python. They're a powerful tool, but they also introduce complexity, so they
should be used judiciously.
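As a hedged illustration of "modifying class attributes" through a custom metaclass, here is a hypothetical `UpperAttrMeta` (invented for this sketch, not from the text) that upper-cases attribute names at class-creation time:

```python
class UpperAttrMeta(type):
    # Hypothetical metaclass: rewrite non-dunder attribute names to upper case
    def __new__(mcls, name, bases, namespace):
        upper = {
            (k if k.startswith("__") else k.upper()): v
            for k, v in namespace.items()
        }
        return super().__new__(mcls, name, bases, upper)

class Config(metaclass=UpperAttrMeta):
    debug = True

print(hasattr(Config, "DEBUG"))  # True
print(hasattr(Config, "debug"))  # False
```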
A Singleton-based configuration manager ensures that:
1. Configuration is loaded only once, even if the manager is accessed from various parts of the
application.
import os
import json
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
class ConfigManager(metaclass=SingletonMeta):
def __init__(self, config_path):
self.config_path = config_path
self._config = {}
self._load_config()
def _load_config(self):
# Load configuration from a file
if os.path.exists(self.config_path):
with open(self.config_path, 'r') as file:
self._config = json.load(file)
# Override with any environment variables
for key, value in os.environ.items():
self._config[key] = value
def _save_config(self):
with open(self.config_path, 'w') as file:
json.dump(self._config, file)
    def get(self, key, default=None):
        return self._config.get(key, default)

    def set(self, key, value):
        self._config[key] = value
        self._save_config()  # For simplicity, persist on every set
# Usage
config1 = ConfigManager("app_config.json")
config2 = ConfigManager("another_config.json")
📌 The _load_config method reads the configuration from a file and then checks for
environment variables that might override these settings.
📌 The get and set methods allow you to retrieve and update configuration values. For
simplicity, every time a value is set, it's also written back to the configuration file.
📌 Even though we tried to instantiate ConfigManager with two different configuration paths,
both config1 and config2 refer to the same object due to the Singleton pattern.
This example demonstrates how a Singleton can be useful in managing global state, like
configuration, in a consistent and controlled manner across a large application.
A Singleton-based resource pool ensures that:
1. The entire application uses the same pool, preventing over-allocation of resources.
import queue
import sqlite3
from contextlib import contextmanager
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
class DatabaseConnectionPool(metaclass=SingletonMeta):
def __init__(self, db_name, max_size=5):
self._db_name = db_name
self._pool = queue.Queue(max_size)
for _ in range(max_size):
self._pool.put(sqlite3.connect(db_name))
@contextmanager
def get_connection(self):
conn = self._pool.get()
try:
yield conn
finally:
self._pool.put(conn)
# Usage
pool1 = DatabaseConnectionPool("my_database.db")
pool2 = DatabaseConnectionPool("another_database.db")
📌 Even though we tried to instantiate DatabaseConnectionPool with two different database
names, both pool1 and pool2 refer to the same object due to the Singleton pattern.
This example showcases how a Singleton can be instrumental in managing and reusing expensive
resources across an application, ensuring efficient utilization and consistent behavior.
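A usage sketch of the pool above, reproducing the classes so it runs standalone; the `:memory:` database and the `items` table are assumptions chosen purely for demonstration:

```python
import queue
import sqlite3
from contextlib import contextmanager

class SingletonMeta(type):
    _instances = {}
    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class DatabaseConnectionPool(metaclass=SingletonMeta):
    def __init__(self, db_name, max_size=5):
        self._db_name = db_name
        self._pool = queue.Queue(max_size)
        for _ in range(max_size):
            self._pool.put(sqlite3.connect(db_name))

    @contextmanager
    def get_connection(self):
        conn = self._pool.get()
        try:
            yield conn
        finally:
            self._pool.put(conn)  # always return the connection to the pool

# Borrow a connection, use it, and return it automatically
pool = DatabaseConnectionPool(":memory:")
with pool.get_connection() as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS items (x INTEGER)")
    conn.execute("INSERT INTO items VALUES (1)")
    rows = conn.execute("SELECT x FROM items").fetchall()
print(rows)  # [(1,)]
```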
1. The entire application uses the same event manager, facilitating communication between all
modules.
2. Events are dispatched to all interested listeners without any module knowing about the
others.
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
class EventManager(metaclass=SingletonMeta):
def __init__(self):
self._listeners = {}
    def subscribe(self, event_name, listener):
        self._listeners.setdefault(event_name, []).append(listener)

    def emit(self, event_name, *args, **kwargs):
        for listener in self._listeners.get(event_name, []):
            listener(*args, **kwargs)
# Usage
event_mgr1 = EventManager()
event_mgr2 = EventManager()
def on_user_created(user):
print(f"User {user['name']} was created with ID {user['id']}!")
def notify_admin(user):
print(f"Admin notified about the creation of user {user['name']}.")
event_mgr1.subscribe("user_created", on_user_created)
event_mgr1.subscribe("user_created", notify_admin)

# Both listeners fire, because event_mgr1 and event_mgr2 are the same object
event_mgr2.emit("user_created", {"name": "Alice", "id": 1})
📌 In this example, EventManager manages a list of listeners (callbacks) for various events.
📌 The subscribe method allows different parts of the application to express interest in specific
events by registering callback functions.
📌 The emit method allows any part of the application to broadcast (emit) an event. When this
happens, all registered listeners for that event get called.
📌 Even though we tried to instantiate EventManager twice, both event_mgr1 and event_mgr2
refer to the same object due to the Singleton pattern.
This example illustrates how a Singleton can be pivotal in creating a decoupled, event-driven
architecture, allowing different parts of an application to interact seamlessly without direct
dependencies.
import logging
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
class CentralizedLogger(metaclass=SingletonMeta):
    def __init__(self, log_file="app.log"):
        self._logger = logging.getLogger("CentralizedLogger")
        self._logger.setLevel(logging.DEBUG)  # Log all levels

        # Create formatter
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

        # Console handler
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(formatter)
        self._logger.addHandler(console_handler)

        # File handler
        file_handler = logging.FileHandler(log_file)
        file_handler.setFormatter(formatter)
        self._logger.addHandler(file_handler)

    def log(self, level, message):
        # Dispatch to the appropriate logging method based on the level name
        getattr(self._logger, level.lower())(message)

# Usage
logger1 = CentralizedLogger()
logger2 = CentralizedLogger()
📌 In this example, CentralizedLogger sets up a logger with two handlers: one for the console
and one for a file. The log format, log file, and other configurations are centralized in this class.
📌 The log method provides a simplified interface to log messages at different levels. It internally
uses the appropriate logging method based on the provided log level.
📌 Even though we tried to instantiate CentralizedLogger twice, both logger1 and logger2
refer to the same object due to the Singleton pattern.
📌 Centralized Log Management: If different components of your application initialize their own
loggers, managing where logs are written, rotating log files, or changing log levels can become
cumbersome. A Singleton logger centralizes these concerns, making management easier.
📌 Stateful Logging Operations: In some advanced scenarios, you might want your logger to
maintain state, such as counting the number of error messages logged. A Singleton logger ensures
that this state is maintained consistently across the application.
📌 Ease of Modification: If you decide to change the logging behavior, format, or add additional
handlers (e.g., sending critical errors to an alerting system), you only need to make changes in one
place. This centralized approach simplifies maintenance and ensures changes are consistently
applied.
In summary, the Singleton pattern, when applied to logging, provides a solution to the challenges
of consistency, resource management, performance, and maintainability. It ensures that the entire
application uses a unified logging approach, leading to cleaner, more predictable, and easier-to-
manage log output.
import threading
import queue
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
class ThreadPoolManager(metaclass=SingletonMeta):
    def __init__(self, num_threads):
        self.tasks = queue.Queue()
        for _ in range(num_threads):
            thread = threading.Thread(target=self._worker)
            thread.daemon = True  # let the main thread exit even if workers are blocking
            thread.start()

    def _worker(self):
        while True:
            func, args, kwargs = self.tasks.get()
            try:
                func(*args, **kwargs)
            except Exception as e:
                print(f"Thread error: {e}")
            finally:
                self.tasks.task_done()

    def submit(self, func, *args, **kwargs):
        self.tasks.put((func, args, kwargs))
# Usage
pool1 = ThreadPoolManager(5)
pool2 = ThreadPoolManager(10)
def greet(name):
print(f"Hello, {name}!")
pool1.submit(greet, "World")
pool1.tasks.join()  # wait for all submitted tasks to finish
📌 The _worker method continuously fetches tasks from the tasks queue and executes them. If
there's an exception while executing a task, it's caught and printed.
📌 The submit method allows you to submit new tasks to the thread pool. These tasks are added
to the tasks queue and will be picked up by one of the worker threads.
This example demonstrates how a Singleton can be instrumental in managing a shared resource,
like a thread pool, across an application. It ensures that threads are efficiently utilized, and tasks
are executed concurrently without the overhead of constantly creating and destroying threads.
Let's consider a scenario where you want a centralized cache system that:
Uses a simple eviction strategy to ensure the cache doesn't grow indefinitely.
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
class FunctionCache(metaclass=SingletonMeta):
    def __init__(self, max_size=100):
        self.cache = {}
        self.max_size = max_size

    def memoize(self, func):
        def wrapper(*args, **kwargs):
            key = (func.__name__, args, tuple(sorted(kwargs.items())))
            if key not in self.cache:
                if len(self.cache) >= self.max_size:
                    self._evict()
                self.cache[key] = func(*args, **kwargs)
            return self.cache[key]
        return wrapper

    def _evict(self):
        # Simple eviction strategy: remove a random item.
        # In a real-world scenario, you'd likely use a more sophisticated strategy.
        self.cache.pop(next(iter(self.cache)))
# Usage
cache1 = FunctionCache(max_size=50)
cache2 = FunctionCache(max_size=200)
@cache1.memoize
def expensive_operation(x):
    return x * 2  # Simulated expensive operation (illustrative body)
@cache2.memoize
def another_expensive_operation(x):
return x ** 2 # Another expensive operation
📌 In this example, FunctionCache provides a memoize decorator that caches the results of
function calls. The cache key is derived from the function, its arguments, and keyword arguments.
📌 The _evict method provides a simple eviction strategy. When the cache exceeds its
max_size , it removes a random item. In a real-world scenario, you might implement a Least
Recently Used (LRU) or another eviction strategy.
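As an aside not covered in the text: Python's standard library already ships an LRU-based memoizer, `functools.lru_cache`, which implements exactly the "more sophisticated strategy" mentioned above:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive(x):
    return x ** 2  # stand-in for an expensive computation

print(expensive(4))                    # 16
print(expensive.cache_info().hits)     # 0 (first call was a miss)
print(expensive(4))                    # 16 (served from cache)
print(expensive.cache_info().hits)     # 1
```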
📌 Even though we tried to instantiate FunctionCache with different max sizes, both cache1 and
cache2 refer to the same object due to the Singleton pattern.
This example showcases how a Singleton can be instrumental in providing a centralized caching
mechanism, ensuring consistent cache behavior and efficient resource utilization across an
application.
Imagine you're building an application that automates various tasks on a website. Initializing a
new browser driver for every task can be time-consuming and resource-intensive. Instead, you can
use a Singleton pattern to ensure that the entire application uses a single instance of the browser
driver.
from selenium import webdriver

# SingletonMeta is the metaclass defined in the earlier examples
class BrowserDriver(metaclass=SingletonMeta):
    def __init__(self):
        self.driver = webdriver.Chrome()  # Initialize the Chrome browser driver

    def get_page(self, url):
        self.driver.get(url)
        return self.driver.page_source

    def close(self):
        self.driver.quit()
# Usage
driver1 = BrowserDriver()
driver2 = BrowserDriver()
📌 In this example, BrowserDriver wraps around the Selenium's Chrome browser driver. It
provides a method get_page to navigate to a URL and fetch the page's content.
📌 Even though we tried to instantiate BrowserDriver twice, both driver1 and driver2 refer
to the same object due to the Singleton pattern.
📌 The close method ensures that the browser is closed properly when the application is done
using it.
This example demonstrates how a Singleton can be instrumental in managing a shared resource,
like a browser driver, across an application. It ensures efficient utilization of resources and
consistent behavior, as all parts of the application interact with the same browser instance.
🐍🚀 State Design Pattern in Python 🐍🚀
📌 The State Design Pattern is a behavioral design pattern that allows an object to change its
behavior when its internal state changes. This pattern involves encapsulating varying behavior for
the same routine in different state classes. The primary objective is to make a system more
maintainable and organized by separating concerns.
You can use the state pattern to implement state-specific behavior in which objects change
their functionality at runtime, and to avoid conditional statements when changing an object's
behavior based on its state. In the state pattern, you encapsulate the different states in separate
State classes. The original class keeps a reference to a state object representing its current state,
rather than using conditional statements to implement state-dependent functionality.
1) Context - it is the original class of our application. It maintains a reference to one of the
concrete states on which its behavior depends. It also has a method to modify the internal state.
2) State interface - All supported states share the same state interface. Only the state interface
allows Context to communicate with state objects. Context can only communicate with state
objects via the state interface.
3) Concrete states - For each state, these objects implement the 'State' interface. These are the
main objects which contain the state-specific methods.
📌 Use Cases: - Text editors can have states like "Insert Mode" or "Command Mode", each with
different behavior for the same keypress. - A TCP connection can have states like "Established",
"Listen", "Closed", and actions on the connection (like send or close ) behave differently
depending on the state.
📌 The State Design Pattern can be seen as a strategy pattern but for situations where the
strategy can change dynamically during the lifetime of the object.
class VideoPlayer:
def __init__(self):
self.state = "Stopped"
def play(self):
if self.state == "Stopped":
print("Starting video from the beginning.")
self.state = "Playing"
elif self.state == "Playing":
print("Video is already playing.")
elif self.state == "Paused":
print("Resuming video.")
self.state = "Playing"
elif self.state == "Buffering":
print("Wait until buffering completes.")
def pause(self):
if self.state == "Playing":
print("Pausing video.")
self.state = "Paused"
elif self.state == "Paused":
print("Video is already paused.")
elif self.state == "Stopped":
print("Video is stopped. Play it first.")
elif self.state == "Buffering":
print("Wait until buffering completes.")
def stop(self):
if self.state in ["Playing", "Paused", "Buffering"]:
print("Stopping video.")
self.state = "Stopped"
else:
print("Video is already stopped.")
def buffer(self):
if self.state == "Playing":
print("Buffering video. Please wait.")
self.state = "Buffering"
else:
print("Buffering only happens during playback.")
📌 Issues with this approach: Adding a new state or modifying an existing state's behavior
requires changes to multiple methods, violating the Open/Closed Principle. The code isn't
modular, and the behaviors for each state are intertwined, making it prone to errors.
class PlayerState:
def play(self):
pass
def pause(self):
pass
def stop(self):
pass
def buffer(self):
pass
class PlayingState(PlayerState):
def play(self):
print("Video is already playing.")
def pause(self):
print("Pausing video.")
def buffer(self):
print("Buffering video. Please wait.")
class PausedState(PlayerState):
def play(self):
print("Resuming video.")
def pause(self):
print("Video is already paused.")
class StoppedState(PlayerState):
def play(self):
print("Starting video from the beginning.")
def stop(self):
print("Video is already stopped.")
class BufferingState(PlayerState):
def play(self):
print("Wait until buffering completes.")
def pause(self):
print("Wait until buffering completes.")
class VideoPlayer:
def __init__(self):
self.state = StoppedState()
def play(self):
self.state.play()
def pause(self):
self.state.pause()
def stop(self):
self.state.stop()
def buffer(self):
self.state.buffer()
The VideoPlayer class is cleaner and delegates state-specific behaviors to the state objects.
Adding a new state or modifying an existing one is easier and doesn't require changes to the
VideoPlayer class, adhering to the Open/Closed Principle.
In conclusion, the State Design Pattern offers a structured approach to handle objects that have
different behaviors based on their internal states. By encapsulating each state's behavior in
separate classes and delegating state-specific behaviors to these classes, the design becomes
more modular, maintainable, and adheres to good software design principles.
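The Open/Closed benefit can be demonstrated with a condensed sketch: a brand-new state plugs in without touching `VideoPlayer`. The `SlowMotionState` class is hypothetical, invented for this sketch, and the state classes are trimmed for brevity:

```python
class PlayerState:
    def play(self): pass

class StoppedState(PlayerState):
    def play(self):
        print("Starting video from the beginning.")

class SlowMotionState(PlayerState):  # hypothetical new state
    def play(self):
        print("Playing in slow motion.")

class VideoPlayer:
    def __init__(self):
        self.state = StoppedState()
    def play(self):
        self.state.play()  # delegate to whatever state is current

player = VideoPlayer()
player.play()                     # Starting video from the beginning.
player.state = SlowMotionState()  # the new state plugs in unchanged
player.play()                     # Playing in slow motion.
```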
Let's see how exactly the Code With State Design Pattern
solved the above issues
📌 Issue 1: Cluttered Code with Conditional Statements: In the code without the State Design
Pattern, the VideoPlayer class was filled with conditional statements to check the current state
before deciding on the behavior. This made the code hard to read and maintain.
Solution with State Design Pattern: The State Design Pattern encapsulates each state's behavior
in its own class. This means that the behavior for each state is defined within that state's class,
eliminating the need for conditional checks in the main VideoPlayer class. The VideoPlayer
simply delegates the action to the current state object, which inherently knows its behavior. This
results in a cleaner and more organized code structure.
📌 Issue 2: Violation of the Open/Closed Principle: In the original code, introducing a new state
or modifying an existing state's behavior required changes to multiple methods within the
VideoPlayer class.
Solution with State Design Pattern: With the State Design Pattern, each state's behavior is
defined within its own class. If a new state needs to be introduced, a new class for that state is
created without modifying the existing classes. Similarly, if the behavior of an existing state needs
to be changed, only the specific state class needs to be modified. The VideoPlayer class remains
untouched, adhering to the Open/Closed Principle.
📌 Issue 3: Lack of Modularity: The original code intertwined the behaviors of all states, making
it prone to errors and harder to debug or extend.
Solution with State Design Pattern: The State Design Pattern promotes modularity by
separating the behavior of each state into its own class. This separation ensures that the logic for
each state is isolated from the others, reducing the risk of errors when modifying one state's
behavior. It also makes the system more maintainable, as developers can focus on individual state
classes without affecting the others.
📌 Under-the-hood Theory: The State Design Pattern leverages the power of polymorphism. By
having each state class implement a common interface (or inherit from a common base class), the
main context class ( VideoPlayer in our case) can interact with any state object interchangeably.
This dynamic dispatch capability, where the method that gets executed is determined at runtime
based on the object's class, is a cornerstone of object-oriented programming and is efficiently
handled by the Python interpreter.
In summary, the State Design Pattern addresses the issues of the original code by providing a
structured and modular approach. It encapsulates each state's behavior in separate classes,
promotes the Open/Closed Principle, and leverages polymorphism to delegate state-specific
behaviors, resulting in a more maintainable and less error-prone design.
class State:
def play(self):
pass
def pause(self):
pass
def stop(self):
pass
class PlayingState(State):
def play(self):
print("Already playing. No action taken.")
def pause(self):
print("Pausing music.")
def stop(self):
print("Stopping music.")
class PausedState(State):
def play(self):
print("Resuming music.")
def pause(self):
print("Already paused. No action taken.")
def stop(self):
print("Stopping music.")
class StoppedState(State):
def play(self):
print("Starting music from the beginning.")
def pause(self):
print("Can't pause. Music is already stopped.")
def stop(self):
print("Already stopped. No action taken.")
class MusicPlayer:
def __init__(self):
self.state = StoppedState()
def play(self):
self.state.play()
def pause(self):
self.state.pause()
def stop(self):
self.state.stop()
    def set_state(self, state):
        self.state = state
The MusicPlayer class has an attribute state that holds its current state. It delegates the
play , pause , and stop commands to the current state object.
The set_state method allows the MusicPlayer to change its current state.
📌 The beauty of this design is that if we want to add a new state or change the behavior of an
existing state, we can do so without modifying the MusicPlayer class. This adheres to the
Open/Closed Principle, which states that software entities should be open for extension but closed
for modification.
📌 Under-the-hood: When you call a method on the MusicPlayer object, it delegates the call to
the corresponding method of the current state object. This is a form of runtime polymorphism.
The actual method that gets executed depends on the type (class) of the current state object. This
dynamic dispatch is achieved through Python's dynamic typing and method overriding
capabilities.
📌 Context Class: The pattern requires a context class that maintains an instance of a state
subclass to define its current state. In our example, the MusicPlayer class serves as the context.
It has an attribute state that holds its current state, which is an instance of one of the state
subclasses.
📌 Delegation: The State Design Pattern dictates that the context class should delegate state-
specific requests to the current state object. In the MusicPlayer class, methods like play() ,
pause() , and stop() don't directly implement the behavior. Instead, they delegate these calls to
the corresponding methods of the current state object ( self.state.play() ,
self.state.pause() , and self.state.stop() ).
📌 State Transitions: While the provided code doesn't automatically transition between states, it
provides a mechanism to do so with the set_state method in the MusicPlayer class. This
method allows the context ( MusicPlayer ) to change its current state, which is a fundamental
aspect of the State Design Pattern.
📌 Flexibility and Open/Closed Principle: The State Design Pattern promotes flexibility. If we
need to introduce a new state or modify an existing one, we can do so without altering the context
class ( MusicPlayer ). This is in line with the Open/Closed Principle, which suggests that classes
should be open for extension but closed for modification. In our code, adding a new state would
involve creating a new state class without needing to modify the MusicPlayer class.
📌 Consistency in Interface: All state classes in the State Design Pattern should have a
consistent interface so that they can be interchangeably used by the context class. In our example,
all state classes ( PlayingState , PausedState , StoppedState ) inherit from the abstract State
class, ensuring they all have the methods play() , pause() , and stop() . This consistency allows
the MusicPlayer class to delegate calls without worrying about the method's availability in the
current state object.
In conclusion, the provided code example adheres to the principles and requirements of the State
Design Pattern by encapsulating each state's behavior in separate classes, maintaining a context
class that holds the current state, delegating state-specific behaviors to the current state object,
providing a mechanism for state transitions, ensuring flexibility, and maintaining a consistent
interface across all state classes.
class UserRole:
    def view(self):
        pass

    def edit(self):
        pass

    def delete(self):
        pass

class AdminRole(UserRole):
    def view(self):
        print("Admin: Viewing document.")

    def edit(self):
        print("Admin: Editing document.")

    def delete(self):
        print("Admin: Deleting document.")

class EditorRole(UserRole):
    def view(self):
        print("Editor: Viewing document.")

    def edit(self):
        print("Editor: Editing document.")

    def delete(self):
        print("Editor: Sorry, you cannot delete the document.")

class ViewerRole(UserRole):
    def view(self):
        print("Viewer: Viewing document.")

    def edit(self):
        print("Viewer: Sorry, you cannot edit the document.")

    def delete(self):
        print("Viewer: Sorry, you cannot delete the document.")

class DocumentEditor:
    def __init__(self):
        self.role = ViewerRole()  # default role

    def set_role(self, role):
        self.role = role

    def view_document(self):
        self.role.view()

    def edit_document(self):
        self.role.edit()

    def delete_document(self):
        self.role.delete()
📌 Usage:
editor = DocumentEditor()
editor.set_role(EditorRole())
editor.edit_document() # Editor: Editing document.
editor.delete_document() # Editor: Sorry, you cannot delete the document.
editor.set_role(AdminRole())
editor.delete_document() # Admin: Deleting document.
📌 Description:
We have an abstract UserRole class that defines methods view , edit , and delete .
Three concrete role classes ( AdminRole , EditorRole , and ViewerRole ) inherit from
UserRole and provide their own implementations for the methods based on the
permissions associated with each role.
The DocumentEditor class (context) has an attribute role that holds its current user role. It
delegates the view_document , edit_document , and delete_document commands to the
current role object.
The set_role method allows the DocumentEditor to change its current role.
📌 Complexity:
This example is more complex than the previous one because it models a real-world scenario
where different user roles have varying permissions in a document editor. The State Design
Pattern elegantly handles the varying behaviors without cluttering the main DocumentEditor
class with conditional statements. Instead, each role's behavior is encapsulated in its respective
class, making the system more modular and maintainable.
📌 Encapsulation of States: In the provided code, each user role (state) is encapsulated within its
own class: AdminRole , EditorRole , and ViewerRole . This encapsulation ensures that the
behavior specific to each role is contained within its respective class, adhering to the principle of
the State Design Pattern that requires each state to be represented as a separate entity.
📌 Context Class: The DocumentEditor class serves as the context in this example. It maintains
an instance of a user role (state subclass) in its role attribute, which defines its current state. The
State Design Pattern mandates the presence of such a context class that holds a reference to one
of the state objects to define its current state.
📌 Delegation: The State Design Pattern emphasizes that the context class should delegate state-
specific requests to the associated state object. This principle is evident in the DocumentEditor
class. When methods like view_document() , edit_document() , and delete_document() are
invoked, the DocumentEditor doesn't directly execute the behavior. Instead, it delegates these
requests to the corresponding methods ( view() , edit() , and delete() ) of the current role
object ( self.role ).
📌 State Transitions: The set_role method in the DocumentEditor class facilitates transitions
between states. By invoking this method, the context ( DocumentEditor ) can change its current
role (state). This mechanism is central to the State Design Pattern, allowing objects to dynamically
alter their behavior by transitioning between states.
📌 Flexibility and Open/Closed Principle: The design of the provided code promotes
adaptability. If a new user role needs to be introduced or if the behavior of an existing role needs
modification, it can be achieved without altering the DocumentEditor class. This approach aligns
with the Open/Closed Principle, suggesting that entities should be open for extension but closed
for modification. For instance, if a new ContributorRole needs to be added, one would simply
create a new state class without modifying the existing DocumentEditor class.
📌 Consistency in Interface: For the State Design Pattern to function seamlessly, all state classes
should present a consistent interface. This ensures that they can be interchangeably used by the
context class. In the provided code, all role classes ( AdminRole , EditorRole , ViewerRole ) inherit
from the abstract UserRole class. This inheritance guarantees that they all possess the methods
view() , edit() , and delete() . Such consistency allows the DocumentEditor to delegate
method calls without concerns about the method's presence in the current role object.
In summary, the given code example adheres to the principles and requirements of the State
Design Pattern by encapsulating the behavior of each state within separate classes, maintaining a
context class that holds the current state, delegating state-specific behaviors to the state object,
offering a mechanism for state transitions, ensuring system flexibility, and preserving a consistent
interface across all state classes.
🐍🚀 Strategy Pattern in Python 🐍🚀
📌 Strategy Pattern: The Strategy Pattern is a behavioral design pattern that defines a family of
algorithms, encapsulates each one, and makes them interchangeable. It lets the algorithm vary
independently from the clients that use it. In simpler terms, it allows you to switch between
different methods or strategies at runtime without altering the code that uses these methods.
📌 Why use the Strategy Pattern?: 1. It promotes the Open/Closed Principle, which states that
classes should be open for extension but closed for modification. This means you can introduce
new strategies without changing the existing code. 2. It helps to avoid large conditional statements
or switch cases when deciding which algorithm to use. 3. It provides a clear separation between
the classes that use a strategy and the strategies themselves.
📌 Use Cases: 1. Sorting algorithms: Depending on the type and size of data, you might want to
switch between quicksort, mergesort, or bubblesort. 2. Payment methods: In an e-commerce
application, you might have multiple payment methods like credit card, PayPal, or bank transfer. 3.
Compression algorithms: Depending on the requirements, you might want to switch between
different compression methods like ZIP, RAR, or TAR.
Consider an e-commerce application that calculates shipping costs based on different shipping
methods. Here's a naive implementation:
class Order:
    def __init__(self, total, shipping_method):
        self.total = total
        self.shipping_method = shipping_method

    def calculate_shipping_cost(self):
        if self.shipping_method == "standard":
            return self.total * 0.05
        elif self.shipping_method == "express":
            return self.total * 0.10
        elif self.shipping_method == "overnight":
            return self.total * 0.20
        else:
            raise ValueError("Invalid shipping method")
📌 Violation of Open/Closed Principle: The code is not open for extension (adding new shipping
methods) without modifying the existing code.
📌 Lack of Reusability: The shipping calculation logic is embedded within the Order class and
cannot be reused elsewhere.
To implement the Strategy Pattern, we'll define a family of algorithms (shipping methods) and
encapsulate each one. Then, we'll make them interchangeable within the Order class.
from abc import ABC, abstractmethod

class ShippingStrategy(ABC):
    @abstractmethod
    def calculate(self, total):
        pass

class StandardShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.05

class ExpressShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.10

class OvernightShipping(ShippingStrategy):
    def calculate(self, total):
        return total * 0.20

class Order:
    def __init__(self, total, shipping_strategy):
        self.total = total
        self.shipping_strategy = shipping_strategy

    def calculate_shipping_cost(self):
        return self.shipping_strategy.calculate(self.total)

# Client code
order1 = Order(100, ExpressShipping())
print(order1.calculate_shipping_cost())  # Outputs: 10.0
📌 Easily Extensible: To add a new shipping method, simply create a new class that implements
the ShippingStrategy interface. No need to modify the existing Order class.
📌 Increased Reusability: The shipping calculation logic can now be reused in other parts of the
application if needed.
📌 Flexibility: The client can now easily switch between different shipping methods at runtime
without altering the code that uses these methods.
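The runtime switch can be sketched as follows (a minimal, self-contained version; the abstract base class is omitted for brevity, and assigning to the shipping_strategy attribute directly is an assumption made for illustration):

```python
class ExpressShipping:
    def calculate(self, total):
        return total * 0.10

class OvernightShipping:
    def calculate(self, total):
        return total * 0.20

class Order:
    def __init__(self, total, shipping_strategy):
        self.total = total
        self.shipping_strategy = shipping_strategy

    def calculate_shipping_cost(self):
        # Delegates the calculation to whichever strategy is plugged in
        return self.shipping_strategy.calculate(self.total)

order = Order(100, ExpressShipping())
print(order.calculate_shipping_cost())   # 10.0

# Swap the algorithm at runtime; Order's code is untouched
order.shipping_strategy = OvernightShipping()
print(order.calculate_shipping_cost())   # 20.0
```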
I hope this provides a clear understanding of the Strategy Pattern in Python and its benefits.
Let's delve deeper into how the refactored code using the
Strategy Pattern addresses the issues present in the original
code.
📌 Issue: Tight Coupling
In the original code, the Order class was responsible for both managing order details and
calculating shipping costs based on the shipping method. This means that the Order class was
tightly coupled with the shipping calculation logic.
Solution with Strategy Pattern: The refactored code decouples the Order class from the
shipping calculation logic by introducing a family of algorithms (shipping strategies) encapsulated
within their own classes ( StandardShipping , ExpressShipping , OvernightShipping ). The
Order class now only needs to interact with the ShippingStrategy interface, making it loosely
coupled. This separation ensures that changes to one part (e.g., adding a new shipping method)
don't necessitate changes to the other parts.
📌 Issue: Violation of the Open/Closed Principle
Solution with Strategy Pattern: The Strategy Pattern promotes the Open/Closed Principle. The
refactored code is open for extension (adding new shipping methods) without needing to modify
existing classes. If a new shipping method needs to be added, one can simply create a new class
implementing the ShippingStrategy interface. The existing Order class remains untouched,
ensuring that existing functionality is not jeopardized by new extensions.
📌 Issue: Lack of Reusability
Solution with Strategy Pattern: By encapsulating the shipping calculation logic within separate
strategy classes, the refactored code promotes reusability. Each shipping strategy
( StandardShipping , ExpressShipping , OvernightShipping ) can now be reused in other parts
of the application if needed. For instance, if there's a need to provide a shipping cost estimator
tool elsewhere in the application, these strategy classes can be leveraged without duplicating the
calculation logic.
Solution with Strategy Pattern: In the refactored code, the client can instantiate an Order
object with any shipping strategy without altering the code that uses these methods. This dynamic
behavior is a hallmark of the Strategy Pattern, allowing for easy interchangeability of algorithms or
methods at runtime.
In summary, the Strategy Pattern in the refactored code effectively addresses the issues of tight
coupling, violation of the Open/Closed Principle, and lack of reusability present in the original
code. It also introduces the added benefit of flexibility in switching between strategies.
from abc import ABC, abstractmethod

class PaymentStrategy(ABC):
    @abstractmethod
    def pay(self, amount: float) -> str:
        pass

class CreditCardPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using Credit Card."

class PayPalPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using PayPal."

class BankTransferPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using Bank Transfer."

class ShoppingCart:
    def __init__(self, payment_strategy: PaymentStrategy):
        self.payment_strategy = payment_strategy

    def checkout(self, amount: float) -> None:
        print(self.payment_strategy.pay(amount))

# Client code
cart = ShoppingCart(CreditCardPayment())
cart.checkout(100.0)

cart = ShoppingCart(PayPalPayment())
cart.checkout(100.0)
📌 Under-the-hood: When you use the Strategy Pattern, you're essentially leveraging
polymorphism. The context class ( ShoppingCart in our example) doesn't know the specifics of the
concrete strategy it's using. It only knows about the strategy interface. This decoupling is what
allows us to switch strategies on-the-fly. The actual method that gets called is determined at
runtime based on the object's type, a concept known as dynamic dispatch.
📌 Benefits: 1. The Strategy Pattern provides a clear separation of concerns. Each strategy is in its
own class, making it easy to add, remove, or modify strategies without affecting other parts of the
code. 2. It promotes code reusability. The same strategy can be used in different parts of the
application or even in different applications. 3. It simplifies unit testing. Each strategy can be
tested independently of the context class.
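To illustrate the third benefit, a strategy can be exercised in isolation, without the ShoppingCart context (a minimal sketch using the PayPalPayment strategy from the example above):

```python
class PayPalPayment:
    def pay(self, amount: float) -> str:
        return f"Paying ${amount} using PayPal."

# The unit test touches only the strategy, never the context class
def test_paypal_payment():
    assert PayPalPayment().pay(100.0) == "Paying $100.0 using PayPal."

test_paypal_payment()
```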
I hope this gives you a comprehensive understanding of the Strategy Pattern in Python!
📌 abstractmethod: This specific decorator, when applied to a method within a class, designates
that method as being abstract. An abstract method is a method that is declared but does not have
an implementation within the class it's declared in.
📌 Must be overridden: If a class has an abstract method, it means that any subclass (or derived
class) that is intended to be instantiated (i.e., you want to create objects of that subclass) must
provide an implementation for this abstract method. If it doesn't, Python will raise a TypeError
when you try to create an instance of that subclass.
📌 Non-abstract derived class: A derived class (or subclass) that provides implementations for
all the abstract methods of its base class is termed as non-abstract. If a derived class does not
provide implementations for all the abstract methods, it remains abstract, and you can't create
instances of it.
from abc import ABC, abstractmethod

class AbstractClass(ABC):
    @abstractmethod
    def abstract_method(self):
        pass

class DerivedClass(AbstractClass):
    pass  # no implementation provided: this subclass remains abstract

class AnotherDerivedClass(AbstractClass):
    def abstract_method(self):
        print("Implemented abstract_method in AnotherDerivedClass")

If you try:

# obj = DerivedClass()  # Raises TypeError: can't instantiate an abstract class
obj = AnotherDerivedClass()
obj.abstract_method()  # This will print: "Implemented abstract_method in AnotherDerivedClass"
In essence, the abstractmethod decorator is a way to enforce a contract on subclasses. It ensures
that any non-abstract subclass provides concrete implementations for certain methods deemed
essential by the abstract base class.
from abc import ABC, abstractmethod
from typing import List

class ImageFilterStrategy(ABC):
    @abstractmethod
    def apply_filter(self, image: List[List[int]]) -> List[List[int]]:
        pass

class BlackAndWhiteFilter(ImageFilterStrategy):
    def apply_filter(self, image: List[List[int]]) -> List[List[int]]:
        # Simplified logic: replace each pixel with its average intensity
        for i in range(len(image)):
            for j in range(len(image[i])):
                r, g, b = image[i][j]
                gray = (r + g + b) // 3
                image[i][j] = [gray, gray, gray]
        return image

class SepiaFilter(ImageFilterStrategy):
    def apply_filter(self, image: List[List[int]]) -> List[List[int]]:
        # Simplified logic for applying sepia filter
        for i in range(len(image)):
            for j in range(len(image[i])):
                r, g, b = image[i][j]
                tr = int(0.393 * r + 0.769 * g + 0.189 * b)
                tg = int(0.349 * r + 0.686 * g + 0.168 * b)
                tb = int(0.272 * r + 0.534 * g + 0.131 * b)
                image[i][j] = [tr, tg, tb]
        return image

class ImageProcessor:
    def __init__(self, filter_strategy: ImageFilterStrategy):
        self.filter_strategy = filter_strategy

    def process(self, image: List[List[int]]) -> List[List[int]]:
        return self.filter_strategy.apply_filter(image)

# Client code
image = [
    [[255, 0, 0], [0, 255, 0], [0, 0, 255]],
    [[128, 128, 128], [64, 64, 64], [32, 32, 32]]
]

processor = ImageProcessor(BlackAndWhiteFilter())
bw_image = processor.process(image)
print(bw_image)

processor = ImageProcessor(SepiaFilter())
sepia_image = processor.process(image)
print(sepia_image)
📌 Explanation of the Code: 1. We start with the ImageFilterStrategy interface that has an
abstract method apply_filter . This interface will be implemented by all concrete filter
strategies. 2. We then define two concrete strategies: BlackAndWhiteFilter and SepiaFilter .
Each implements the apply_filter method with its own logic. 3. The ImageProcessor class,
which acts as the context, uses an image filter strategy to process the image. It's unaware of the
specifics of the filter method; it just invokes the apply_filter method. 4. In the client code, we
can easily switch between image filters by changing the strategy passed to the ImageProcessor .
📌 Under-the-hood: The Strategy Pattern, in this context, allows for a dynamic selection of image
processing algorithms. The ImageProcessor class doesn't need to be aware of the specifics of
each filter. Instead, it relies on the strategy interface, which abstracts the details. This makes it
easy to introduce new filters or modify existing ones without changing the ImageProcessor class.
📌 Benefits: 1. Scalability: As the application grows, adding new filters becomes straightforward.
Just implement a new strategy and integrate it with the client code. 2. Maintenance: Each filter
logic is encapsulated in its own class, making it easier to pinpoint issues or make updates. 3.
Flexibility: Users can dynamically choose the filter they want to apply, providing a versatile user
experience.
This example showcases how the Strategy Pattern can be applied to a real-world scenario in image
processing, making the application more modular and extensible.
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    @abstractmethod
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
        pass

class HolidayDiscount(PricingStrategy):
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
        # Assuming a simplistic flat discount during the holiday season
        return base_price * 0.9  # 10% discount

class StockClearanceDiscount(PricingStrategy):
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
        # Assuming a simplistic logic where certain products need clearance
        if product in ["old_model_shoe", "last_season_dress"]:
            return base_price * 0.7  # 30% discount
        return base_price

class LoyalCustomerDiscount(PricingStrategy):
    def calculate_price(self, base_price: float, product: str, user: str) -> float:
        # Assuming a simplistic logic where certain users are considered loyal
        if user in ["user123", "user456"]:
            return base_price * 0.85  # 15% discount
        return base_price

class ECommercePlatform:
    def __init__(self, pricing_strategy: PricingStrategy):
        self._pricing_strategy = pricing_strategy

    def checkout(self, base_price: float, product: str, user: str) -> float:
        return self._pricing_strategy.calculate_price(base_price, product, user)

# Client code
platform = ECommercePlatform(HolidayDiscount())
print(platform.checkout(100.0, "new_model_shoe", "user789"))

platform = ECommercePlatform(StockClearanceDiscount())
print(platform.checkout(100.0, "old_model_shoe", "user789"))

platform = ECommercePlatform(LoyalCustomerDiscount())
print(platform.checkout(100.0, "new_model_shoe", "user123"))
📌 Explanation of the Code: 1. We initiate with the PricingStrategy interface that has an
abstract method calculate_price . This interface will be implemented by all concrete pricing
strategies. 2. We then define three concrete strategies: HolidayDiscount ,
StockClearanceDiscount , and LoyalCustomerDiscount . Each implements the
calculate_price method based on its own criteria. 3. The ECommercePlatform class, acting as
the context, uses a pricing strategy to determine the final price. It's agnostic of the specifics of the
pricing method; it simply calls the calculate_price method. 4. In the client code, we can
effortlessly switch between pricing strategies by altering the strategy passed to the
ECommercePlatform .
📌 Under-the-hood: The Strategy Pattern here allows for a dynamic selection of pricing
algorithms. The ECommercePlatform class doesn't need to be aware of the specifics of each
pricing strategy. It relies on the strategy interface, which abstracts the details. This makes it easy to
introduce new pricing strategies or modify existing ones without changing the
ECommercePlatform class.
📌 Benefits: 1. Adaptability: As market conditions change, new pricing strategies can be added
without disrupting existing code. 2. Separation of Concerns: Each pricing logic is encapsulated in
its own class, ensuring that changes in one strategy don't affect others. 3. User Experience: By
offering dynamic pricing, users can benefit from various discounts, enhancing their shopping
experience.
This example illustrates how the Strategy Pattern can be applied to a real-world scenario in e-
commerce, making the platform more adaptable and user-friendly.
from abc import ABC, abstractmethod
from typing import List, Tuple

class RouteStrategy(ABC):
    @abstractmethod
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
        pass

class ShortestRoute(RouteStrategy):
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
        # Simplified logic: go straight from start to end
        return [start, end]

class ScenicRoute(RouteStrategy):
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
        # Simplified logic for a more scenic route
        mid_point = ((start[0] + end[0]) // 2, (start[1] + end[1]) // 2)
        return [start, mid_point, end]

class AvoidTrafficRoute(RouteStrategy):
    def find_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
        # Simplified logic to avoid traffic
        detour = (start[0], end[1])
        return [start, detour, end]

class NavigationSystem:
    def __init__(self, route_strategy: RouteStrategy):
        self._route_strategy = route_strategy

    def plan_route(self, start: Tuple[int, int], end: Tuple[int, int]) -> List[Tuple[int, int]]:
        return self._route_strategy.find_route(start, end)

# Client code
nav_system = NavigationSystem(ShortestRoute())
print(nav_system.plan_route((0, 0), (10, 10)))

nav_system = NavigationSystem(ScenicRoute())
print(nav_system.plan_route((0, 0), (10, 10)))

nav_system = NavigationSystem(AvoidTrafficRoute())
print(nav_system.plan_route((0, 0), (10, 10)))
📌 Explanation of the Code: 1. We begin with the RouteStrategy interface that has an abstract
method find_route . This interface will be implemented by all concrete routing strategies. 2. We
then define three concrete strategies: ShortestRoute , ScenicRoute , and AvoidTrafficRoute .
Each implements the find_route method based on its own criteria. 3. The NavigationSystem
class, acting as the context, uses a routing strategy to determine the best route. It's unaware of
the specifics of the routing method; it simply calls the find_route method. 4. In the client code,
we can easily switch between routing strategies by changing the strategy passed to the
NavigationSystem .
📌 Under-the-hood: The Strategy Pattern here allows for a dynamic selection of routing
algorithms. The NavigationSystem class doesn't need to be aware of the specifics of each routing
strategy. It relies on the strategy interface, which abstracts the details. This makes it easy to
introduce new routing strategies or modify existing ones without changing the NavigationSystem
class.
📌 Benefits: 1. Flexibility: Users can choose the route type they prefer, whether it's the fastest,
the most scenic, or one that avoids traffic. 2. Maintainability: Each routing logic is encapsulated
in its own class, ensuring that changes in one strategy don't affect others. 3. Expandability: As
new routing criteria emerge (e.g., routes that avoid tolls or routes optimized for cycling), new
strategies can be added seamlessly.
This example showcases how the Strategy Pattern can be applied to a real-world scenario in
navigation systems, making the platform more flexible and user-centric.
🐍🚀 Template Design Pattern 🐍🚀
📌 The Template Design Pattern is a behavioral design pattern that defines the skeleton of an
algorithm in a method but defers some steps to subclasses, i.e., it leaves the details to be
implemented by the child classes.
It allows subclasses to override certain steps of an algorithm without changing the algorithm's
structure.
It is one of the easiest behavioral patterns to understand and implement, is widely used in
framework development, and also helps to avoid code duplication.
The AbstractClass contains the template method, which should not be overridden by subclasses
(languages such as Java declare it final; in Python this is enforced only by convention). The
template method runs the algorithm by calling the other operations of the class, but it is
decoupled from the actual implementation of those operations: they are declared abstract, so
their implementation is deferred to subclasses.
A ConcreteClass implements all the operations required by the template method that were defined
as abstract in the parent class. There can be many different ConcreteClasses.
📌 Use Cases: 1. When you want to let clients extend only particular steps of an algorithm, but
not the whole algorithm or its structure. 2. When you have several classes that contain almost
identical algorithms with some minor differences. As a result, you might need to modify all classes
when the algorithm changes.
📌 The main idea behind this pattern is to define a method (often termed the "template method")
in an abstract base class. This method contains a series of method calls that every subclass will
execute in the same order, but the exact implementation of some of these methods is deferred to
the concrete subclasses.
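Before the document example, the idea can be sketched generically (the class and method names here are illustrative, not taken from the example that follows):

```python
from abc import ABC, abstractmethod

class AbstractAlgorithm(ABC):
    def template_method(self):
        # The invariant sequence of steps, defined once in the base class
        self.step_one()
        self.step_two()

    def step_one(self):
        print("common step")  # shared by all subclasses

    @abstractmethod
    def step_two(self):
        pass  # deferred to concrete subclasses

class ConcreteAlgorithm(AbstractAlgorithm):
    def step_two(self):
        print("specialized step")

ConcreteAlgorithm().template_method()
# common step
# specialized step
```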
Let's consider a scenario where we have a system that processes different types of documents.
Each document type has a similar processing flow: load the document, parse the document, and
save the document. However, the way each document type is parsed varies.
class XMLDocument:
    def load(self, file_name):
        print(f"Loading XML document from {file_name}")

    def parse(self):
        print("Parsing XML document")

    def save(self):
        print("Saving XML document")

class JSONDocument:
    def load(self, file_name):
        print(f"Loading JSON document from {file_name}")

    def parse(self):
        print("Parsing JSON document")

    def save(self):
        print("Saving JSON document")
📌 The above code has a lot of repetition. The load and save methods are almost identical for
both XMLDocument and JSONDocument .
📌 If we need to add another document type or change the processing flow, we'd have to modify
multiple classes.
from abc import ABC, abstractmethod

class Document(ABC):
    def process_document(self, file_name):
        # Template method: fixes the order of the processing steps
        self.load(file_name)
        self.parse()
        self.save()

    def load(self, file_name):
        print(f"Loading {self.get_document_type()} document from {file_name}")

    @abstractmethod
    def parse(self):
        pass

    def save(self):
        print(f"Saving {self.get_document_type()} document")

    @abstractmethod
    def get_document_type(self):
        pass

class XMLDocument(Document):
    def parse(self):
        print("Parsing XML document")

    def get_document_type(self):
        return "XML"

class JSONDocument(Document):
    def parse(self):
        print("Parsing JSON document")

    def get_document_type(self):
        return "JSON"
📌 We've introduced an abstract class Document which acts as the template. It has the common
methods load and save , and an abstract method parse which will be implemented by concrete
subclasses.
📌 The process_document method in the Document class defines the sequence of steps to
process a document. This is the template method.
📌 Concrete classes like XMLDocument and JSONDocument provide the specific implementation for
the parse method.
📌 By using the Template Design Pattern, we've reduced code duplication and made it easier to
add new document types or modify the processing flow.
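A short, self-contained driver shows the template method in action (the process_document signature, taking the file name to forward to load, is an assumption):

```python
from abc import ABC, abstractmethod

class Document(ABC):
    def process_document(self, file_name):
        # Template method: the step order is fixed here, once
        self.load(file_name)
        self.parse()
        self.save()

    def load(self, file_name):
        print(f"Loading {self.get_document_type()} document from {file_name}")

    @abstractmethod
    def parse(self):
        pass

    def save(self):
        print(f"Saving {self.get_document_type()} document")

    @abstractmethod
    def get_document_type(self):
        pass

class XMLDocument(Document):
    def parse(self):
        print("Parsing XML document")

    def get_document_type(self):
        return "XML"

XMLDocument().process_document("data.xml")
# Loading XML document from data.xml
# Parsing XML document
# Saving XML document
```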
In conclusion, the Template Design Pattern provides a clear structure that promotes code reuse
and flexibility. It allows us to define a series of steps in an algorithm and let subclasses implement
specific parts of the algorithm without changing its structure.
Let's delve into the details of how the refactored code, which
implements the "Template Design Pattern", addresses the
issues present in the original code.
📌 Issue of Repetition: In the original code, the methods load and save were repeated for both
XMLDocument and JSONDocument . This repetition is not just about duplicating lines of code; it's
about duplicating logic. If we had to change the way documents are loaded or saved, we would
need to make changes in multiple places.
Solution with Template Design Pattern: The refactored code encapsulates the common logic
within the Document abstract class. The load and save methods are defined once in this class,
and all concrete document classes inherit these methods. This means there's a single place to
modify the loading or saving logic, ensuring consistency and reducing maintenance effort.
📌 Flexibility in Algorithm Structure: In the original code, if we wanted to introduce a new step
in the document processing flow or change the order of steps, we would have to modify each
concrete document class.
Solution with Template Design Pattern: The refactored code introduces the process_document
method in the Document class, which defines the sequence of steps (or the algorithm's structure).
If we need to introduce a new step or change the order, we only have to modify this method in
one place. This centralizes the control over the algorithm's structure.
📌 Ease of Extensibility: If we wanted to introduce a new document type in the original code, we
would have to define the entire processing flow for that document, leading to more repetition.
Solution with Template Design Pattern: With the refactored code, adding support for a new
document type is as simple as creating a new subclass of Document and providing an
implementation for the abstract methods. The common steps are already defined in the parent
class, so there's no need to redefine them.
📌 Issue of Mixed Concerns: In the original code, each document class mixed the high-level
processing flow with the low-level details of each step.
Solution with Template Design Pattern: The refactored code decouples these concerns. The
Document class focuses on the high-level algorithm, defining the sequence of steps in
process_document . The concrete classes, like XMLDocument and JSONDocument , focus on the low-
level implementations, defining how specific steps like parse are carried out. This separation of
concerns makes the code more modular and easier to understand.
In essence, the Template Design Pattern in the refactored code promotes code reusability,
centralizes control over the algorithm's structure, enhances flexibility, and provides a clear
separation of concerns. All these benefits address the issues present in the original code, making
the system more maintainable and scalable.
📌 Real-life Use-Case: Let's consider a real-world scenario of a data processing system where raw
data needs to be loaded, processed, and then saved. The steps for loading and saving data might
be the same for different types of data, but the processing step might differ.
from abc import ABC, abstractmethod

class DataProcessor(ABC):
    def load_data(self):
        # This method contains the generic way to load data
        print("Loading data...")

    @abstractmethod
    def process_data(self):
        pass

    def save_data(self):
        # This method contains the generic way to save data
        print("Saving data...")

    def execute(self):
        # This is the template method
        self.load_data()
        self.process_data()
        self.save_data()

class ImageProcessor(DataProcessor):
    def process_data(self):
        print("Processing image data...")

class TextProcessor(DataProcessor):
    def process_data(self):
        print("Processing text data...")

# Client code
image_processor = ImageProcessor()
image_processor.execute()

text_processor = TextProcessor()
text_processor.execute()
The methods load_data() and save_data() have a default implementation, but the
process_data() method is abstract, meaning that every concrete subclass must provide its
own implementation for this method.
We then have two concrete subclasses: ImageProcessor and TextProcessor . Each of these
provides its own implementation of the process_data() method.
In the client code, we create instances of ImageProcessor and TextProcessor and call their
execute() methods. This demonstrates how the template method ensures the steps are
executed in the same order, but the processing step can vary based on the concrete class.
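True to this book's habit of checking outputs, we can capture stdout and assert the fixed ordering. The snippet repeats the classes from above so it runs standalone; redirect_stdout is only a capture helper here, not part of the pattern:

```python
import io
from abc import ABC, abstractmethod
from contextlib import redirect_stdout

class DataProcessor(ABC):
    def load_data(self):
        print("Loading data...")

    @abstractmethod
    def process_data(self):
        pass

    def save_data(self):
        print("Saving data...")

    def execute(self):
        # Template method: fixed order, variable middle step
        self.load_data()
        self.process_data()
        self.save_data()

class ImageProcessor(DataProcessor):
    def process_data(self):
        print("Processing image data...")

buffer = io.StringIO()
with redirect_stdout(buffer):
    ImageProcessor().execute()

lines = buffer.getvalue().splitlines()
print(lines)  # → ['Loading data...', 'Processing image data...', 'Saving data...']
```

No matter which subclass runs, the first and last lines never change — only the middle step varies.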
📌 The beauty of the Template Design Pattern is that it provides a clear separation between the
generic algorithm and the specific steps that can be customized by subclasses. This promotes
code reuse and flexibility.
```python
def execute(self):
    self.load_data()
    self.process_data()
    self.save_data()
```
In the above code, the execute() method is the template method that dictates the order of
operations. It ensures that data is first loaded, then processed, and finally saved.
```python
@abstractmethod
def process_data(self):
    pass
```
In the provided code, both ImageProcessor and TextProcessor subclasses provide their own
implementations of the process_data() method:
```python
def process_data(self):
    print("Processing image data...")
```

and

```python
def process_data(self):
    print("Processing text data...")
```
This showcases the principle of allowing subclasses to redefine certain steps of the algorithm
without altering its overall structure.
The base class likewise supplies default implementations for the steps that stay the same across subclasses:

```python
def load_data(self):
    print("Loading data...")
```

and

```python
def save_data(self):
    print("Saving data...")
```
In summary, the provided code example adheres to the principles of the Template Design Pattern
by defining a clear algorithm structure in the base class, allowing subclasses to customize specific
steps, and ensuring that the overall sequence of operations remains consistent across all
subclasses.
```python
from abc import ABC, abstractmethod

class ComputerBuilder(ABC):
    def select_components(self):
        # Generic method to select components
        print("Selecting basic components...")

    @abstractmethod
    def assemble_components(self):
        pass

    def install_software(self):
        # Generic method to install software
        print("Installing basic software...")

    @abstractmethod
    def perform_tests(self):
        pass

    def build(self):
        # This is the template method
        self.select_components()
        self.assemble_components()
        self.install_software()
        self.perform_tests()

class GamingComputerBuilder(ComputerBuilder):
    def assemble_components(self):
        print("Assembling components optimized for gaming...")

    def perform_tests(self):
        print("Performing high-end graphics and performance tests...")

class ServerComputerBuilder(ComputerBuilder):
    def assemble_components(self):
        print("Assembling components optimized for server operations...")

    def perform_tests(self):
        print("Performing load and stress tests...")

# Client code
gaming_computer = GamingComputerBuilder()
gaming_computer.build()

server_computer = ServerComputerBuilder()
server_computer.build()
```
The ComputerBuilder class is the abstract base class that defines the skeleton of the
computer-building process. The build() method is the template method that dictates the
order of operations: selecting components, assembling them, installing software, and
performing tests.
Let's see how the above code example adheres to the principles and requirements of the Template Design Pattern in Python.
📌 Algorithm Skeleton in Base Class: The ComputerBuilder class serves as the base class that
outlines the skeleton of the computer-building algorithm. This is evident in the build() method,
which lays out the sequence of steps to be followed.
```python
def build(self):
    self.select_components()
    self.assemble_components()
    self.install_software()
    self.perform_tests()
```
In this code segment, the build() method (our template method) dictates the order of
operations. It ensures that components are first selected, then assembled, followed by software
installation, and finally, tests are performed.
```python
@abstractmethod
def assemble_components(self):
    pass

@abstractmethod
def perform_tests(self):
    pass
```

The GamingComputerBuilder and ServerComputerBuilder subclasses then provide their own implementations of these steps:

```python
def assemble_components(self):
    print("Assembling components optimized for gaming...")
```

and

```python
def assemble_components(self):
    print("Assembling components optimized for server operations...")
```
This exemplifies the principle of enabling subclasses to redefine specific steps of the algorithm
without altering its overarching structure.
The base class also provides default implementations for the invariant steps:

```python
def select_components(self):
    print("Selecting basic components...")
```

and

```python
def install_software(self):
    print("Installing basic software...")
```
In conclusion, the provided code example adheres to the principles of the Template Design
Pattern by establishing a clear algorithm structure in the base class, allowing subclasses to
customize specific steps, and ensuring that the overall sequence of operations remains consistent
across all subclasses.
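A common refinement of the pattern — not part of the original example — is a hook: a step the base class gives an empty default, so subclasses may override it but are not forced to. The install_extras name below is invented for illustration:

```python
from abc import ABC, abstractmethod

class ComputerBuilder(ABC):
    @abstractmethod
    def assemble_components(self):
        pass

    def install_extras(self):
        # Hook: empty default, optional override
        pass

    def build(self):
        # Template method with an optional step in the middle
        self.assemble_components()
        self.install_extras()

class BareBonesBuilder(ComputerBuilder):
    def assemble_components(self):
        print("Assembling minimal components...")

class GamingBuilder(ComputerBuilder):
    def assemble_components(self):
        print("Assembling gaming components...")

    def install_extras(self):
        print("Installing RGB control software...")

BareBonesBuilder().build()  # hook silently skipped
GamingBuilder().build()     # hook overridden
```

Hooks keep the skeleton intact while letting only the subclasses that care about a step participate in it.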
Why did I use @abstractmethod in the above examples?
📌 The @abstractmethod is a decorator provided by the abc (Abstract Base Class) module in
Python. It's used to declare that a method is abstract, which means:
1. The method does not have a concrete implementation in the base class.
2. Any concrete (non-abstract) subclass must provide an implementation for this method.
By marking a method as abstract, we're signaling to the developer that this particular step of
the algorithm is intended to be overridden by subclasses to provide specific behavior.
📌 In essence, @abstractmethod serves a dual purpose:
1. It provides a clear contract for developers, indicating which methods they must implement in concrete subclasses.
2. It ensures the integrity of the Template Design Pattern by mandating that the customizable parts of the algorithm are indeed customized by the subclasses.
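Both guarantees can be verified directly: instantiating a subclass that leaves an abstract method unimplemented raises a TypeError at construction time (the exact message wording varies slightly across Python versions):

```python
from abc import ABC, abstractmethod

class DataProcessor(ABC):
    @abstractmethod
    def process_data(self):
        pass

class IncompleteProcessor(DataProcessor):
    # Deliberately does NOT implement process_data
    pass

try:
    IncompleteProcessor()
except TypeError as exc:
    error_message = str(exc)
    print(error_message)  # names both the class and the missing method
```

The failure happens at instantiation, not at call time, so the mistake surfaces early instead of deep inside the template method.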
Another Template Method example is an online order checkout, where only the payment step varies between subclasses:

```python
from abc import ABC, abstractmethod

class OnlineOrder(ABC):
    def checkout(self, items):
        # Template method (reconstructed): the client code below calls
        # checkout(), which fixes the order of the steps.
        self.items = items
        self.calculate_total()
        self.apply_discount()
        self.process_payment()

    def calculate_total(self):
        self.total = sum(self.items.values())
        print(f"Total amount: ${self.total}")

    def apply_discount(self):
        # For simplicity, let's apply a generic 10% discount
        self.total *= 0.9
        print(f"Discount applied. New total: ${self.total}")

    @abstractmethod
    def process_payment(self):
        pass

class CreditCardOrder(OnlineOrder):
    def process_payment(self):
        print("Processing payment through credit card...")

class PayPalOrder(OnlineOrder):
    def process_payment(self):
        print("Processing payment through PayPal...")

class CryptoOrder(OnlineOrder):
    def process_payment(self):
        print("Processing payment through cryptocurrency...")

# Client code
items = {'book': 20, 'pen': 2, 'laptop': 1000}

credit_card_order = CreditCardOrder()
credit_card_order.checkout(items)

paypal_order = PayPalOrder()
paypal_order.checkout(items)

crypto_order = CryptoOrder()
crypto_order.checkout(items)
```
The process_payment() method is abstract, meaning that concrete subclasses must provide
their own implementations. This method represents the step that can vary based on the
payment method chosen.
The CreditCardOrder , PayPalOrder , and CryptoOrder subclasses each provide their
specific implementations of the process_payment() method, tailored to their respective
payment methods.
In the client code, we create instances of the different order types and call their checkout()
methods. This demonstrates how the template method ensures the steps are executed in the
same order, but the payment processing step can vary based on the chosen payment
method.
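One caveat about the checkout example: self.total *= 0.9 does binary floating-point arithmetic, which can leave long fractional tails in a money amount. A hedged variant of the two computation steps using the standard-library decimal module (the DecimalTotalSteps and DemoOrder names are mine, for illustration only):

```python
from decimal import Decimal, ROUND_HALF_UP

class DecimalTotalSteps:
    """Money-safe versions of calculate_total and apply_discount (illustrative)."""

    def calculate_total(self):
        # Build Decimals from str() so no float imprecision sneaks in
        self.total = sum(Decimal(str(v)) for v in self.items.values())
        print(f"Total amount: ${self.total}")

    def apply_discount(self):
        discounted = self.total * Decimal("0.9")
        # Quantize to cents, rounding half-up as money usually is
        self.total = discounted.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
        print(f"Discount applied. New total: ${self.total}")

class DemoOrder(DecimalTotalSteps):
    def __init__(self, items):
        self.items = items

order = DemoOrder({'book': 20, 'pen': 2, 'laptop': 1000})
order.calculate_total()   # Total amount: $1022
order.apply_discount()    # Discount applied. New total: $919.80
```

Because these steps live in their own methods, swapping them in under the Template Method requires no change to the checkout() skeleton itself.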
- Repository Pattern in Python

The Repository Pattern is a design pattern used in software development to abstract away the complexities of accessing data from different data sources, be it databases, APIs, or even in-memory storage. It provides a clean interface for the rest of the application to access data without concerning itself with the underlying data access mechanism.
By doing so:
1. Decoupling: The main application logic remains decoupled from the underlying data source.
This means that if we change from one type of database to another, or from a database to an
API, the main application code doesn't need significant changes.
2. Testing: It facilitates easier testing because mock repositories can be used in place of real
ones. This way, unit tests won't have any external dependencies.
3. Consistency: The Repository Pattern enforces consistent access patterns which can lead to
improved maintainability and predictability.
4. Abstraction: The actual operations, like CRUD (Create, Read, Update, Delete) operations, are
abstracted behind a consistent interface. This means that the main application logic doesn't
need to know about SQL queries, API calls, or other data retrieval methods.
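Point 2 above is worth seeing in miniature. Because data access sits behind an interface, a test can hand the service a throwaway in-memory fake instead of a real database. All the names below (Repository, FakeRepository, GreetingService) are invented for this sketch:

```python
from abc import ABC, abstractmethod

class Repository(ABC):
    @abstractmethod
    def add(self, key, value):
        pass

    @abstractmethod
    def get(self, key):
        pass

class FakeRepository(Repository):
    """Test double: a dict pretending to be a database."""
    def __init__(self):
        self.store = {}

    def add(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

class GreetingService:
    def __init__(self, repo: Repository):
        self.repo = repo

    def register(self, name):
        self.repo.add(name, f"Hello, {name}!")
        return self.repo.get(name)

# The "unit test": no database, no network, no external state.
fake = FakeRepository()
service = GreetingService(fake)
result = service.register("Ada")
print(result)  # → Hello, Ada!
```

The service never learns it was handed a fake — which is exactly the decoupling the pattern promises.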
```python
from abc import ABC, abstractmethod
from typing import List, Union

# Repository interface (Book is a simple id/title/author domain class)
class BookRepository(ABC):
    @abstractmethod
    def add(self, book: Book):
        pass

    @abstractmethod
    def get(self, id: int) -> Union[Book, None]:
        pass

    @abstractmethod
    def list(self) -> List[Book]:
        pass

    @abstractmethod
    def update(self, book: Book):
        pass

    @abstractmethod
    def delete(self, id: int):
        pass
```
A concrete implementation backs this interface with a plain dict; its constructor, and the guard it uses when updating, look like this (the enclosing method context is assumed from the surrounding discussion):

```python
def __init__(self):
    # This is a pseudo-database for demonstration purposes.
    self._db = {}
```

```python
if book.id in self._db:
    self._db[book.id] = book
```
Here, the BookService only interacts with the abstract BookRepository. Even if we change the data source from SQL to NoSQL or an API in the future, we just need a new repository implementation. The service layer remains unchanged. This is a simplified example; in production-grade code there would be more intricate handling, error management, and optimizations.
This BookRepository class is an abstract base class (ABC) that provides a contract for what methods a book repository should have. By marking these methods with @abstractmethod, we ensure that any concrete implementation of the repository will provide these methods.
```python
class BookService:
    def __init__(self, repository: BookRepository):
        self.repository = repository
```
The BookService is initialized with an object that adheres to the BookRepository contract (i.e.,
has the methods defined by the abstract base class). This way, the service layer is decoupled from
the data access logic. It doesn't matter if the underlying implementation is SQL, NoSQL, or another
data source. As long as the data source adheres to the contract, the service layer remains
unchanged. This adheres to the Dependency Inversion Principle, a fundamental SOLID principle.
Any alternative backend is introduced simply by implementing the same interface:

```python
class SQLBookRepository(BookRepository):
    ...
```

Every implementation exposes the same signatures — add(book: Book), update(book: Book), delete(id: int), and so on. This ensures that data is accessed and manipulated in a consistent manner, regardless of the underlying data source.
The main application logic remains untouched, which reduces bugs, makes the codebase more
maintainable, and speeds up development when migrating or extending data sources.
In summary, the provided code embodies the principles of the Repository Pattern by abstracting
data operations, decoupling application logic from data access logic, ensuring consistent data
access patterns, and providing flexibility for future changes and testing.
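To make the "swap the backend" claim tangible, here is a sketch of a SQL-backed repository built on the standard-library sqlite3 module. The table layout, the SQLiteBookRepository name, and the method bodies are my assumptions for illustration — the book's SQLBookRepository body is not shown:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Book:
    # Dataclass used here for brevity; the chapter's Book class is equivalent
    id: int
    title: str
    author: str

class SQLiteBookRepository:
    def __init__(self, path=":memory:"):
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS books "
            "(id INTEGER PRIMARY KEY, title TEXT, author TEXT)"
        )

    def add(self, book: Book):
        self._conn.execute(
            "INSERT INTO books (id, title, author) VALUES (?, ?, ?)",
            (book.id, book.title, book.author),
        )
        self._conn.commit()

    def get(self, id: int):
        row = self._conn.execute(
            "SELECT id, title, author FROM books WHERE id = ?", (id,)
        ).fetchone()
        return Book(*row) if row else None

repo = SQLiteBookRepository()
repo.add(Book(1, "Harry Potter", "J.K. Rowling"))
print(repo.get(1))
```

From the service layer's point of view this class is interchangeable with the in-memory dict version — only the wiring at startup changes.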
Let's see an example WITHOUT and then WITH the "Repository Pattern in Python"
📌 Code Example Without Using Repository Pattern:
Imagine we're building a book management system that directly interacts with a database.
```python
class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class BookDB:
    def __init__(self):
        self._db = {}

    def add_book(self, book: Book):
        # Data access logic lives right next to the application logic
        self._db[book.id] = book

bookDB = BookDB()
book = Book(1, "Harry Potter", "J.K. Rowling")
bookDB.add_book(book)
```
1. Tightly Coupled: The application logic is tied directly to the BookDB class and its storage mechanism, so swapping the data source means rewriting the calling code.
2. Hard to Test: Since data access logic is directly embedded, unit testing becomes complex. We'd have to mock the data source every time we want to test the application logic.
3. Not Scalable: If we want to introduce new data sources or services, we would have to expand the BookDB class, leading to bloated classes that violate the Single Responsibility Principle.
4. Not Abstracted: Business logic and data access logic are mixed together. A better approach would keep them separate.
```python
from abc import ABC, abstractmethod

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

# Repository interface
class IBookRepository(ABC):
    @abstractmethod
    def add(self, book: Book):
        pass

    @abstractmethod
    def get(self, id: int) -> Book:
        pass

    @abstractmethod
    def list(self) -> list:
        pass

    @abstractmethod
    def update(self, book: Book):
        pass

    @abstractmethod
    def delete(self, id: int):
        pass
```
The concrete InMemoryBookRepository again keeps a dict as its store:

```python
def __init__(self):
    self._db = {}
```

and the client wires the pieces together through the service:

```python
bookRepo = InMemoryBookRepository()
bookService = BookService(bookRepo)

new_book = bookService.add_book("Harry Potter", "J.K. Rowling")
```
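The excerpt omits the bodies of InMemoryBookRepository and BookService. A hedged reconstruction under the IBookRepository contract (trimmed to three methods for brevity; the auto-assigned-ID scheme in add_book is my assumption), so the client code actually runs:

```python
from abc import ABC, abstractmethod

class Book:
    def __init__(self, id: int, title: str, author: str):
        self.id = id
        self.title = title
        self.author = author

class IBookRepository(ABC):
    @abstractmethod
    def add(self, book: Book): ...
    @abstractmethod
    def get(self, id: int) -> Book: ...
    @abstractmethod
    def list(self) -> list: ...

class InMemoryBookRepository(IBookRepository):
    def __init__(self):
        self._db = {}

    def add(self, book: Book):
        self._db[book.id] = book

    def get(self, id: int) -> Book:
        return self._db.get(id)

    def list(self) -> list:
        return list(self._db.values())

class BookService:
    def __init__(self, repository: IBookRepository):
        self.repository = repository

    def add_book(self, title: str, author: str) -> Book:
        # Assumption: the service assigns the next free integer ID
        new_id = len(self.repository.list()) + 1
        book = Book(new_id, title, author)
        self.repository.add(book)
        return book

bookRepo = InMemoryBookRepository()
bookService = BookService(bookRepo)
new_book = bookService.add_book("Harry Potter", "J.K. Rowling")
print(new_book.id, new_book.title)
```

Note that BookService only ever touches the IBookRepository abstraction — swapping in a SQL-backed implementation would leave it untouched.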
1. Decoupling: The service layer depends only on the IBookRepository abstraction, never on a concrete data source.
2. Flexibility: We can easily replace or add new data sources by implementing the IBookRepository interface.
3. Testability: Testing becomes easier as we can mock the repository interface to test the service layer.
4. Maintainability: If we need to change the data source or add functionalities, the codebase becomes easier to manage due to its modular structure.
📌 Separation of Concerns:

Original Code:
- The BookDB class was directly responsible for both handling the data (via a pseudo-database) and the operations associated with the books.

Refactored Code:
- We introduced an interface IBookRepository, which only defines the contract for data operations.
- The InMemoryBookRepository class implements this interface and is responsible purely for data operations.
- The BookService class becomes the primary interface for consumers and handles the business logic, relying on the repository for data access.

This separation means changes to the data layer don't necessarily impact the business logic and vice versa, ensuring modularity and readability.
📌 Flexibility of Data Sources:

Refactored Code:
- With the introduction of the IBookRepository interface, we can now have multiple implementations of data sources (e.g., SQL, NoSQL, APIs, etc.) without changing our service layer.
- This allows for flexibility in swapping or integrating new data sources.
📌 Improved Testability:
Original Code:
- Testing was complex, as it required mocking the data source every time.

Refactored Code:
- The separation of concerns allows us to mock the IBookRepository interface and test the BookService class in isolation.
- This ensures that our unit tests are fast, consistent, and not reliant on the state of an external database.
📌 Enhanced Scalability:
Original Code:
- Introducing new data sources or expanding functionalities would lead to bloating of the BookDB class.

Refactored Code:
- Due to the modular structure, adding new functionalities or data sources becomes more manageable.
- If a new data source needs to be added, we only need to create a new class that implements the IBookRepository interface, without touching the existing code.

This design ensures that each class changes only for one reason, thereby adhering to the Single Responsibility Principle, a fundamental SOLID principle.
📌 Enhanced Maintainability:
Original Code:
- Combining business logic with data access logic can make future changes challenging, especially as the system grows.

Refactored Code:
- Due to the separation of concerns and a modular approach, the maintainability of the codebase improves.
- Developers can quickly identify where changes are needed and can be confident that changes in one section won't inadvertently affect another.
In summary, the refactored code using the "Repository Pattern in Python" addresses the issues of
tight coupling, testability, scalability, and maintainability present in the original code. The pattern
provides a clean, modular, and flexible architecture that's beneficial for large-scale, production-
grade applications.
```python
class User:
    def __init__(self, id: int, name: str, role: str):
        self.id = id
        self.name = name
        self.role = role

class UserDB:
    def __init__(self):
        self._db = {}

    def add_user(self, user: User):
        self._db[user.id] = user

userDB = UserDB()
user = User(1, "Alice", "Engineer")
userDB.add_user(user)
```
1. Tightly Coupled: Application code talks straight to UserDB, binding it to one storage mechanism.
2. Hard to Test: Unit tests must work around the embedded data access logic.
3. Not Scalable: New data sources or features would bloat the UserDB class.
4. Business and Data Logic Conflation: There's a lack of clear division between these logic types.
```python
from abc import ABC, abstractmethod

class User:
    def __init__(self, id: int, name: str, role: str):
        self.id = id
        self.name = name
        self.role = role

# Repository interface
class IUserRepository(ABC):
    @abstractmethod
    def add(self, user: User):
        pass

    @abstractmethod
    def get(self, id: int) -> User:
        pass

    @abstractmethod
    def list(self) -> list:
        pass

    @abstractmethod
    def update(self, user: User):
        pass

    @abstractmethod
    def delete(self, id: int):
        pass
```
The concrete InMemoryUserRepository again backs the interface with a plain dict:

```python
def __init__(self):
    self._db = {}
```

and the client code wires everything through the service:

```python
userRepo = InMemoryUserRepository()
userService = UserService(userRepo)

new_user = userService.register_user("Bob", "Manager")
```
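As with the book example, the bodies of InMemoryUserRepository and UserService are not shown in the excerpt. A minimal reconstruction (the sequential-ID scheme in register_user is my assumption) that makes the client code runnable:

```python
from abc import ABC, abstractmethod
from itertools import count

class User:
    def __init__(self, id: int, name: str, role: str):
        self.id = id
        self.name = name
        self.role = role

class IUserRepository(ABC):
    @abstractmethod
    def add(self, user: User): ...
    @abstractmethod
    def get(self, id: int) -> User: ...

class InMemoryUserRepository(IUserRepository):
    def __init__(self):
        self._db = {}

    def add(self, user: User):
        self._db[user.id] = user

    def get(self, id: int) -> User:
        return self._db.get(id)

class UserService:
    def __init__(self, repository: IUserRepository):
        self.repository = repository
        self._ids = count(1)  # assumption: service hands out sequential IDs

    def register_user(self, name: str, role: str) -> User:
        user = User(next(self._ids), name, role)
        self.repository.add(user)
        return user

userRepo = InMemoryUserRepository()
userService = UserService(userRepo)
new_user = userService.register_user("Bob", "Manager")
print(new_user.id, new_user.name, new_user.role)
```

Again, UserService never names a concrete storage class — the repository arrives through its constructor.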
1. Decoupling: UserService depends only on the IUserRepository abstraction, not on any storage detail.
2. Data Source Independence: With IUserRepository, we can effortlessly introduce new data implementations without disturbing the business layer.
3. Ease of Testing: Mocking IUserRepository allows isolated testing of UserService .
4. Modularity: The structured code promotes more effortless maintainability and scalability.
5. Clear Role Definition: Each class and interface has a well-defined purpose, promoting clarity
and reducing confusion.
With the Repository Pattern, the code is cleaner, more maintainable, and extensible, aligning with
best practices found in mature, production-grade systems.
Refactored Code:
- The repository pattern introduces a clear separation between the business operations and the data storage mechanism.
- UserService manages the operations like user registration.
- The InMemoryUserRepository class deals exclusively with the in-memory storage operations.

Benefits:
- This separation allows developers to modify business logic without affecting data operations and vice versa. It promotes a modular approach, making the code easier to maintain and understand.
Refactored Code:
- The introduction of the IUserRepository interface abstracts the data storage details. Any new storage mechanism, whether SQL, NoSQL, or an API, can be incorporated by implementing this interface.

Benefits:
- This abstraction provides flexibility, ensuring that the system can seamlessly adapt to changes or expansions in data storage mechanisms without altering the business logic.
📌 Facilitated Testing:
Original Code:
- The original design made unit testing challenging due to its integrated data access logic.

Refactored Code:
- With the IUserRepository interface, we can mock the data operations, enabling the testing of the UserService class in isolation.

Benefits:
- Tests become more focused, faster, and less prone to external disruptions, leading to a more reliable codebase.
Refactored Code:
- The repository pattern ensures each component has a specific role. If a new data source or functionality needs to be added, the modular approach ensures minimal disruption to existing code.

Benefits:
- Scalability becomes more straightforward, and the risk of introducing errors during expansion is minimized. The system becomes more maintainable due to its structured and modular nature.

Refactored Code:
- Each class now has a clear responsibility. The InMemoryUserRepository handles data operations, and UserService handles business operations.

Benefits:
- This adherence to the Single Responsibility Principle, a core tenet of SOLID principles, ensures that the system remains robust. Each component has a clear purpose, reducing the likelihood of unintended side effects when making changes.
In summary, the refactored code employing the "Repository Pattern in Python" addresses the
original issues of tight coupling, testability, scalability, and mixed responsibilities. Adopting this
pattern leads to a more modular, maintainable, and resilient architecture suitable for large-scale,
professional applications.