
What is Sharding?

1. A technique to implement Horizontal Partitioning.

2. The fundamental idea of Sharding is that instead of having all the data sit on one DB instance, we split it up and introduce a Routing layer so that we can forward each request to the instance that actually contains the data.

Pros:
1. Scalability
2. Availability
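The routing-layer idea can be sketched in a few lines of Java. This is a minimal illustration, not a production router; the shard names and the modulo-hash strategy are assumptions for the example (real systems often use consistent hashing instead):

```java
import java.util.List;

public class ShardRouter {
    private final List<String> shards;   // hypothetical DB instance names

    public ShardRouter(List<String> shards) {
        this.shards = shards;
    }

    // The routing layer: hash the key and pick the instance that owns it.
    public String shardFor(String key) {
        int idx = Math.floorMod(key.hashCode(), shards.size());
        return shards.get(idx);
    }
}
```

Every request for the same key is forwarded to the same instance, which is what makes the partitioning transparent to the client.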
What is API Whitelisting?
API Whitelisting is a security measure where only specified clients, IP addresses, or
applications are allowed to access an API. This ensures that unauthorized or
untrusted entities cannot interact with the API, reducing the risk of malicious
activities.

# HTTP Methods
=> Every REST API operation should be mapped to an HTTP method.
GET --> To get resource/data from server
POST --> To insert/create record at server
PUT --> To update data at server
DELETE --> To delete data at server

# HTTP Status Codes


-> When a client sends a request to the server, the server processes that request and sends a response back to the client with a status code.
100 - 199 (1xx) ---> Information
200 - 299 (2xx) ---> Success (OK)
300 - 399 (3xx) ---> Redirection
400 - 499 (4xx) ---> Client Error
500 - 599 (5xx) ---> Server Error

200 OK
201 Created
400 Bad Request

401 Unauthorized
403 Forbidden
404 Not Found
405 Method Not Allowed
408 Request Timeout
413 Payload Too Large
429 Too Many Requests
500 Internal Server Error
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
508 Loop Detected

NOTE:
1. If we create an object of the parent class then we can access only the members of the parent class.

2. But if we create an object of the child class then we can access the members of both the parent class and the child class.
NOTE:
-> When we declare a class without extending any other class, the compiler will create the class by extending the Object class automatically.

-> But if we declare a class that extends some other class, the compiler won't add Object as its direct superclass; the chain still ends at Object through the parent, hence we call Object class Java's supermost class.

1. Why is Java Platform-Independent?
Java is platform-independent because it is compiled to a bytecode that can be run
on any device that has a Java Virtual Machine (JVM). So we can write a Java
program on one platform (such as Windows) and then run it on a different
platform (such as macOS or Linux) without making any changes to the code.
2. Why is Java Architecture-Independent?
The JVM is the key component that enables architecture independence in Java. It
acts as an intermediary between the bytecode and the machine's architecture. The
JVM is platform-specific, but it provides a consistent execution environment
regardless of the underlying hardware or operating system.

Java's architecture independence is achieved through the JVM, which translates platform-neutral bytecode into machine code specific to the underlying hardware. This allows Java programs to run on any machine, regardless of its architecture (e.g., x86, ARM, etc.), without requiring modification.

3. Why is Java not 100% Object-Oriented?


Java is not 100% object-oriented because of the existence of primitive data types
and the static keyword. Primitive data types (e.g., int, char, etc.) are not objects,
and the static keyword allows methods and variables to belong to the class rather
than an instance of the class, enabling their use without creating an object of the class.
4. main() method syntax:-
->public static void main(String[] args)= Correct
->public static void main(Integer[] args)=Correct syntax for method but incorrect
syntax for main() method.
->static public void main(String[] args)= Correct because we can change the
sequence of static and public.
->public void static main(String[] args)=Incorrect & compile-time error, because the method syntax is invalid: Java always requires the method name immediately after the return type.
->public static void args(String[] args) OR public static void xyz(String[] args) =
Correct syntax for method but incorrect syntax for main() method.
->public static void main(String[ ] args)= Correct
->public static void main(String [] args)=Correct
->public static void main(String []args)=Correct
->public static void main(String args() {})=Incorrect
->public static void main(String args)=Correct syntax for method but incorrect
syntax for main() method.
->public static void main(String[] xyz)=Correct
->public static void main(String... xyz)=Correct (varargs; there must be exactly 3 dots)
->protected/private static void main(String[] args)=Correct syntax for method but
incorrect syntax for main() method.
->final public static void main(String[] args)= Correct
->synchronized public static void main(String[] args)= Correct
->strictfp public static void main(String[] args)= Correct
->public void main(String[] args)= Correct syntax for method but incorrect syntax
for main() method.{Error: Main method is not static in class test}
->static void main(String[] args)=Correct syntax for method but incorrect syntax
for main() method.

5. Difference between Access Specifiers and Access Modifiers?

⦁ Access Specifiers: Control the visibility and scope of classes, methods, and variables. Examples include public, protected, default (no keyword), and private.

⦁ Access Modifiers: Provide additional characteristics and behavior to classes, methods, and variables. Examples include static, final, abstract, synchronized, volatile, and transient.
6. State the significance of public, private, protected, default modifiers
both singly and in combination and state the effect of package
relationships on declared items qualified by these modifiers.
public: Public class is visible in other packages, field is visible everywhere (class
must be public too)

private : Private variables or methods may be used only by an instance of the same
class that declares the variable or method, A private feature may only be accessed
by the class that owns the feature.

protected : Is available to all classes in the same package and also available to all
subclasses of the class that owns the protected feature. This access is provided
even to subclasses that reside in a different package from the class that owns the
protected feature.

default : What you get without any access modifier (i.e., no public, private, or protected keyword). It means the member is visible to everything within the same package.

7. Define Widening, Narrowing, Upcasting, Downcasting, Boxing, Unboxing.
# Type casting wrt primitive data types

⦁ Widening:- Widening means converting the value from lower data type into
higher data type.
⦁ Narrowing:- Narrowing means converting a higher data type value into a smaller data type; it requires an explicit cast.
# Type casting w.r.t reference types

⦁ Up casting:- Up casting means storing the child class object into the parent
class reference.

⦁ Down casting:- Down casting means storing the Parent class object into the
child class reference.
# Boxing: converting a primitive value into an object is called boxing. From Java 1.5 onwards this is done automatically by the compiler, hence it is called autoboxing.
# Unboxing: converting an object value into a primitive type is called unboxing. From Java 1.5 onwards this is done automatically by the compiler, hence it is called auto-unboxing.
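All six conversions can be shown in one short sketch (the method names and the Animal/Dog classes are invented for the example):

```java
public class CastingDemo {
    public static long widen(int v)      { return v; }        // widening: int -> long, implicit
    public static int  narrow(long v)    { return (int) v; }  // narrowing: long -> int, explicit cast required
    public static Integer box(int v)     { return v; }        // autoboxing: primitive -> wrapper object
    public static int  unbox(Integer v)  { return v; }        // auto-unboxing: wrapper object -> primitive

    static class Animal { }                                   // hypothetical classes for reference casts
    static class Dog extends Animal { }

    public static Animal upcast(Dog d)   { return d; }        // upcasting: child object in parent reference
    public static Dog downcast(Animal a) { return (Dog) a; }  // downcasting: explicit cast back to child
}
```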

8. Final, Super and this Keyword in Java.


Final keyword :-
⦁ final is a keyword or modifier which can be used at variables, methods &
classes.
⦁ If we declare a variable as final then we can’t modify value of the variable.
The variable acts like a constant. Final field must be initialized when it is
declared.
⦁ If we declare a method as final then we can't override that method
⦁ If we declare a class as final then we can't extend from that class. We cannot
inherit final class in Java.
Super keyword :-
⦁ Access Parent Class Members :- The super keyword allows a subclass to call
methods or access variables of its parent class.
⦁ Call Parent Class Constructor :- The super keyword is used to explicitly invoke
the constructor of the parent class. but it must be the 1st statement in the
constructor of a child class.
⦁ It can not be used inside static method.
This keyword :-
It is a special reference variable that is automatically created by the compiler. It stores the object's address and points to the current object running in the program. this() is used to call a constructor of the same class, but it must be the 1st statement in the constructor. Through this we can access non-static variables and methods too; it can't be used inside a static context.
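A short sketch tying the three keywords together (the class names and strings are invented for the example):

```java
public class KeywordDemo {
    static final int LIMIT = 10;          // final variable: acts like a constant

    static class Parent {
        String name = "parent";
        String greet() { return "hello from parent"; }
    }

    static class Child extends Parent {
        String name = "child";

        Child() {
            super();                      // call to the parent constructor: must be the 1st statement
        }

        String describe() {
            // super reaches the parent's members; this reaches the current object's.
            return super.greet() + " / " + this.name + " / " + super.name;
        }
    }
}
```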

9. Blocks in java:-
-> Block means some part or some piece of information or some piece of code
-> In java program we can write 2 types of blocks
1) instance block
2) static block
1. Instance Block:-
-> If you want to execute some piece of code when object is created then we can
go for instance block
-> Instance block will be executed before constructor execution
syntax:
{
// stmts
}
2. static Block:-
-> If you want to execute some piece of code when class is loaded into JVM then
we can go for static block
-> static block will execute before main ( ) method execution
syntax:
static
{
// stmts
}
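The execution order described above (static block once at class loading, instance block before each constructor body) can be verified with a small sketch that records which block ran when:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockOrderDemo {
    static final List<String> LOG = new ArrayList<>();

    static {
        LOG.add("static block");      // runs once, when the class is loaded into the JVM
    }

    {
        LOG.add("instance block");    // runs on every object creation, before the constructor body
    }

    public BlockOrderDemo() {
        LOG.add("constructor");
    }
}
```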
10. What is static control flow & instance control flow in java program ?
1. Static Control Flow:-
-> When class is loaded into JVM then static control flow will start
-> When we run java program, JVM will check for below static members & JVM will
allocate memory for them in below order
a) static variables
b) static methods
c) static blocks
-> Once memory allocation completed for static members then it will start
execution in below order

a) static blocks
b) static methods (if we call them; only the main() method is executed automatically by the JVM)
c) static variables
-> Static variables can be accessed directly in static blocks and static methods.
Note: If we want to access any instance method or instance variable in a static area then we should create an object, and only by using that object can we access them. We can't access them directly without an object.
2. Instance Control Flow:-
-> instance means Object
-> Instance control flow will begin when object is created for a class
-> When Object is created then memory will be allocated for
a) instance variables
b) instance methods
c) instance blocks
-> Once memory allocation completed then execution will happen in below order
a) instance block
b) constructor
c) instance methods (if we call)
Note: Static members can be accessed directly in instance areas because memory for static members is already allocated at the time of class loading.

11. Define Is-A and Has-A relationship


-> We have below two types of relationship
1) Is-A relationship
2) Has-A relationship

# Is-A relationship:- If a class extends another class then it is called an Is-A relationship.
-> Here we can access class1's information inside class2 directly, without creating an object of class1.
# Has-A relationship:- If a class contains an object of another class, then it is called a Has-A relationship.
-> Here we can access class1's information inside class2 only by using an object of class1.
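Both relationships in one sketch (Vehicle, Car and Engine are illustrative names):

```java
public class RelationDemo {
    static class Engine {
        String start() { return "engine started"; }
    }

    static class Vehicle {
        String describe() { return "vehicle"; }
    }

    // Is-A: Car extends Vehicle, so the parent's members are accessible directly.
    static class Car extends Vehicle {
        // Has-A: Car holds an Engine object and uses it only through that reference.
        private final Engine engine = new Engine();

        String startEngine() { return engine.start(); }
    }
}
```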
12. SOLID OOPs Principles
=> The main aim of the SOLID OOPs principles is to make our code more readable, maintainable and loosely coupled.
S-> Single Responsibility Principle
=> A class should have only one responsibility.
=> If we have to write code for two tasks, 1. Generate Excel Report and 2. Generate PDF Report, then we need to make two separate classes, one for PDF and one for Excel report generation.
=> The Single Responsibility Principle can be achieved using the abstract design pattern.
O-> Open/Closed Principle
=> Our code should be open for extension and closed for modification.
=> If we want to add any other load details then we don't need to modify the class logic; instead we override the method in a new class (using the extends keyword) and re-write the logic according to the new load details.
=> The Open/Closed Principle can be achieved using the Abstract Factory design pattern and the Strategy design pattern.

L-> Liskov Substitution Principle
=> LSP says that subtypes must be substitutable for their base type.
=> An object of a child class should be substitutable, as-is, into a variable of the parent class.
=> No change should be required in the codebase to accommodate a new child class; in other words, a child class should not need special treatment.
=> A child class should do exactly what the parent class expects.
Note: Inheritance might not always be the best way to achieve reusability.
Note: Use inheritance if and only if there is a strict Is-A relationship.

Example :- Let's talk about Credit Card Payment.

Suppose we have three types of credit card: MasterCard, VisaCard and RuPayCard. To accept payment through all of these cards, we make an abstract class CreditCardPayment with abstract methods like tapAndPay, swipeAndPay, onlineTransfer. Then all three card classes inherit from CreditCardPayment (e.g., MasterCardPayment extends CreditCardPayment), override all the methods and write the logic for that type of credit card.
But Visa and MasterCard can do international payments while RuPay cannot, and through a RuPay card we can do UPI payments but not with the MasterCard or Visa card.
=> So here CreditCardPayment does not have a strict Is-A relationship with RuPay, Visa and MasterCard.
=> This problem could be solved with logical checks, but that is not a good way.
=> To solve it properly, we create interfaces InternationalPayment and RuPayUpiPayment, then have RuPayCardPayment implement RuPayUpiPayment and have VisaCardPayment and MasterCardPayment implement InternationalPayment, e.g. MasterCardPayment extends CreditCardPayment implements InternationalPayment, and RuPayCardPayment extends CreditCardPayment implements RuPayUpiPayment. This way we achieve LSP.
Note: Previously in the main class we were creating objects of type CreditCardPayment; now we can also hold the objects through RuPayUpiPayment and InternationalPayment references.
Daily Use: Payment gateways follow LSP to allow seamless switching between different payment methods (credit cards, PayPal, bank transfers) without altering the core logic.
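The card example can be sketched like this (the method names and return strings are placeholders; the point is the interface split):

```java
public class LspDemo {
    // Behaviour every credit card really shares.
    abstract static class CreditCardPayment {
        abstract String tapAndPay();
    }

    // Capabilities split into interfaces so no card is forced to support what it can't.
    interface InternationalPayment { String payInternational(); }
    interface RuPayUpiPayment     { String payViaUpi(); }

    static class VisaCardPayment extends CreditCardPayment implements InternationalPayment {
        String tapAndPay() { return "visa: tap"; }
        public String payInternational() { return "visa: international"; }
    }

    static class RuPayCardPayment extends CreditCardPayment implements RuPayUpiPayment {
        String tapAndPay() { return "rupay: tap"; }
        public String payViaUpi() { return "rupay: upi"; }
    }
}
```

Any CreditCardPayment reference can hold either card for the shared behaviour, and the capability interfaces are used only where the extra behaviour is really needed.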
I-> Interface Segregation Principle
=> Don't force developers to implement unnecessary methods of an interface.
=> We need to break larger interfaces into smaller interfaces.
=> Never create a big interface with a lot of methods.
Ex:
=> We don't create one interface holding both methods, 1. Excel Report Service and 2. PDF Report Service; we create two interfaces, one per method.

D-> Dependency Inversion Principle

=> Our classes should not talk to implementation classes directly. Always code to interfaces and inject dependencies using setter/constructor injection to achieve loose coupling.

=> Spring DI is the best example of dependency inversion.
1. High-level modules (your business logic or important classes) should not depend
on low-level modules (helper classes or concrete implementations).
⦁ Both should depend on abstractions (e.g., interfaces or abstract classes).
2. Abstractions should not depend on details (implementations).
⦁ Details (concrete implementations) should depend on abstractions.
How to Achieve DIP?
⦁ Use interfaces or abstract classes to define contracts.
⦁ Inject dependencies (implementations) into classes via constructor injection,
setter injection, or dependency injection frameworks.

13. Object:-
-> Any real-world entity is called an Object
-> Objects exist physically
-> Objects will be created based on the Classes
-> Without having the class, we can't create object (class is mandatory to create
objects)
-> Object creation means allocating memory in JVM

-> 'new' keyword is used to create the objects
-> Objects will be created by JVM in the runtime
-> Objects will be created in heap area.
-> If an object is no longer used then the Garbage Collector will remove that object from the heap.
-> The Garbage Collector is responsible for memory clean-up activities in the JVM heap area.
-> The Garbage Collector removes unused objects from the heap.
-> The Garbage Collector is managed & controlled by the JVM only.
Note: Programmers don't have control over the Garbage Collector.

14. What is Object Class?


The Object class in Java is the root class from which all other classes implicitly
inherit. It is the top-most class in the Java class hierarchy, meaning every class in
Java is a descendant, directly or indirectly, of the Object class. This means that
every class you create in Java inherits the methods of the Object class.

15. In how many ways can we create an Object for a class?

1) using the new operator
2) using the newInstance() method
3) using the clone() method
16. Methods of Object Class?
1.protected Object clone()
⦁ Creates and returns a copy of this object.

2.boolean equals(Object obj)


⦁ Indicates whether some other object is "equal to" this one.

3.protected void finalize()


⦁ Called by the garbage collector on an object when garbage collection
determines that there are no more references to the object.

4.Class<?> getClass()
⦁ Returns the runtime class of this Object.

5.int hashCode()

⦁ Returns a hash code value for the object.

6.void notify()
⦁ Wakes up a single thread that is waiting on this object's monitor.

7.void notifyAll()
⦁ Wakes up all threads that are waiting on this object's monitor.

8.String toString()
⦁ Returns a string representation of the object.

9.void wait()
⦁ Causes the current thread to wait until another thread invokes the notify()
method or the notifyAll() method for this object.

17. Why do we use an Interface?


⦁ It is used to achieve total abstraction.
⦁ Since Java does not support multiple inheritance for classes, by using interfaces we can achieve a form of multiple inheritance.
⦁ It is also used to achieve loose coupling.

18. Rules for Defining an Interface

⦁ For an interface we cannot create any object directly, but we can create a reference variable.
⦁ Once an interface is implemented by any class then that class must provide
the implementation for all the abstract methods available in the particular
interface. This class is also called as implementation class or child class.
⦁ If our class does not provide an implementation for even one of those methods then our class must be declared as abstract.
⦁ We cannot create an object for abstract class or interface but we can create
an object only for implementation class.
⦁ Once an interface is created then any number of classes can implement that
interface
NOTE:
⦁ All the variables declared in interface are public static final by default
whether we specify or not.
⦁ All the methods declared in interface are public abstract by default whether
we specify or not.
⦁ public: all the variables and methods declared in interface are public so that
they can be accessible from any where.
⦁ static: variables declared in interface are by default static so that they can be
accessible directly by using the interface name.
⦁ final: the variables declared in interface are by default final it means they are
constant whose value cannot be changed.
⦁ abstract: all the methods declared in interface are abstract because they
don’t contain any method body
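A minimal sketch of these rules (Shape and Circle are invented names):

```java
public class InterfaceDemo {
    interface Shape {
        double PI = 3.14159;     // implicitly public static final
        double area();            // implicitly public abstract
    }

    // Implementation class: must implement every abstract method of the interface.
    static class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return PI * radius * radius; }  // must stay public
    }
}
```

Shape s = new Circle(2); is legal — the reference variable is of the interface type while the object belongs to the implementation class, and the constant PI is accessible directly through the interface name.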
19. When to use Abstract Methods & Abstract Class?
Abstract methods are usually declared where two or more subclasses are
expected to do a similar thing in different ways through different
implementations. These subclasses extend the same Abstract class and provide
different implementations for the abstract methods.

Abstract classes are used to define generic types of behaviours at the top of an
object-oriented programming class hierarchy, and use its subclasses to provide
implementation details of the abstract class. When we need a constructor, we have to go for an abstract class instead of an interface.

20. What is the difference between final, finalize() and finally?

final : It is a keyword which is used to declare final variables, final methods and final classes.
finalize() : It is a predefined method available in the Object class; it is called by the garbage collector before removing unused objects from the heap area.
finally : It is a block we use to execute clean-up activities in exception handling.
21. What is the difference between preemptive scheduling and time
slicing?
Under preemptive scheduling, the highest priority task executes until it enters the
waiting or dead states or a higher priority task comes into existence.

Under time slicing, a task executes for a predefined slice of time and then re-enters the pool of ready tasks. The scheduler then determines which task should execute next, based on priority and other factors.

22. Explain all about String

# String Manipulations
The String class provides several methods to perform operations on strings.

#1) Length:- The length is the number of characters that a given string contains.
String class has a length() method that gives the number of characters in a String.

#2) concatenation:- Java uses the '+' operator for concatenating two or more strings; concat() is an inbuilt String method that does the same.

#3) String toCharArray():- This method is used to convert all the characters of a
string into a Character Array. This is widely used in the String manipulation
programs.

#4) String charAt():- This method is used to retrieve a single character from a given
String.

#5) Java String compareTo():- This method is used to compare two Strings. The
comparison is based on alphabetical order. In general terms, a String is less than
the other if it comes before the other in the dictionary

#6) String contains():- This method is used to determine whether a substring is a part of the main String or not. The return type is boolean.

#7) Java String split():- As the name suggests, the split() method is used to split the given String into multiple substrings separated by the given delimiter ("", " ", "\\", etc.).

#8) Java String indexOf():- This method is used to perform a search operation for a
specific character or a substring on the main String. There is one more method
known as lastIndexOf() which is also commonly used.
indexOf() is used to search for the first occurrence of the character.
lastIndexOf() is used to search for the last occurrence of the character

#9) Java String toString():- The toString() method in Java is used to provide a
string representation of an object.This method returns the String equivalent of
the object that invokes it. This method does not have any parameters.

#10) String replace ():- The replace() method is used to replace the character with
the new characters in a String

#11) substring():- The substring() method is used to return a substring of the main String by specifying the starting index and the ending index.
public class StringManipulation {

public static void main(String[] args) {


String str = "Hello, World! Welcome to the world of Java.";

// 1. Length of the string


int length = str.length();
System.out.println("Length of the string: " + length);
// Length of the string: 43

// 2. Character at a specific position


char charAt = str.charAt(7);
System.out.println("Character at position 7: " + charAt);
//Character at position 7: W

// 3. Substring
String substring = str.substring(4, 13);
System.out.println("Substring from position 4 to 13: " + substring);
//Substring from position 4 to 13: o, World!

// 4. Index of a character or substring


int indexOfChar = str.indexOf('W');
int indexOfSubstring = str.indexOf("world");
System.out.println("Index of character 'W': " + indexOfChar);
//Index of character 'W': 7
System.out.println("Index of substring 'world': " + indexOfSubstring);
//Index of substring 'world': 29
// 5. Last index of a character or substring
int lastIndexOfChar = str.lastIndexOf('o');
int lastIndexOfSubstring = str.lastIndexOf("world");
System.out.println("Last index of character 'o': " + lastIndexOfChar);
//Last index of character 'o': 35
System.out.println("Last index of substring 'world': " +
lastIndexOfSubstring);
//Last index of substring 'world': 29

// 6. Replace characters or substrings


String replacedString = str.replace("World", "Universe");
System.out.println("String after replacement: " + replacedString);
//String after replacement: Hello, Universe! Welcome to the world of Java.

// 7. Convert to upper case and lower case


String upperCase = str.toUpperCase();
String lowerCase = str.toLowerCase();
System.out.println("String in upper case: " + upperCase);
//String in upper case: HELLO, WORLD! WELCOME TO THE WORLD OF JAVA.
System.out.println("String in lower case: " + lowerCase);
//String in lower case: hello, world! welcome to the world of java.

// 8. Trim leading and trailing spaces


String strWithSpaces = " Hello, World! ";
String trimmedString = strWithSpaces.trim();
System.out.println("String after trimming spaces: '" + trimmedString +
"'");
//String after trimming spaces: 'Hello, World!'

// 9. Split the string into an array


String[] words = str.split(" ");
System.out.println("Words in the string:");
//Words in the string:
for (String word : words) {
System.out.println(word);
/*Hello,
World!
Welcome
to
the
world
of
Java.*/
}

// 10. Concatenate strings


String concatenatedString = str.concat(" Enjoy your coding!");
System.out.println("Concatenated string: " + concatenatedString);
//Concatenated string: Hello, World! Welcome to the world of Java. Enjoy your coding!

// 11. Check if string contains a substring


boolean containsSubstring = str.contains("Java");
System.out.println("Does the string contain 'Java'? " +
containsSubstring);
//Does the string contain 'Java'? true

// 12. Check if string starts with or ends with a substring


boolean startsWith = str.startsWith("Hello");
boolean endsWith = str.endsWith("Java.");
System.out.println("Does the string start with 'Hello'? " + startsWith);
//Does the string start with 'Hello'? true
System.out.println("Does the string end with 'Java.'? " + endsWith);
//Does the string end with 'Java.'? true
}
}

23. StringBuffer Class


-> StringBuffer class is used to create a mutable string object. It means, it can be
changed after it is created.

-> It is similar to the String class in that both are used to create strings, but a StringBuffer object can be changed after creation.

-> It is also thread-safe, i.e. multiple threads cannot modify it simultaneously.

# StringBuffer class methods


#1) append():- This method will concatenate the string representation of any type
of data to the end of the StringBuffer object.

#2) insert() :- This method inserts one string into another

#3) reverse():- This method reverses the characters within a StringBuffer object

#4) replace():- This method replaces the string from specified start index to the
end index

#5) capacity():- This method returns the current capacity of StringBuffer object.

24. StringBuilder class


-> StringBuilder is identical to StringBuffer except for one important difference
that it is not synchronized, which means it is not thread safe.

NOTE:-
⦁ When we want a mutable String without thread-safety then StringBuilder
should be used.
⦁ When we want a mutable String with thread-safety then StringBuffer should
be used
⦁ When we want an Immutable object then String should be used.
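A quick sketch of in-place mutation with StringBuilder (the same calls exist on StringBuffer, which just adds synchronization):

```java
public class MutableStringDemo {
    public static String build() {
        StringBuilder sb = new StringBuilder("Hello");
        sb.append(" World");            // modifies the same object, no new String per call
        sb.insert(5, ",");              // "Hello, World"
        sb.replace(7, 12, "Java");      // "Hello, Java"
        return sb.toString();
    }
}
```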

25. How can we create an Immutable Class?

1. First create a class and declare it as final.
2. Then make all class fields private and final to prevent them from being modified.
3. Set the values of the properties using the constructor only.
4. Finally, do not provide any setters for these properties.
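Applying the four steps (the class name is illustrative):

```java
public final class ImmutablePoint {        // 1. final class: cannot be subclassed
    private final int x;                    // 2. private final fields
    private final int y;

    public ImmutablePoint(int x, int y) {   // 3. state is set only through the constructor
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }         // 4. getters only, no setters
    public int getY() { return y; }
}
```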

26. Difference between Comparator and Comparable


1. Comparable :-
-> Comparable is a predefined interface available in java.lang package
-> Comparable interface having compareTo ( Object obj ) method
-> The compareTo() method is used to compare the current object with another object and returns an int value:
if (obj1 > obj2) ----> returns a +ve number
if (obj1 < obj2) ----> returns a -ve number
if (obj1 == obj2) ----> returns zero (0)
Note: The Comparable interface allows us to sort the data based on only one value. If we want to change our sorting technique then we need to modify the class that implements Comparable. Modifying the code every time is not recommended.
2. Comparator
=> Comparator is a predefined interface available in java.util package
=> Comparator interface having compare(Object obj1, Object obj2) method
=> Comparator is external to the element type we are comparing; it is a separate class.
=> We can create multiple separate classes (that implement Comparator) to compare by different members.
=> Each such class overrides compare() and provides the comparison logic.
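Both approaches side by side (Employee is a made-up element type for the sketch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    static class Employee implements Comparable<Employee> {
        final String name;
        final int salary;
        Employee(String name, int salary) { this.name = name; this.salary = salary; }

        // Comparable: the one natural ordering lives inside the class (here: by salary).
        public int compareTo(Employee other) {
            return Integer.compare(this.salary, other.salary);
        }
    }

    // Comparator: an external, separate ordering; we can define as many as we need.
    static final Comparator<Employee> BY_NAME = Comparator.comparing(e -> e.name);

    // Sorts a copy by the natural (salary) order and returns the names.
    public static List<String> namesSortedBySalary(List<Employee> input) {
        List<Employee> copy = new ArrayList<>(input);
        Collections.sort(copy);                    // uses compareTo
        List<String> names = new ArrayList<>();
        for (Employee e : copy) names.add(e.name);
        return names;
    }
}
```

Adding another ordering (say, by name) needs no change to Employee itself — only a new Comparator.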

27. What is the difference between "==" and equals()?


# '==' Operator :-
Purpose: The == operator is used to compare references, i.e., it checks if two
references point to the same memory location.

Usage:
⦁ For primitive types (e.g., int, float, char), == compares the actual values.
⦁ For objects, == compares the memory addresses (references) to determine if
they refer to the same object.

# .equals() Method
Purpose: The .equals() method is used to compare the contents of two objects to
check if they are "equal" in terms of their data, rather than their memory
references.

Usage:
1. For most classes, the .equals() method needs to be overridden to define what
"equal" means for objects of that class. For example:

⦁ In String, .equals() checks if the characters of the strings are the same in
sequence and length.

⦁ In custom classes, you override .equals() to compare the relevant fields of


the objects.

2. By default, the .equals() method in the Object class behaves the same as ==,
which checks for reference equality (i.e., whether the two references point to
the same object in memory).
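The difference in one sketch:

```java
public class EqualityDemo {
    // Returns {referenceEqual, contentEqual} for two distinct String objects
    // that hold the same characters.
    public static boolean[] compare() {
        String a = new String("java");   // explicitly create two separate objects
        String b = new String("java");
        return new boolean[] { a == b, a.equals(b) };
    }
}
```

a == b is false because the two references point to different objects in memory; a.equals(b) is true because String overrides equals() to compare content.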

28. What is an Exception?

-> An unexpected and unwanted situation during program execution is called an Exception.
-> An Exception disturbs the normal flow of the program execution.
-> When an Exception occurs and is not handled, the program is terminated abnormally.
Q) What is the difference between Exception and Error?
-> Exceptions can be handled, whereas Errors can't be handled.
# Exception Types :-
-> Exceptions are divided into 2 types
1. Checked Exceptions ( Compile-time Exception): These are exceptions that are
identified at compile-time but occur at runtime. The compiler ensures that these
exceptions are either handled using a try-catch block or declared in the throws
clause of the method.
Examples: IOException, FileNotFoundException, SQLException, etc.
2. Unchecked Exceptions ( Runtime Exception): These are exceptions that occur at
runtime and are not checked at compile-time. The compiler does not force you to
handle or declare them.
Examples: NullPointerException, ArithmeticException, etc

29. Exception Handling :-


-> Java provided 5 keywords to handle exceptions
1) try
2) catch
3) finally
4) throws
5) throw
try : it is used to keep our risky code
catch : it is used to catch the exception that occurred in the try block
finally : it is used to execute clean-up activities
throws : it is used to hand over checked exceptions to the caller method / JVM
Note: throws is used to delegate checked exceptions rather than handle them locally
throw : it is used to explicitly create and throw an exception

1. try block:-
-> It is used to keep risky code
syntax:
try {
// stmts
}
Note: We can't write only a try block; a try block requires a catch or a finally block (it can have both also).
try with catch : valid combination
try with multiple catch blocks : valid combination
try with finally : valid combination
try with catch & finally : valid combination
only try block : invalid
only catch block : invalid
only finally block : invalid
2. catch :-
-> catch block is used to catch the exception which occurred in try block
-> To write catch block , try block is mandatory
-> One try block can contain multiple catch blocks also
syntax:
try {
// logic
} catch ( Exception e ){
// logic to catch exception info
}
Note: The catch block executes only if an exception occurs in the try block;
otherwise the catch block is skipped.
Note: Catch blocks should be ordered from child to parent.
NOTE:
⦁ When we write multiple catch blocks, if the exceptions have no IS-A
relation then the catch blocks can appear in any order; otherwise we must
order them child class first, followed by the parent class.
⦁ We cannot write two catch blocks that catch the same exception type.
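The child-to-parent ordering can be seen in a short sketch: the most specific exception type comes first, and swapping the order would not compile.

```java
public class CatchOrderDemo {
    public static void main(String[] args) {
        try {
            String s = null;
            System.out.println(s.length());     // throws NullPointerException
        } catch (NullPointerException e) {      // child class first
            System.out.println("NPE handled");
        } catch (RuntimeException e) {          // then its parent
            System.out.println("RuntimeException handled");
        } catch (Exception e) {                 // most general last
            System.out.println("Exception handled");
        }
        // Reversing this order (Exception first) is a compile error:
        // "exception ... has already been caught"
    }
}
```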

3. finally block :-
-> It is used to perform resource clean up activities
Ex: file close, db connection close etc....
-> finally block will execute always ( irrespective of the exception )
try with finally : valid combination
try with catch and finally : valid combination
catch with finally : invalid combination
only finally : invalid combination

30. Example Of User Define Exception:-


Let's take a scenario: ResourceNotFoundException, used to throw an exception
when a particular resource is not found.
First, I will create a custom exception class named ResourceNotFoundException
which extends RuntimeException.
public class ResourceNotFoundException extends RuntimeException {
    // constructor
    public ResourceNotFoundException(String message) {
        super(message);
    }
}
To handle this exception, I will create a custom exception handler class called
GlobalExceptionHandler, annotated with @ControllerAdvice. In this class we have a
method, say handleResourceNotFoundException(), annotated with the
@ExceptionHandler annotation. This method acts like a catch block:
when you make a request to any URL (/api/resource/{id}) with an ID that
doesn't exist, the ResourceNotFoundException will be thrown, and the
handleResourceNotFoundException() method will handle it and return a custom
message, which is given back as the response to Postman or any other client,
with the HTTP status code 404 NOT_FOUND.
@ControllerAdvice
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<String> handleResourceNotFoundException(ResourceNotFoundException ex) {
        return new ResponseEntity<>(ex.getMessage(), HttpStatus.NOT_FOUND);
    }
}
Extending ResponseEntityExceptionHandler is optional. Use it if you need
advanced or consistent handling for predefined exceptions (like
MethodArgumentNotValidException, HttpRequestMethodNotSupportedException)
provided by Spring. If your focus is only on custom exceptions, you can achieve
that without extending it.
GlobalExceptionHandler is a custom class that can handle exceptions globally and
is more flexible in terms of what kind of responses it can generate.
31. What is the purpose of User Defined Exceptions ?
# Purpose of User-Defined Exceptions
⦁ a. Custom Error Handling: They allow developers to handle specific error
conditions that are not covered by the standard Java exceptions. For
example, an application might need to handle a custom business rule
violation or specific domain-related errors.

⦁ b. Readability and Maintainability: By defining custom exceptions, the code


becomes more readable and maintainable. It is easier to understand the
context of the error when the exception is explicitly named according to the
problem it represents.

⦁ c. Better Abstraction: User-defined exceptions provide a higher level of


abstraction by hiding the implementation details of error handling from the
client code.

⦁ d. Enhanced Debugging: Custom exceptions can include additional


information relevant to the error, making it easier to debug and diagnose
issues.

32. Java try with Resource Statement


⦁ -> This feature adds another way to do exception handling with resource
management. It is also referred to as automatic resource management. It closes
resources automatically by using the AutoCloseable interface.
⦁ -> A resource can be anything like a file, connection, etc., and we don't need to
close these explicitly; the JVM will do this automatically.
⦁ -> Suppose we run a JDBC program to connect to the database: normally we have
to create a connection and close it at the end of the task as well. But with
try-with-resources we don't need to close the connection; the JVM will do this
automatically by using the AutoCloseable interface.
public class DatabaseExample {
    public static void main(String[] args) {
        String url = "jdbc:mysql://localhost:3306/mydatabase";
        String user = "root";
        String password = "password";

        // Using try-with-resources to ensure Connection, Statement,
        // and ResultSet are closed automatically
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM mytable")) {
            while (rs.next()) {
                int id = rs.getInt("id");
                String name = rs.getString("name");
                System.out.println("ID: " + id + ", Name: " + name);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

# Points to Remember
⦁ A resource is an object in a program that must be closed after the program
has finished with it.
⦁ Any object that implements java.lang.AutoCloseable or java.io.Closeable can
be declared in the try statement.
⦁ All the resources declared in the try-with-resources statement are closed
automatically when the try block exits. There is no need to close them explicitly.
⦁ We can declare more than one resource in the try statement.
⦁ In a try-with-resources statement, any catch or finally block runs after the
declared resources have been closed.

33. If I write return at the end of the try block, will the finally block still
execute?
Yes, even if you write return as the last statement in the try block and no exception
occurs, the finally block will still execute. The finally block runs first, and only
then does control return to the caller.

34. If I write System.exit(0); at the end of the try block, will the finally
block still execute?
No. In this case the finally block will not execute, because System.exit(0);
terminates the JVM immediately: control goes out of the program, and thus finally
never executes.
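Questions 33 and 34 can be verified with a small sketch (method names are illustrative). Note how the finally message prints before the returned value is seen by the caller:

```java
public class FinallyDemo {
    static String withReturn() {
        try {
            return "from try";
        } finally {
            // runs even though the try block ends with return
            System.out.println("finally ran before return");
        }
    }

    public static void main(String[] args) {
        System.out.println(withReturn());
        // If the try block called System.exit(0) instead, the JVM would halt
        // immediately and the finally block would be skipped.
    }
}
```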

35. How to catch multiple exceptions using single catch block ?


You use the pipe ('|') character to separate different exception types in the catch
block. Here's the general syntax:

try {
// Code that might throw exceptions
} catch (ExceptionType1 | ExceptionType2 | ExceptionType3 e) {
// Handle the exceptions
}

36. OOPs Concept :-


36.1. Encapsulation:-
-> Encapsulation is used to combine our variables & methods into a single unit
-> Encapsulation provides data hiding
-> We can achieve encapsulation using classes

class Demo {
//variables
// methods
}
36.2. Abstraction
-> Abstraction means hiding unnecessary details and providing only the required data
-> We can achieve Abstraction using interfaces & abstract classes
Ex : we do not bother about how a laptop works internally,
or about how a car engine starts internally

36.3. Polymorphism
-> If an object exhibits multiple behaviours based on the situation, then it is
called Polymorphism.
Ex 1 : in the scenario below, the + symbol has 2 different behaviours
10 + 20 ===> 30 (here + is adding)
"hi" + "hello" ==> hihello (here + is concatenating)
-> Polymorphism is divided into 2 types
1) Static polymorphism / Compile-time Polymorphism
Ex: Overloading
Static: The binding of the method call to the method is fixed at the time the
program is compiled. The compiler knows exactly which method to invoke based
on the parameters.
Compile-time: The method resolution occurs when the code is compiled, not when
the program is executed, hence making it a compile-time decision.

2) Dynamic polymorphism / Run-time Polymorphism


Ex: Overriding
Dynamic: The method that gets invoked is chosen dynamically at runtime, based
on the actual object type that the reference variable points to, not based on the
type of the reference.

Run-time: The decision of which overridden method to call is made at runtime, not
at compile time.

i. Method Overloading:- The process of writing more than one method with the same
name but different parameters is called Method Overloading.
=> When methods perform the same operation we should give them the same name;
this improves code readability.
Ex:
substring (int start), substring(int start, int end)
void wait(), void wait(long timeout), void wait(long timeout, int nanos)
=> In Method Overloading scenario, compiler will decide which method should be
called.

# In method overloading, while writing the method signature we have to follow
these 3 rules
⦁ Method name must be same for all methods
⦁ List of parameters must be different like different type of parameters,
different number of parameters, different order of parameters.
⦁ Return type is not considered in method overloading; it means we never
decide method overloading with return type
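A minimal sketch of overloading (method names and values are illustrative); the compiler picks each method from the argument list at compile time:

```java
public class OverloadDemo {
    // same name, different parameter lists — resolved at compile time
    static int add(int a, int b) { return a + b; }
    static double add(double a, double b) { return a + b; }
    static int add(int a, int b, int c) { return a + b + c; }

    public static void main(String[] args) {
        System.out.println(add(10, 20));        // calls add(int, int) -> 30
        System.out.println(add(1.5, 2.5));      // calls add(double, double) -> 4.0
        System.out.println(add(1, 2, 3));       // calls add(int, int, int) -> 6
    }
}
```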
ii. Method Overriding:- The process of writing same methods in Parent class &
Child class is called as Method Overriding.
Note: When we don't want to execute Parent method implementation, then we
can write our own implementation in child class using method Overriding.

Example 1 : The Object class equals ( ) method compares the addresses of the
objects whereas the String class equals ( ) method compares the content of the
objects. Here the String class overrides the equals ( ) method.

Example 2 : doFilter() method of OncePerRequestFilter Class.


# While overriding a method and writing the method signature, we must follow
these rules.
⦁ Method name must be same
⦁ List of parameters must be same
⦁ Return type must be same
⦁ Private, final and static methods cannot be overridden.
⦁ There must be an IS-A relationship between classes (inheritance).
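The rules above can be sketched with a parent reference pointing at a child object (class names are illustrative); the JVM picks the child's method at runtime:

```java
public class OverrideDemo {
    static class Payment {
        void pay() { System.out.println("Generic payment"); }
    }

    static class UpiPayment extends Payment {   // IS-A relationship
        @Override
        void pay() { System.out.println("UPI payment"); }  // child's own logic
    }

    public static void main(String[] args) {
        Payment p = new UpiPayment();   // parent reference, child object
        p.pay();                        // runtime dispatch: prints "UPI payment"
    }
}
```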
What is Method Hiding ?
It occurs when a static method in a subclass has the same name, return type, and
parameters as a static method in the superclass.
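Method hiding in a short sketch (class names are illustrative): unlike overriding, static calls bind to the reference type at compile time, not the object type.

```java
public class HidingDemo {
    static class Parent {
        static void greet() { System.out.println("Parent.greet"); }
    }
    static class Child extends Parent {
        static void greet() { System.out.println("Child.greet"); }  // hides, not overrides
    }

    public static void main(String[] args) {
        Parent p = new Child();
        // static calls bind to the REFERENCE type at compile time
        p.greet();          // prints "Parent.greet", not "Child.greet"
        Child.greet();      // prints "Child.greet"
    }
}
```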

36.4. Inheritance
-> Extending the properties of one class into another class is called Inheritance
-> The main aim of inheritance is code re-usability
Ex: a child inherits the properties of its parent
Note: Whenever we create a child class object, the parent class zero-param
constructor executes first and then the child class constructor. The child should
be able to access parent properties, hence the parent constructor runs first to
initialize the parent class properties.
Note: In Java, one child can't inherit properties from two parents at a time

37. Why should we go for Method Overriding?


Whenever a child class doesn't want to use the definition written by the parent
class and wants its own logic, we use method overriding: we override the same
method with a new definition inside the child class.
NOTE: Static methods cannot be overridden because a static method is bound to
the class whereas an instance method is bound to the object.

38. Why should we go for Method Overloading?


When we want flexibility in our application, like one method name performing
several related operations, we can use method overloading.

39. When to declare variable as static or non-static?


-> If we want to store different value based on object then use instance variable
-> If we want to store same value for all objects then use static variable

40. Java 8 Feature


1) Interface changes
1.1 ) Default Methods
1.2 ) Static Methods
2) Lambda Expressions
3) Functional Interfaces (@FunctionalInterface)

3.1 ) Predicate & BiPredicate
3.2 ) Consumer & BiConsumer
3.3 ) Supplier
3.4 ) Function & BiFunction
4) forEach ( ) method
5) Optional class (to avoid null pointer exceptions)
6) Date & Time API
7) ****** Stream API ********
8) Method References & Constructor References
9) Spliterator
10) StringJoiner
40.1) Interface changes
1. An interface can have concrete methods from 1.8v
2. An interface concrete method should be default or static
3. Interface default methods can be overridden in impl classes
4. Interface static methods can't be overridden in impl classes
5. We can write multiple default & static methods in an interface
6. Default & static methods were introduced to provide backward compatibility

40.2) Lambda Expressions


-> Java is called an Object-Oriented Programming language. Everything is
represented using classes and objects.
-> From 1.8v onwards Java is also called a Functional Programming Language.
-> In an OOP language, classes & objects are the main entities. We need to write
methods inside a class only.
-> Functional programming means everything is represented in the form of
functions. Functions can exist outside of a class, can be stored in a
reference variable, and can be passed as parameters to other methods.
-> Lambda Expressions were introduced in Java to enable functional programming.
What is Lambda
-> Lambda is an anonymous function
- No Name
- No Modifier

- No Return Type
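A lambda in a minimal sketch (the Calculator interface is illustrative): no name, no modifier, no declared return type, assigned straight to a functional-interface reference.

```java
public class LambdaDemo {
    @FunctionalInterface
    interface Calculator {
        int calculate(int a, int b);   // single abstract method
    }

    public static void main(String[] args) {
        // lambda: no name, no modifier, no declared return type
        Calculator add = (a, b) -> a + b;
        Calculator mul = (a, b) -> a * b;
        System.out.println(add.calculate(10, 20));  // 30
        System.out.println(mul.calculate(10, 20));  // 200
    }
}
```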

40.3) Functional Interfaces


-> The interface which contains only one abstract method is called as Functional
Interface
-> Functional Interfaces are used to invoke Lambda expressions
-> Below are some predefined functional interfaces
Runnable ------------> run ( ) method
Callable ----------> call ( ) method
Comparable -------> compareTo ( )
-> To represent one interface as Functional Interface we will use
@FunctionalInterface annotation.
Note: When we write @FunctionalInterface the compiler will verify that the
interface contains exactly one abstract method.
-> In Java 8 several predefined Functional interfaces got introduced they are
1) Predicate & BiPredicate
2) Consumer & BiConsumer
3) Supplier
4) Function & BiFunction

Predicate ------> takes inputs ----> returns true or false ===> test ( )
Supplier -----> will not take any input---> returns output ===> get ( )
Consumer ----> will take input ----> will not return anything ===> accept ( )
Function -----> will take input ---> will return output ===> apply ( )

1. Predicate
-> It is predefined Functional interface
-> It is used to check a condition and returns a true or false value
-> The Predicate interface has only one abstract method, i.e. test (T t)
2. Supplier Functional Interface
-> Supplier is a predefined functional interface introduced in java 1.8v
-> It contains only one abstract method that is get ( ) method
-> The Supplier interface will not take any input; it only returns a value.

3. Consumer Functional Interface
-> Consumer is predefined functional interface
-> It contains one abstract method i.e accept (T t)
-> Consumer will accept input but it won't return anything
Note: in java 8 forEach ( ) method got introduced. forEach(Consumer consumer)
method will take Consumer as parameter.
4. Function Functional Interface
-> Function is a predefined functional interface
-> The Function interface has one abstract method, i.e. apply(T t)
interface Function<T, R> {
    R apply(T t);
}
-> It takes an input and returns an output
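All four interfaces in one runnable sketch (the lambdas and values are illustrative):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalInterfacesDemo {
    public static void main(String[] args) {
        Predicate<Integer> isEven = n -> n % 2 == 0;        // test()
        Supplier<String> greeting = () -> "hello";          // get()
        Consumer<String> printer =
                s -> System.out.println("consumed: " + s);  // accept()
        Function<String, Integer> length = s -> s.length(); // apply()

        System.out.println(isEven.test(10));        // true
        System.out.println(greeting.get());         // hello
        printer.accept("java");                     // consumed: java
        System.out.println(length.apply("stream")); // 6
    }
}
```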

40.4) forEach (Consumer c) method


-> forEach (Consumer c) method was introduced in java 1.8v
-> forEach ( ) method was added in the Iterable interface
-> forEach ( ) method is a default method (it has a body)
-> This method is used to access each element of the collection (traverse the
collection from start to end)
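A short sketch (names list is illustrative); the lambda supplies the Consumer's accept() body:

```java
import java.util.List;

public class ForEachDemo {
    public static void main(String[] args) {
        List<String> names = List.of("John", "Jane", "Jack");
        // forEach takes a Consumer; the lambda is the Consumer's accept() body
        names.forEach(name -> System.out.println(name));
    }
}
```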

40.5) Optional Class


-> java.util.Optional class introduced in java 1.8
-> Optional class is used to avoid NullPointerExceptions in the program
Q) What is NullPointerException (NPE) ?
Ans) When we perform some operation on a null value we get a
NullPointerException
String s = null;
s.length ( ) ; // NPE
-> To avoid NullPointerExceptions we have to implement a null check before
performing operations on the object, like below.
String s = null;
if ( s != null ) {
    System.out.println(s.length ( ));
}
Note: In a project there is no guarantee that every programmer will implement null
checks. If anybody forgets a null check then the program will run into a
NullPointerException.
-> To avoid this problem we can use the Optional class.
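A minimal sketch of Optional replacing the manual null check (values are illustrative):

```java
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        String value = null;
        Optional<String> opt = Optional.ofNullable(value);

        // no explicit null check needed — Optional encapsulates it
        System.out.println(opt.isPresent());        // false
        System.out.println(opt.orElse("default"));  // default

        Optional<String> present = Optional.of("java");
        present.ifPresent(s -> System.out.println(s.length()));  // 4
    }
}
```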

40.6) Why Date and Time Api is Developed in Java 8 ?


1. Thread safety & mutability :
⦁ The existing classes such as Date and Calendar are mutable and do not
provide thread safety. Hence they lead to hard-to-debug concurrency issues
that need to be taken care of.
⦁ The new Date and Time APIs of Java 8 are immutable and thread safe,
sparing developers from these concurrency issues.

2. Bad API designing:


⦁ The classic Date and Calendar APIs do not provide methods to perform
basic day-to-day functionality, like adding or subtracting days, months, or
years easily, parsing and formatting dates consistently, or handling time
zones intuitively. Additionally, months were zero-indexed in Calendar,
leading to confusion and errors.
⦁ The new Date and Time APIs are ISO-centric and provide a number of
different methods for performing operations on dates, times, durations and
periods.

3. Difficult time zone handling:


⦁ Handling time zones using the classic Date and Calendar classes is difficult
because developers were supposed to write the logic for it themselves.
⦁ With the new APIs, the time-zone handling can be easily done with Local and
ZonedDate/Time APIs.

4. Date and Time API in Java 8:- LocalDate, LocalTime, LocalDateTime,


ZonedDateTime, Instant, Duration, Period, Time Zones.

40.7) Stream Api

-> Stream API provided several methods to perform Operations on the data.

1. Filtering with Streams


-> Filtering means getting required data from original data
Ex: get only even numbers from given numbers
-> To apply filter on the data, Stream api provided filter ( ) method
Ex : Stream filter (Predicate p)

2. Mapping Operations
-> It is used to apply a function to each element in a stream, transforming it into
another object. The result is still a stream, but with the transformed objects.
Ex : Stream map (Function function)

3. Slicing Operations with Stream


1) distinct ( ) => To get unique elements from the Stream
Eg:-names.stream().distinct().forEach(name ->System.out.println(name));

2) limit ( long maxSize ) => Get elements from the stream based on given size
Eg :- names.stream().limit(3).forEach(c -> System.out.println(c));

3) skip (long n) => It is used to skip given number of elements from starting
position of the stream
Eg :- names.stream().skip(3).forEach(c -> System.out.println(c));

4. Matching Operations with Stream


1) boolean anyMatch (Predicate p )
2) boolean allMatch (Predicate p )
3) boolean noneMatch (Predicate p )
Eg :- boolean status1 = persons.stream().anyMatch(p ->
p.country.equals("INDIA"));

5. Group By using Stream


-> Group By is used to categorize / group the data
-> When we use the groupingBy ( ) function with a stream, it will group the data
as key-value(s) pairs and return a Map object

Single argument: Just groups elements (e.g., into lists).


Eg :- list.stream().collect(Collectors.groupingBy(e -> e.country));

Two arguments: Groups elements and applies further processing (e.g., counting or
summing). Eg:-
list.stream().collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
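Both forms in a runnable sketch (the countries list is illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupByDemo {
    public static void main(String[] args) {
        List<String> countries = List.of("INDIA", "USA", "INDIA", "UK", "USA", "INDIA");

        // single-argument form: group equal elements into lists
        Map<String, List<String>> grouped = countries.stream()
                .collect(Collectors.groupingBy(Function.identity()));
        System.out.println(grouped.get("INDIA").size());   // 3

        // two-argument form: group and apply a downstream collector (counting)
        Map<String, Long> counts = countries.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        System.out.println(counts.get("USA"));             // 2
    }
}
```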

41. What is the Difference between chars() and stream() ?

# Can We use stream on String ?


No, you cannot directly call stream() on a String, because String is not a
Collection and does not expose a stream() method. However, you can process a
String using streams indirectly by converting it into a collection of characters
or by using its chars() or codePoints() methods, which return an IntStream.
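A short sketch using chars() (the word and filter are illustrative):

```java
public class CharsDemo {
    public static void main(String[] args) {
        String word = "java";
        // String has no stream() method, but chars() gives an IntStream
        long countA = word.chars()          // IntStream of char codes
                .filter(c -> c == 'a')      // keep only the letter 'a'
                .count();
        System.out.println(countA);         // 2
    }
}
```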

42. Optional.of() v/s Optional.ofNullable()
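The notes leave this question without an answer; a short sketch of the difference as I understand it: Optional.of() rejects null with an immediate NullPointerException, while Optional.ofNullable() turns null into an empty Optional.

```java
import java.util.Optional;

public class OfVsOfNullableDemo {
    public static void main(String[] args) {
        // Optional.of(value): value must be non-null
        System.out.println(Optional.of("java").get());             // java

        // Optional.ofNullable(value): null is allowed -> empty Optional
        System.out.println(Optional.ofNullable(null).isPresent()); // false

        try {
            Optional.of(null);  // null argument -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("Optional.of(null) threw NPE");
        }
    }
}
```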

43. What are Intermediate operations and Terminal operations in Java 8 ?


Intermediate Operations:- Lazy operations that return a new stream and are not
executed until a terminal operation is invoked. Examples include
Sorting ----> sorted()
Filters ----> filter ( )
Mappings ----> map ( ) & flatMap ( )
Slicing ----> distinct ( ) & limit () & skip ( )

Terminal Operations:- Operations that produce a result or a side-effect and mark


the end of the stream processing. Examples include forEach(), reduce(), toArray(),
count(),
Finding ---> findFirst ( ) & findAny ( )
Matching ---> anyMatch ( ) & allMatch ( ) & noneMatch ( )
Collecting ---> collect ( )

By understanding these operations, you can effectively use the Stream API in Java
to process collections and other data sources in a functional and efficient manner.
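A pipeline combining several intermediate operations with one terminal operation (the numbers are illustrative); nothing executes until collect() is invoked:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamPipelineDemo {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(5, 3, 8, 3, 10, 1);

        // filter/distinct/sorted/map are intermediate (lazy);
        // collect is the terminal operation that triggers execution
        List<Integer> result = numbers.stream()
                .filter(n -> n > 2)     // keep values > 2 -> 5, 3, 8, 3, 10
                .distinct()             // drop the duplicate 3
                .sorted()               // natural order -> 3, 5, 8, 10
                .map(n -> n * 2)        // transform -> 6, 10, 16, 20
                .collect(Collectors.toList());

        System.out.println(result);     // [6, 10, 16, 20]
    }
}
```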

44. map and flatMap functions in Java 8 ?


The 'map()' method in Java Streams API is used to transform elements of a
stream. It applies a function to each element and produces a new stream with
the transformed values. (e.g., converting names to uppercase).
public class MapExample {
    public static void main(String[] args) {
        List<String> names = List.of("John", "Jane", "Jack", "Doe");
        // Convert all names to uppercase using map
        List<String> upperCaseNames = names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(upperCaseNames); // Output: [JOHN, JANE, JACK, DOE]
    }
}
'flatMap' Function:- It is used when we want to flatten a nested structure, such as
a list of lists, into a single list.
It transforms each element of the stream into a stream itself and then flattens the
resulting streams into a single stream (e.g., flattening a list of lists).
public class FlatMapExample {
    public static void main(String[] args) {
        List<List<String>> namesList = List.of(
                List.of("John", "Jane"),
                List.of("Jack", "Doe")
        );
        // Flatten the list of lists into a single list using flatMap
        List<String> flatList = namesList.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
        System.out.println(flatList); // Output: [John, Jane, Jack, Doe]
    }
}
45. Sequential Stream and Parallel Stream
Sequential Stream:- Sequential streams use a single thread to process the
pipeline. Any stream operation not explicitly specified as parallel is treated
as a sequential stream.
public class SequentialStreamExample {
    public static void main(String[] args) {
        List<String> names = List.of("John", "Jane", "Jack", "Doe");
        System.out.println("Processing with Sequential Stream:");
        names.stream()
             .map(String::toUpperCase)
             .forEach(name -> {
                 System.out.println(name + " - " + Thread.currentThread().getName());
             });
    }
}
Parallel Stream:- Using parallel streams, the workload gets divided into multiple
substreams which can be executed in parallel on separate cores of the system, and
the final result is the combination of all the individual cores' outcomes.
Note: If we want each element in the parallel stream to be processed in order, we
can use the forEachOrdered() method instead of the forEach() method.
public class ParallelStreamExample {
    public static void main(String[] args) {
        List<String> names = List.of("John", "Jane", "Jack", "Doe");
        System.out.println("Processing with Parallel Stream:");
        names.parallelStream()
             .map(String::toUpperCase)
             .forEach(name -> {
                 System.out.println(name + " - " + Thread.currentThread().getName());
             });
    }
}
Difference Summary:
⦁ Execution Order: In sequential streams, elements are processed sequentially
(one after the other), whereas in parallel streams, elements are processed
concurrently (using multiple threads).
⦁ Performance: Parallel streams are faster for processing large datasets
because they leverage multi-core processors, but they can create overhead
for small datasets.
⦁ Threading: Sequential streams run on a single thread, whereas parallel
streams use multiple threads.
⦁ Use Case: Sequential streams are suitable for simpler and predictable order
processing. Parallel streams are suitable for large datasets and performance-
intensive tasks.
Cautions:
⦁ When using parallel streams, it is crucial to consider thread safety and side
effects, as multiple threads can concurrently access and modify data.
⦁ Parallel streams are not always efficient; testing and profiling are essential to
determine whether parallelism actually improves performance in your
specific use case.

46. Collection interface :-


-> It is the super interface for List, Set and Queue
-> The Collection interface provides several methods to store and retrieve objects
A. List Interface :-
-> Extends properties from the Collection interface
-> Maintains insertion order of objects
-> Duplicates and null values are allowed
-> It has 4 implementation classes
1) ArrayList
2) LinkedList
3) Vector
4) Stack

List l = new List ( ); // invalid
List l = new ArrayList ( ) ; // valid
List l = new LinkedList ( ) ; // valid
1. ArrayList :-
-> Implementation class of List interface
-> Duplicate objects are allowed
-> Null values are accepted
-> Insertion order preserved
-> Internal data structure of ArrayList is a growable array
-> Default capacity is 10
-> Homogeneous & heterogeneous data supported
-> Not synchronized
ArrayList Constructors
1) ArrayList al = new ArrayList ( ) ;
2) ArrayList al = new ArrayList (int capacity);
3) ArrayList al = new ArrayList (Collection c);
Methods of ArrayList
1) add (Object obj ) ---> Add object at end of the collection
2) add(int index, Object) --> Add object at given index
3) addAll (Collection c) ---> To add collection of objects at end of the collection
4) remove(Object obj) ---> To remove given object
5) remove(int index) ----> To remove object based on given index
6) get(int index) --> To get object based on index
7) contains(Object obj) ---> To check presence of the object
8) clear( ) ---> To remove all objects from collection
9) isEmpty ( ) ---> To check collection is empty or not
10) retainAll(Collection c) --> Keep only common elements and remove the rest
11) indexOf(Object obj) --> To get first occurrence of given object
12) lastIndexOf(Object obj) --> To get last occurrence of given object
13) set(int index, Object obj) --> Replace the object at the given index
14) iterator ( ) --> Forward direction
15) listIterator ( ) --> Forward & backward direction

1) ArrayList is not recommended for insertions because it has to perform a lot
of shifting
2) ArrayList is recommended for retrieval operations because it retrieves
directly based on index
Insertion Operation -> Best case ( insert at end) & Worst case ( insert at i=0)
Deletion Operation -> Best case ( delete last element) & Worst case ( delete at i=0)
Searching Operation -> Best case ( found at i=0) & Worst case ( found at end)
2. LinkedList
-> Implementation class of List interface
-> Internal data structure is a doubly linked list
-> Insertion order preserved
-> Duplicate objects are allowed
-> Null objects also allowed
-> Homogeneous & heterogeneous data can be stored
-> Not synchronized
3. Vector
-> It is same as ArrayList except it is synchronized.
-> Implementation class of List interface
-> Internal data structure is growable array
-> duplicates are allowed
-> Null Allowed
-> insertion order preserved
-> This is synchronized
-> Vector is called as legacy class ( jdk v 1.0)
-> To traverse vector we can use Enumeration as a cursor
-> Enumeration is called as Legacy Cursor (jdk 1.0v)
4. Stack
-> Implementation class of List interface
-> Extending from Vector class
-> Data Structure of Stack is LIFO (last in first out)
⦁ push ( ) ---> to insert object
⦁ peek ( ) ---> to get last element
⦁ pop ( ) ---> to remove last element
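The three Stack operations in a short sketch (the pushed values are illustrative):

```java
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<String> stack = new Stack<>();
        stack.push("A");                    // insert
        stack.push("B");
        stack.push("C");
        System.out.println(stack.peek());   // C — last in, still on top
        System.out.println(stack.pop());    // C — removed (LIFO)
        System.out.println(stack.peek());   // B — new top
    }
}
```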
Note:-
1) ArrayList ---------> Growable Array
2) LinkedList ----------> Double Linked List
3) Vector -------------> Growable Array & Thread Safe
4) Stack -----------> L I F O
1) Iterator ----> forward direction ( List & Set )
2) ListIterator ---> forward & backward direction ( List impl classes )
3) Enumeration ----> forward direction & supports for legacy collection classes

B. Set :-
-> Set is an interface available in the java.util package
-> The Set interface extends from the Collection interface
-> Set is used to store a group of objects
-> Duplicate objects are not allowed
-> Null is allowed
-> Supports homogeneous & heterogeneous data
-> Insertion order will not be maintained
Set interface Implementation classes
1) HashSet
2) LinkedHashSet
3) TreeSet
1. HashSet
-> Implementation class of Set interface
-> Duplicate objects are not allowed
-> Null is allowed
-> Insertion order will not be maintained
-> Initial capacity is 16
-> Load factor is 0.75
-> Internal data structure is a hash table
-> Not synchronized
Constructors
HashSet hs = new HashSet( );
HashSet hs = new HashSet(int capacity);
HashSet hs = new HashSet(int capacity, float loadFactor);
2. LinkedHashSet
-> Implementation class of Set interface
-> Duplicates are not allowed
-> Null is allowed
-> Insertion order will be preserved
-> Internal data structure is hash table + doubly linked list
-> Initial capacity is 16
-> Load factor is 0.75
-> Not synchronized
Note: HashSet will not maintain insertion order whereas LinkedHashSet will
maintain insertion order.
HashSet follows a hash table data structure whereas LinkedHashSet follows a
hash table + doubly linked list data structure.
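The ordering difference in a short sketch (the fruit names are illustrative):

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class SetOrderDemo {
    public static void main(String[] args) {
        Set<String> hashSet = new HashSet<>();
        Set<String> linkedHashSet = new LinkedHashSet<>();
        for (String s : new String[]{"banana", "apple", "cherry", "apple"}) {
            hashSet.add(s);         // order not guaranteed, duplicate ignored
            linkedHashSet.add(s);   // insertion order kept, duplicate ignored
        }
        System.out.println(linkedHashSet);  // [banana, apple, cherry]
        System.out.println(hashSet.size()); // 3 — same contents, order may differ
    }
}
```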
3. TreeSet
-> Implementation class of Set interface
-> It maintains natural sorting order
-> Does not follow insertion order
-> Duplicates are not allowed
-> Null values are not allowed
-> Not synchronized
Note: When we add a null value, TreeSet tries to compare it with the previous
object, so we get a NullPointerException.
-> It supports only homogeneous data
Note: TreeSet performs sorting, so it always compares a newly added object with
the existing objects. For this comparison the objects should be of the same type,
otherwise we get a ClassCastException.
-> Internal data structure is a binary tree.
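TreeSet behaviour in a short sketch (the numbers are illustrative):

```java
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        TreeSet<Integer> set = new TreeSet<>();
        set.add(40);
        set.add(10);
        set.add(30);
        set.add(10);                 // duplicate, silently ignored
        System.out.println(set);     // [10, 30, 40] — natural sorted order

        // set.add(null);            // would throw NullPointerException
    }
}
```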
C. Map
-> Map is an interface available in java.util package
-> Map is used to store the data in key-value format
-> One Key-Value pair is called as one Entry
-> One Map object can have multiple entries
-> In a Map, keys should be unique and values can be duplicated
-> If we try to store a duplicate key in a map, it will replace the old key's
value with the new value
-> Key & value can be any type of data
-> Insertion order is not maintained
-> Map interface having several implementation classes
1) HashMap
2) LinkedHashMap
3) TreeMap
4) Hashtable
5) IdentityHashMap
6) WeakHashMap
Map methods
1) put(k,v) ---> To store one entry in the map object
2) get(k) ---> To get the value based on the given key
3) remove(k) ---> To remove one entry based on the given key
4) containsKey(k) ---> To check presence of the given key
5) keySet ( ) ---> To get all keys of the map
6) values ( ) ----> To get all values of the map
7) entrySet ( ) --> To get all entries of the map
8) clear ( ) --> To remove all entries of the map
9) isEmpty ( ) --> To check whether the map object is empty or not
10) size ( ) --> To get the size of the map (how many entries are available)
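Several of these methods in one sketch (keys and names are illustrative); note the duplicate key replacing the old value:

```java
import java.util.HashMap;
import java.util.Map;

public class MapMethodsDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(101, "John");                     // one entry = key + value
        map.put(102, "Jane");
        map.put(101, "Jack");                     // duplicate key: value replaced

        System.out.println(map.get(101));         // Jack
        System.out.println(map.containsKey(102)); // true
        System.out.println(map.size());           // 2
        map.remove(102);
        System.out.println(map.isEmpty());        // false
        System.out.println(map.keySet());         // [101]
    }
}
```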

1. HashMap
-> It is impl class for Map interface
-> Used to store data in key-value format
-> Default capacity is 16
-> Load factor 0.75
-> Underlying datastructure is hashtable
-> Insertion Order will not be maintained by HashMap
-> Not synchronized
2. LinkedHashMap
-> Implementation class of the Map interface
-> Maintains insertion order
-> Data structure is hash table + doubly linked list
3. TreeMap
-> Implementation class for Map interface
-> It maintains natural sorted order for keys
-> Internal Data structure for Tree map is binary tree
4. Hashtable
-> It is an implementation class of the Map interface
-> Default capacity is 11
-> Load factor is 0.75
-> Stores data in key-value format
-> Hashtable is a legacy class (jdk 1.0v)
-> Hashtable is synchronized
-> Does not allow duplicate keys, but values can be duplicated
-> Does not allow a null key or null values
-> If thread safety is not required then use HashMap instead of Hashtable
-> If thread safety is important then go for ConcurrentHashMap instead of
Hashtable
D. Queue
-> It extends properties from the Collection interface
-> It is used to store a group of objects
-> Internal data structure is FIFO (First In First Out)
-> It is an ordered list of objects
-> Insertion happens at the end of the collection
-> Removal happens at the beginning of the collection
47. What is the contract between hashCode ( ) & equals ( ) methods
1. If two objects are equal, their hashCode() must be the same:
⦁ If obj1.equals(obj2) returns true, then obj1.hashCode() must be equal to
obj2.hashCode().
⦁ This ensures that when two objects are equal, they are stored in the same
bucket in hash-based collections.
2. If two objects have the same hashCode(), they may or may not be equal:
⦁ Just because two objects have the same hash code does not mean they are
equal. This is known as a hash collision. The equals() method must be used to

determine actual equality
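The contract can be honored by overriding both methods together; a minimal sketch (the Point class is illustrative):

```java
import java.util.Objects;

public class Point {
    private final int x, y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    // Two points with the same coordinates are equal...
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // ...so they must also produce the same hash code.
    @Override
    public int hashCode() {
        return Objects.hash(x, y);
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true, as the contract requires
    }
}
```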
48. How HashMap works internally ?
The HashMap is a HashTable based implementation. It internally maintains an
array,also called as “bucket array”.
The size of the bucket array is determined by the initial capacity of the HashMap,
like the default is 16(0-15).
Each index position in the array is a bucket that can hold multiple Node objects
using a LinkedList.
But when entries in a single bucket reach a threshold (TREEIFY_THRESHOLD,
default value 8) then Map converts the bucket’s internal structure from the
linked list to a RedBlackTree (JEP 180). All Entry instances are converted to
TreeNode instances. So pessimistic O(n) performance Converted to O(log n).
and when nodes in a bucket reduce less than UNTREEIFY_THRESHOLD the Tree
again converts to LinkedList. This helps balance performance with memory usage
because TreeNodes takes more memory than Map.Entry instances.
So Map uses Tree only when there is a considerable performance gain in
exchange for memory wastage.

When we insert a key-value pair into a HashMap, the key's hashCode() method is

called to generate a hash value.
To improve hash code distribution and reduce collisions, HashMap applies a hash
spreading function:
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
⦁ key.hashCode() : retrieves the hash code of the key.
⦁ h >>> 16 : shifts the higher 16 bits of the hash code to the lower 16 bits.
⦁ XOR (^) : combines the original hash code and the shifted value to mix the
bits.
This ensures a more uniform distribution of hash values, reducing collisions .
After obtaining the transformed hash value, HashMap calculates the index where
the entry will be stored.
index = hash & (n - 1);
⦁ hash : The transformed hash value.
⦁ n : The size of the array (must be a power of 2, like 16, 32, etc.).
⦁ & (bitwise AND) : Ensures the index is always within the array bounds (0 to
n-1).

If two keys map to the same index, the equals() method checks whether the two
keys are equal. If the keys are equal, the existing value is replaced with the new
value. Otherwise, the new node is linked to the existing node through the
LinkedList, so both keys are stored at the same index.
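The spreading function and index computation above can be reproduced directly. This is a sketch of what HashMap does internally, not its actual private code:

```java
public class HashIndexDemo {
    // Same spreading function as HashMap's static hash(Object)
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16;                 // must be a power of 2
        int hash = spread("Java");
        int index = hash & (capacity - 1); // always within 0..15
        System.out.println("bucket index = " + index);
    }
}
```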

49. How does HashMap avoid duplicate keys?


HashMap avoids duplicate keys by using the hashCode() and equals() methods of
the keys.
⦁ When a key-value pair is added to the HashMap, the hash code of the key is
calculated to determine the bucket where the entry will be stored.
⦁ If the bucket is empty, the key-value pair is added.
⦁ If the bucket already contains one or more entries (hash collision), HashMap

compares the keys using the equals() method:
⦁ If equals() returns true, it means the key already exists in the map, and
the value is updated (replaced) with the new one.
⦁ If equals() returns false, it means the keys are different, and the new key-
value pair is added to the same bucket as part of a linked list (or tree in
case of many collisions, starting from Java 8).

50. How HashSet avoids duplicate Elements?


HashSet avoids duplicate elements by using the hashCode() and equals() methods
of the objects it stores.
⦁ When a new element is added, HashSet first calculates its hash code to
determine the bucket where it might be stored.
⦁ If there is already an element in the same bucket with the same hash code,
HashSet then calls the equals() method to compare the new element with
the existing one.
⦁ If equals() returns true, it means the element is a duplicate, and HashSet
does not add it.
⦁ If equals() returns false, the new element is added to the collection.
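This is observable through the return value of add(), which is false when the element is a duplicate:

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetDuplicateDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(set.add("Java")); // true  -- new element added
        System.out.println(set.add("Java")); // false -- duplicate rejected
        System.out.println(set.size());      // 1
    }
}
```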

51. What is the internal working of ArrayList?


When we add a new element to the ArrayList using the add() method :
1. Check for space: ArrayList first checks whether there is enough space in the
underlying array to store the new element.
2. Add the element:
⦁ If there is enough space, the new element is added at the next available
index.
⦁ If there is no space, ArrayList automatically increases the size of the array:
⦁ A new array is created with a larger capacity, typically 1.5 times the
current capacity.
⦁ All existing elements are copied from the old array to the new array.
3. Store the element: The new element is then added to the next available index in
the newly created array.

52. What is the difference between Collection, Collections & Collections
Framework ?
Collection :- Collection is a container to store group of objects. We have an
interface with a name Collection (java.util). It is root interface in Collections
framework.
Collections :- Collections is a class available in java.util package
(Providing ready made methods to perform operations on objects)
Collections framework :- Collection interface & Collections class are part of
Collections framework. Along with these 2 classes there are several other classes
and interfaces in Collections framework.

53. What is diff between ConcurrentHashMap and HashMap ?


⦁ HashMap: Best suited for single-threaded or externally synchronized
scenarios. Allows one null key and multiple null values.

⦁ ConcurrentHashMap: Designed for high performance in multi-threaded
environments with built-in thread safety. Does not allow null keys or null
values.
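The null-handling difference is easy to demonstrate:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok");               // HashMap allows one null key
        System.out.println(hashMap.get(null)); // ok

        Map<String, String> chm = new ConcurrentHashMap<>();
        try {
            chm.put(null, "boom");             // ConcurrentHashMap rejects null keys
        } catch (NullPointerException e) {
            System.out.println("ConcurrentHashMap rejects null keys");
        }
    }
}
```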

54. Fail safe and fail fast collections


1. Fail-Fast Collections:
a. Definition:
Fail-fast collections are those that detect concurrent modification and
immediately throw a ConcurrentModificationException when the collection is
being modified while it is also being iterated.
b. Examples:
ArrayList
HashMap
HashSet
c. Working:
Fail-fast collections internally maintain a modCount variable, which is
incremented with every structural modification (add/remove) of the
collection. When you traverse the collection using an iterator, this
modCount is checked. If modCount has changed during traversal, a
ConcurrentModificationException is thrown.
List<String> list = new ArrayList<>();
list.add("A");
list.add("B");

Iterator<String> iterator = list.iterator();


while (iterator.hasNext()) {
    list.add("C"); // structural modification during iteration
    System.out.println(iterator.next()); // ConcurrentModificationException is thrown here
}

d. Use-Case:
Fail-fast collections are generally used in situations where data consistency
and correctness matter. If a modification is detected, an exception is thrown
and the process is stopped.

2. Fail-Safe Collections:
a. Definition:
Fail-safe collections are those that do not detect concurrent modification
and allow safe traversal. These collections create a snapshot copy of the
original collection and traverse that copy.
b. Examples:
CopyOnWriteArrayList
ConcurrentHashMap
ConcurrentSkipListSet

c. Working:
Fail-safe collections internally create a copy of the original collection when
the iterator is created. Therefore, even if the original collection is being
modified, the iterator traverses its own snapshot and no exception is
thrown.
CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
list.add("A");
list.add("B");

Iterator<String> iterator = list.iterator();


while (iterator.hasNext()) {
    list.add("C"); // No ConcurrentModificationException here
    System.out.println(iterator.next());
}

d. Use-Case:
Fail-safe collections are generally used in concurrent programming, where
multiple threads work on the same collection. These collections are also
thread-safe and ensure that traversal and modification happen safely.
55. What is the difference between HashMap and WeakHashMap ?
=> HashMap keys are held with strong references, which means the map keeps
them reachable, hence they are not eligible for the Garbage Collector.

=> WeakHashMap keys are held with weak references, which means they become
eligible for Garbage Collection once no strong references to them remain.

=> So the Garbage Collector can reclaim entries of a WeakHashMap.

56. What is the difference between HashMap and IdentityHashMap ?


=> HashMap will use the equals ( ) method to compare the content of keys to
find duplicate keys.

=> IdentityHashMap will use the == operator to compare references (addresses)
of keys to find duplicate keys.
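The difference shows up with two distinct but equal key objects (the key values are illustrative):

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityMapDemo {
    public static void main(String[] args) {
        String k1 = new String("id"); // equal content,
        String k2 = new String("id"); // but two distinct objects

        Map<String, Integer> hashMap = new HashMap<>();
        hashMap.put(k1, 1);
        hashMap.put(k2, 2);           // equals() says duplicate -> value replaced

        Map<String, Integer> identityMap = new IdentityHashMap<>();
        identityMap.put(k1, 1);
        identityMap.put(k2, 2);       // == says different -> two entries

        System.out.println(hashMap.size());     // 1
        System.out.println(identityMap.size()); // 2
    }
}
```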

57. Cursors of Collection Framework


We have following 3 types of cursors in Collection Framework
1.Iterator
2.ListIterator
3.Enumeration
57.1. Iterator
-> this cursor is used to access the elements in forward direction only
-> this cursor can be applied Any Collection (List, Set)
-> while accessing the methods we can also delete the elements
-> Iterator is interface and we cannot create an object directly.
-> if we want to create an object for Iterator, we have to use iterator () method

# Creation of Iterator:
Iterator it = c.iterator();
here iterator() method internally creates and returns an object of a class which
implements Iterator interface.
# Methods
1. boolean hasNext()
2. Object next()
3. void remove()
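These three methods can be combined to remove elements safely during traversal (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>(List.of(1, 2, 3, 4));

        Iterator<Integer> it = numbers.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove(); // safe removal during iteration
            }
        }
        System.out.println(numbers); // [1, 3]
    }
}
```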

57.2. ListIterator
-> This cursor is used to access the elements of Collection in both forward and
backward directions
-> This cursor can be applied only for List category Collections
-> While accessing the methods we can also add,set,delete elements
-> ListIterator is interface and we can not create object directly.
-> If we want to create an object for ListIterator we have to use listIterator()
method
# Creation of ListIterator:

ListIterator<E> it = l.listIterator();

Here listIterator() method internally creates and returns an object of a class which
implements ListIterator interface.
# Methods
1. boolean hasNext();
2. Object next();
3. boolean hasPrevious();
4. Object previous();
5. int nextIndex();
6. int previousIndex();
7. void remove();
8. void set(Object obj);
9. void add(Object obj);
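A small sketch of the bidirectional traversal (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("A", "B", "C"));

        ListIterator<String> it = list.listIterator();
        while (it.hasNext()) {
            it.next();                      // forward pass to the end
        }
        StringBuilder reversed = new StringBuilder();
        while (it.hasPrevious()) {
            reversed.append(it.previous()); // backward pass
        }
        System.out.println(reversed);       // CBA
    }
}
```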
57.3. Enumeration
-> this cursor is used to access the elements of Collection only in forward direction

-> this is a legacy cursor that can be applied only to legacy classes like
Vector, Stack, Hashtable.
-> Enumeration is also an interface and we can not create object directly.
-> If we want to create an object for Enumeration we have to use a legacy method
called elements() method
# Creation of Enumeration:
Enumeration e = v.elements();

Here elements() method internally creates and returns an object of a class which
implements Enumeration interface.
# Methods
1. boolean hasMoreElements()
2. Object nextElement();
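A short Enumeration example over a legacy Vector (the element values are illustrative):

```java
import java.util.Enumeration;
import java.util.Vector;

public class EnumerationDemo {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("A");
        v.add("B");

        Enumeration<String> e = v.elements();    // legacy cursor
        while (e.hasMoreElements()) {
            System.out.println(e.nextElement()); // forward-only traversal
        }
    }
}
```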

58. Enums in Java:-


-> Enum introduced in java 1.5v
-> Enum is a special data type in java
-> Enum data type is used to create pre-defined Constants
-> To declare constants using Enum we will use 'enum" keyword
-> Enum stands for Enumeration
enum WEEKDAYS {
MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY;
}
enum WEEKENDDAYS {
SATURDAY, SUNDAY;
}
-> When we want to declare pre-defined constants then we will use Enums
concept.
# Few Points To Remember Related To Enums
1) Enum constants cannot be overridden
2) Enum doesn't support object creation with new (the constants are the only instances)
3) Enum can't extend classes
4) Enum can be created in a separate file or inside an existing class
package in.ashokit;
public enum Course {
JAVA, PYTHON, DEVOPS, AWS, DOCKER, KUBERNETES;
}
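Enum constants can be looked up and iterated with the built-in valueOf() and values() methods; a minimal sketch:

```java
public class EnumDemo {
    enum Course { JAVA, PYTHON, DEVOPS }

    public static void main(String[] args) {
        Course c = Course.valueOf("JAVA");      // look up a constant by name
        System.out.println(c.ordinal());        // 0 -- position of the constant

        for (Course course : Course.values()) { // iterate all constants
            System.out.println(course);
        }
    }
}
```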

58. Performance of all Collection interfaces in Java.


1. List Interface
⦁ Implementations: ArrayList, LinkedList, Vector
a. ArrayList
⦁ Performance:
⦁ Random access: O(1)
⦁ Insertion/removal at end: O(1) (amortized)
⦁ Insertion/removal at middle: O(n)
⦁ Use Case: When frequent access and iteration are needed; resizing is
handled internally.
⦁ Pros: Fast random access, good cache performance.
⦁ Cons: Slow insertions/removals except at the end.
b. LinkedList
⦁ Performance:
⦁ Random access: O(n)
⦁ Insertion/removal at ends: O(1)
⦁ Insertion/removal at middle: O(n)
⦁ Use Case: When frequent insertions/deletions are needed, especially at
the beginning or end.
⦁ Pros: Efficient insertion/removal.
⦁ Cons: Slow random access, higher memory usage due to node storage.
c. Vector
⦁ Performance:
Similar to ArrayList.
⦁ Random access: O(1)
⦁ Insertion/removal at end: O(1) (amortized)

⦁ Insertion/removal at middle: O(n)
⦁ Use Case: When thread safety is required and legacy code is being
maintained.
⦁ Pros: Synchronized.
⦁ Cons: Slower than ArrayList due to synchronization overhead.
2. Set Interface
⦁ Implementations: HashSet, LinkedHashSet, TreeSet
a. HashSet
⦁ Performance:
⦁ Basic operations (add, remove, contains): O(1)
⦁ Use Case: When high-performance set operations are needed, without
requiring order.
⦁ Pros: Fast operations.
⦁ Cons: No ordering.
b. LinkedHashSet
⦁ Performance:
⦁ Basic operations: O(1)
⦁ Use Case: When iteration order needs to be predictable.
⦁ Pros: Maintains insertion order.
⦁ Cons: Slightly slower than HashSet due to maintaining a linked list.
c. TreeSet
⦁ Performance:
⦁ Basic operations: O(log n)
⦁ Use Case: When a sorted set is required.
⦁ Pros: Sorted order.
⦁ Cons: Slower than HashSet/LinkedHashSet.
3. Queue Interface
⦁ Implementations: LinkedList, PriorityQueue, ArrayDeque
a. LinkedList (as Queue)
⦁ Performance:
⦁ Offer, poll: O(1)
⦁ Use Case: When you need a simple FIFO queue.
⦁ Pros: Simple implementation.
⦁ Cons: Higher memory usage due to node storage.
b. PriorityQueue
⦁ Performance:
⦁ Offer, poll: O(log n)
⦁ Use Case: When you need elements sorted by priority.
⦁ Pros: Efficient priority handling.
⦁ Cons: No fixed size.
c. ArrayDeque
⦁ Performance:
⦁ Offer, poll: O(1) (amortized)
⦁ Use Case: When you need a double-ended queue with efficient
operations.
⦁ Pros: Efficient for both ends.
⦁ Cons: No random access.
4. Map Interface
⦁ Implementations: HashMap, LinkedHashMap, TreeMap
a. HashMap
⦁ Performance:
⦁ Basic operations (get, put): O(1)
⦁ Use Case: When you need fast access by key.
⦁ Pros: Fast operations.
⦁ Cons: No ordering.
b. LinkedHashMap
⦁ Performance:
⦁ Basic operations: O(1)
⦁ Use Case: When you need access order or insertion order iteration.
⦁ Pros: Maintains order.
⦁ Cons: Slightly slower than HashMap.
c. TreeMap
⦁ Performance:
⦁ Basic operations: O(log n)
⦁ Use Case: When you need a sorted map.
⦁ Pros: Sorted order.
⦁ Cons: Slower than HashMap.

59. Explain all about the java.util.concurrent package.


The java.util.concurrent package in Java provides a framework for handling
concurrent programming and parallelism. This package contains classes and
interfaces to support multithreaded programming, making it easier to manage
tasks that run concurrently. They provide thread-safe operations without the
need for explicit synchronization, improving performance and simplifying
concurrent programming.
# Main Components:-
1. Executors :- The Executors framework provides a high-level API for creating
and managing threads. It includes:

⦁ Executor: A simple interface that supports launching new tasks.
⦁ ExecutorService: A more complete service for managing the lifecycle of
threads, including their termination.
⦁ ScheduledExecutorService: An ExecutorService that can schedule commands
to run after a given delay or to execute periodically.
⦁ Executors: A factory class providing methods to create different types of
executor services.

2. Concurrent Collections :- These collections are thread-safe and designed for
concurrent access without the need for external synchronization:
⦁ ConcurrentHashMap: A thread-safe variant of HashMap.
⦁ CopyOnWriteArrayList: A thread-safe variant of ArrayList where all mutative
operations (add, set, etc.) are implemented by making a fresh copy of the
underlying array.
⦁ CopyOnWriteArraySet: A Set that uses a CopyOnWriteArrayList for all of its
internal storage.
⦁ ConcurrentLinkedQueue: An unbounded thread-safe queue based on linked
nodes.
⦁ ConcurrentSkipListMap: A scalable concurrent ConcurrentNavigableMap
implementation.
⦁ ConcurrentSkipListSet: A scalable concurrent NavigableSet implementation.

3. Locks :- A framework for locking and synchronizing access to shared resources:
⦁ Lock: An interface for explicit locking mechanisms.
⦁ ReentrantLock: A reentrant mutual exclusion lock with the same basic
behavior and semantics as the implicit monitor lock accessed using
synchronized methods and statements.
⦁ ReentrantReadWriteLock: A pair of associated ReentrantLock objects, one
for read-only operations and one for write operations.
⦁ StampedLock: A capability-based lock with three modes for controlling
read/write access.

4. Synchronizers :- Utilities for managing control flow between threads:
⦁ CountDownLatch: A synchronization aid that allows one or more threads to
wait until a set of operations being performed in other threads completes.
⦁ CyclicBarrier: A synchronization aid that allows a set of threads to all wait for
each other to reach a common barrier point.
⦁ Semaphore: A counting semaphore.
⦁ Exchanger: A synchronization point at which threads can pair and swap
elements within pairs.
⦁ Phaser: A flexible barrier that is useful for implementing multi-phase
computations.
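A minimal CountDownLatch sketch: the main thread waits until all worker threads have signaled completion (the worker count is illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... do some work ...
                latch.countDown(); // signal completion
            }).start();
        }

        latch.await();             // main thread blocks until the count reaches 0
        System.out.println("all workers finished");
    }
}
```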

5. Atomic Variables :- Classes that support lock-free thread-safe programming
on single variables:
⦁ AtomicBoolean: A boolean value that may be updated atomically.
⦁ AtomicInteger: An int value that may be updated atomically.
⦁ AtomicLong: A long value that may be updated atomically.
⦁ AtomicReference: An object reference that may be updated atomically.
⦁ AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray: Atomic
arrays of the respective types.
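For example, an AtomicInteger counter updated by two threads never loses increments, without any synchronized block (thread and loop counts are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);

        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                counter.incrementAndGet(); // atomic, no synchronized needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter.get()); // always 2000 -- no lost updates
    }
}
```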

6. Fork/Join Framework :- A framework designed for parallelism that
recursively splits tasks into smaller sub-tasks until they are simple enough to be
solved asynchronously:
⦁ ForkJoinPool: A specialized implementation of ExecutorService for running
ForkJoinTasks.
⦁ ForkJoinTask: A task that can be used with the ForkJoinPool.
⦁ RecursiveTask: A ForkJoinTask that returns a result.
⦁ RecursiveAction: A ForkJoinTask that doesn't return a result.

7. Thread Utilities :- Additional utilities for working with threads:
⦁ ThreadFactory: An interface for creating new threads on demand.
⦁ ThreadLocalRandom: A random number generator isolated to the current
thread.

8. CompletableFuture and Future :- Classes that represent the result of an
asynchronous computation:
⦁ Future: Represents the result of an asynchronous computation.
⦁ CompletableFuture: A Future that may be explicitly completed (normally or
exceptionally), and may be used as a CompletionStage, supporting
dependent functions and actions that trigger upon its completion.
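A minimal CompletableFuture sketch: an asynchronous computation followed by a dependent stage (the computed values are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class FutureDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> result =
                CompletableFuture.supplyAsync(() -> 21)  // runs asynchronously
                                 .thenApply(n -> n * 2); // dependent stage

        System.out.println(result.join()); // 42 -- blocks until the result is ready
    }
}
```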

# Advantages of Concurrent Collections
⦁ Thread-Safety: Designed to handle concurrent access without the need for
explicit synchronization, preventing data corruption and inconsistent state.
⦁ Performance: Optimized for high performance in multi-threaded
environments by reducing lock contention and improving scalability.
⦁ Ease of Use: Simplifies the development of concurrent applications by
providing ready-to-use, thread-safe collections.

60. Describe a stack overflow condition and explain how it happens.


A StackOverflowError in Java occurs when a program runs out of space in the stack,
which is the memory area used for storing method calls, local variables, and the
state of a running thread.
Causes of a StackOverflowError:
⦁ Deep Recursion: The most common cause of a StackOverflowError is deep or
infinite recursion. Each time a method is called, a new stack frame is created,
and if the recursion is too deep (or infinite), it eventually exhausts the stack
memory.
⦁ Too Many Method Calls: A large number of method calls that consume more
stack space than allocated can also trigger this error.
How to Avoid StackOverflowError:
Proper Recursion: Ensure that recursion has a base case or stopping condition to
avoid infinite recursion.
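A sketch of well-formed recursion: the base case (n <= 1) stops the call chain before the stack is exhausted (the factorial function is illustrative):

```java
public class RecursionDemo {
    // Base case (n <= 1) stops the recursion; without it,
    // the call chain would grow until a StackOverflowError.
    static long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```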

61. What is OutOfMemoryError ?


An OutOfMemoryError in Java occurs when the Java Virtual Machine (JVM) runs
out of memory. This error is typically thrown when the JVM tries to allocate

memory for an object or array but is unable to do so because the available
memory has been exhausted.
Causes of OutOfMemoryError:
Memory Leaks:
⦁ If objects are no longer needed but still referenced, they won’t be garbage
collected, causing a gradual increase in memory usage.
Large Object Creation:
⦁ Trying to load large data sets or create excessively large objects/arrays can
cause an OutOfMemoryError if the heap space is too small.
Improper JVM Settings:
⦁ If the heap size or other memory settings are configured incorrectly, it might
lead to memory exhaustion.
Too Many Threads:
⦁ Each thread consumes memory (in stack space and other overhead), and if
there are too many threads, the memory may run out.
How to Resolve OutOfMemoryError:
⦁ Increase Heap Size:
⦁ Fix Memory Leaks:
⦁ Optimize Object Creation:
⦁ Use Weak References:
⦁ Optimize Threads:

62. What is Cloning and why do we need it ?


1. Cloning in Java
cloning is the process of creating an exact copy of an object. The Cloneable
interface and the Object.clone() method are used to achieve cloning. There
are two types of cloning: shallow cloning and deep cloning.
2. Shallow Copy vs. Deep Copy:
Shallow Cloning: Copies the object but references to nested objects are
shared. Changes to the nested objects affect both the original and the cloned
object.
Example: person1 and person2 share the same Address object.
class Address {
    String city;
    Address(String city) {
        this.city = city;
    }
}

class Person implements Cloneable {
    String name;
    Address address;
    Person(String name, Address address) {
        this.name = name;
        this.address = address;
    }

    // Shallow copy method
    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone(); // Default shallow copy
    }
}
public class ShallowCloningExample {
    public static void main(String[] args) throws CloneNotSupportedException {
        Address address = new Address("New York");
        Person person1 = new Person("John", address);

        // Shallow cloning
        Person person2 = (Person) person1.clone();

        // Both person1 and person2 share the same Address object
        System.out.println(person1.address.city); // Output: New York
        System.out.println(person2.address.city); // Output: New York

        // Change city in person2's address
        person2.address.city = "Los Angeles";

        // Both reflect the change because they share the same Address object
        System.out.println(person1.address.city); // Output: Los Angeles
        System.out.println(person2.address.city); // Output: Los Angeles
    }
}
Deep Cloning: Copies the object and all nested objects, ensuring that the
original and cloned objects are completely independent.
Example: person1 and person2 have separate Address objects.
class Address implements Cloneable {
    String city;
    Address(String city) {
        this.city = city;
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone(); // Clone of the Address itself
    }
}

class Person implements Cloneable {
    String name;
    Address address;
    Person(String name, Address address) {
        this.name = name;
        this.address = address;
    }

    // Deep copy method
    @Override
    protected Object clone() throws CloneNotSupportedException {
        Person cloned = (Person) super.clone();
        cloned.address = (Address) address.clone(); // Deep copy of the Address object
        return cloned;
    }
}

public class DeepCloningExample {
    public static void main(String[] args) throws CloneNotSupportedException {
        Address address = new Address("New York");
        Person person1 = new Person("John", address);

        // Deep cloning
        Person person2 = (Person) person1.clone();

        // person1 and person2 have different Address objects
        System.out.println(person1.address.city); // Output: New York
        System.out.println(person2.address.city); // Output: New York

        // Change city in person2's address
        person2.address.city = "Los Angeles";

        // person1 and person2 are independent of each other
        System.out.println(person1.address.city); // Output: New York
        System.out.println(person2.address.city); // Output: Los Angeles
    }
}

63. What is Serialization and De-Serialization ?

63.1. Serialization :-
⦁ Serialization is the process of converting an object’s state to a byte stream.
This byte stream can then be saved to a file, sent over a network, or stored in
a database. The byte stream represents the object’s state, which can later be
reconstructed to create a new copy of the object.
⦁ Serialization allows us to save the data associated with an object and
recreate the object in a new location.
⦁ The ObjectOutputStream class contains writeObject() method for serializing
an Object.

# Serialization Formats :-
⦁ Many different formats can be used for serialization, such as JSON, XML, and
binary. JSON and XML are popular formats for serialization because they are
human-readable and can be easily parsed by other systems. Binary formats
are often used for performance reasons, as they’re typically faster to read
and write than text-based formats.

63.2. Deserialization :-
Deserialization is the reverse process of serialization. It involves taking a byte
stream and converting it back into an object. This is done using the appropriate
tools to parse the byte stream and create a new object.

In Java, the readObject() method of ObjectInputStream class can be used to

deserialize a binary format, and the Jackson library can be used to parse a JSON
format.

64. Points to remember :-


1. If a parent class has implemented Serializable interface then child class doesn’t
need to implement it but vice-versa is not true.
Q. What happens when a class is serializable, but its superclass is not?
1. Serialization: At the time of serialization, if any instance variable inherits
from the non-serializable superclass, then JVM ignores the original value of
that instance variable and saves the default value to the file.
2. De-Serialization: At the time of de-serialization, if any non-serializable
superclass is present, then JVM will execute instance control flow in the
superclass. To execute instance control flow in a class, JVM will always invoke
the default(no-arg) constructor of that class.

2. Only non-static data members are saved via Serialization process.


3. Static data members and transient data members are not saved via Serialization
process. So, if you don’t want to save value of a non-static data member then
make it transient.
4. Constructor of object is never called when an object is deserialized.
5. Associated objects must be implementing Serializable interface.

65. Why is custom serialization needed ?


During serialization, there may be data loss if we use the ‘transient’ keyword.
‘Transient’ keyword is used on the variables which we don’t want to serialize. But
sometimes, it is needed to serialize them in a different manner than the default
serialization (such as encrypting before serializing etc.), in that case, we have to
use custom serialization and deserialization.

For example, suppose an Account object holds a username and a password, and
the password variable is declared transient. Before serialization the Account
object provides both the username and the password, but after deserialization it
provides only the username and not the password, because the transient
password field was skipped.

Hence during default serialization, there may be a chance of loss of information


because of the transient keyword. To recover this loss, we will have to use
Customized Serialization.

Customized serialization can be implemented using the following two methods:


⦁ private void writeObject(ObjectOutputStream oos) throws Exception: This
method will be executed automatically by the JVM (also known as a callback
method) at the time of serialization. Hence, to perform any activity during
serialization, it must be defined only in this method.

⦁ private void readObject(ObjectInputStream ois) throws Exception: This
method will be executed automatically by the JVM (also known as a callback
method) at the time of deserialization. Hence, to perform any activity during
deserialization, it must be defined only in this method.

Note: While performing object serialization, we have to define the above two
methods in that class.

66. What is Serialization and Externalization ?
1. A serializable interface is used to implement serialization. An externalizable
interface used to implement Externalization.

2. Serializable is a marker interface i.e. it does not contain any method. The
externalizable interface is not a marker interface and thus it defines two
methods writeExternal() and readExternal().

3. The Serializable interface passes the responsibility of serialization to the JVM;
the programmer has no control over serialization, and a default algorithm is
used. The Externalizable interface gives all serialization responsibilities to the
programmer, and hence the JVM has no control over serialization.

4. Using the Serializable interface we save the total object to a file, and it is not
possible to save only part of the object. With Externalizable, based on our
requirements we can save either the total object or part of the object.

67. What is the serialVersionUID?


When you save (serialize) an object to a file or send it over a network, Java

converts the object into a series of bytes. Later, when you want to read
(deserialize) that object back, Java needs to ensure that the class definition hasn't
changed. The serialVersionUID helps with this check.
If the serialVersionUID matches during deserialization, Java knows that the class
is compatible, and the object can be safely deserialized.
If the serialVersionUID doesn't match, Java throws an InvalidClassException,
indicating that the class has changed in a way that makes the object incompatible
with its previous version.
Declaration: You can declare your own serialVersionUID in a class like this:
private static final long serialVersionUID = 12345L;
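A minimal round-trip sketch: serialize an object to an in-memory byte stream and read it back; deserialization succeeds because the serialVersionUID matches (the User class is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialDemo {
    static class User implements Serializable {
        private static final long serialVersionUID = 1L; // version check on deserialization
        String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize to an in-memory byte stream
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(new User("Ashok"));
        }
        // Deserialize: succeeds because serialVersionUID matches
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            User copy = (User) ois.readObject();
            System.out.println(copy.name); // Ashok
        }
    }
}
```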

68. 'transient' keyword in Java


In Java, the transient keyword is used to indicate that a field should not be
serialized when an object is converted into a byte stream. Serialization is the
process of converting an object's state to a format that can be stored or
transmitted and later reconstructed. By marking a field as transient, you tell the
Java serialization mechanism to skip this field when serializing the object.
NOTE:- To prevent sensitive data such as passwords or personal information
from being serialized and potentially exposed.
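A minimal sketch of the effect: after a serialize/deserialize round trip, the transient field comes back as its default value, null (the Account class is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    static class Account implements Serializable {
        String username;
        transient String password; // skipped during serialization
        Account(String u, String p) { username = u; password = p; }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(new Account("ashok", "secret"));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Account copy = (Account) ois.readObject();
            System.out.println(copy.username); // ashok
            System.out.println(copy.password); // null -- transient field not restored
        }
    }
}
```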

69. 'volatile' keyword in Java


When a variable is declared as volatile, it ensures that any read or write operation
on that variable is directly done on the main memory, not on the thread's local
cache.

This guarantees that a read operation always sees the most recent write operation
by any thread.

70. 'static' Keyword in Serialization


Fields marked as static are not included in the serialization process. This is because
static fields belong to the class itself rather than to any individual instance of the
class.
NOTE:- If you want to skip serialization for both instance and static fields, you can
use the transient keyword. A "static transient" field will also not be serialized.

71. What are Generics in java ?
⦁ Using Generics, we can write our classes / variable / methods which are
independent of data type
⦁ Generics are used to achieve type safety
⦁ Note: Before Generics was introduced, generalized classes, interfaces or
methods were created using references of type Object because Object is the
super class of all classes in Java, but this way of programming did not ensure
type safety
⦁ Note: This is also known as Diamond Notation of creating an object of
Generic type.

72. Daemon Thread


-> A daemon thread is a low-priority thread that provides support to user
threads. Daemon threads can be user-defined as well as system-defined.
-> The garbage collection thread is one of the system-generated daemon threads
that run in the background.
-> When only daemon threads remain, the JVM terminates them and then shuts
itself down; it does not care whether a daemon thread is still running or not.
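A user-defined daemon thread can be sketched as follows; note that setDaemon(true) must be called before start():

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(100);   // periodic background work
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        t.setDaemon(true);   // calling this after start() throws IllegalThreadStateException
        t.start();
        System.out.println("Daemon? " + t.isDaemon());
        // main (a user thread) ends here; the JVM exits even though t is still sleeping
    }
}
```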

73. Garbage Collection in Java


-> The garbage collector in Java is solely responsible for deleting unused /
unreferenced objects.

-> In other languages, such as C or C++, the programmer is solely responsible for
creating and deleting objects. This may result in memory depletion if the
programmer forgets to dereference the objects.

-> In Java, programmers do not have to work on this. The JVM automatically
destroys objects which have lost their reference.

74. Now, what do we mean when we say “lost their reference”?


Let us say we have a class called Human which has a constructor Human(String
name). This constructor initializes the current value of the string to the class
variable name.
Now if we create an object of the class Human as follows.
Human object = new Human("Ashok");
The JVM creates a reference named "object" and points it to the data "Ashok".
Now if I write
object = null;

The pointer to the value “Ashok” is now nullified. I cannot access the value
anymore as there was only one reference pointing to it. This is unreachability of
objects in memory.
This object of human named “Ashok” is now eligible for garbage collection.
An object is eligible for garbage collection if there are no references to it in the
heap memory.
There are a few ways to make an object eligible for garbage collection. They are:
⦁ You can nullify the reference variable.
⦁ You can reassign the reference variable to a different object.
⦁ All objects created inside a method lose their reference outside the method
and are thus eligible for garbage collection.
⦁ Using Island of Isolation.

75. What is Island of Isolation


When two objects ‘a’, and ‘b’ reference each other, and they are not referenced by
any other object, it is known as island of isolation.
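The situation can be sketched in a few lines (class and field names are illustrative):

```java
public class IslandDemo {
    static class Node {
        Node partner;   // each node holds a reference to the other
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.partner = b;
        b.partner = a;   // a and b now reference each other

        a = null;
        b = null;
        // The two Node objects still reference each other, but nothing reachable
        // from the program references them: an island of isolation, eligible for GC.
        System.gc();     // a request, not a guarantee, that the collector runs
    }
}
```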

76. finalize() in Java


The finalize() method is called by the Garbage Collector when there are no more
references to the object. Thus, finalize() is called just before an object is garbage
collected.

77. Finalization in Java


Just before the garbage collector reclaims an object, the JVM executes the
object's finalize() method.

This method generally contains actions which the JVM performs just before the
object gets deleted.

The Object class contains the finalize() method. Override the finalize() method
in the class whose objects will be garbage collected if such clean-up actions
are needed.

The finalize() method declares that it throws Throwable.

78. Methods for calling the Garbage Collector in Java


There are 2 ways to call the garbage collector in java
1. We can use the Runtime.getRuntime().gc() method- This class allows the
program to interface with the Java Virtual machine. The “gc()” method allows
us to call the garbage collector method.
2. We can also use the System.gc() method which is common.
Note: However, there is no guarantee that these calls will actually run the
garbage collector; they are only a request, and the JVM decides when to perform
the clean-up.

79. How Garbage Collection works internally


In java, Garbage Collection works in 3 phases

Phase-1 : Mark objects as alive


⦁ In this step, the GC identifies all the live objects in memory by traversing the
object graph.

⦁ When GC visits an object, it marks it as accessible and thus alive. Every object
the garbage collector visits is marked as alive. All the objects which are not
reachable from GC Roots are garbage and considered as candidates for
garbage collection.

Phase-2 : Sweep dead objects


⦁ After marking phase, we have the memory space which is occupied by live
(visited) and dead (unvisited) objects. The sweep phase releases the memory
fragments which contain these dead objects.

Phase-3 : Compact remaining objects in memory


⦁ The dead objects that were removed during the sweep phase may not
necessarily be next to each other. Thus, you can end up having fragmented
memory space.

⦁ Memory can be compacted after the garbage collector deletes the dead
objects, so that the remaining objects are in a contiguous block at the start of
the heap.

80. What is the difference between extending Thread class and
implementing Runnable interface?


If we create a thread by extending the Thread class, then we have no chance of
extending any other class, because Java does not support multiple inheritance
of classes.

But if we create a thread by implementing the Runnable interface, then we still
have the chance to extend one other class.

It is therefore always recommended to create user-defined threads by
implementing the Runnable interface.

81. What is the difference between calling t.run() and t.start() ?


We can call the run() method directly, but then no thread is created or
registered with the thread scheduler; run() executes like a normal method on the
calling (main) thread.

But if we call the start() method, the thread is registered with the thread
scheduler, and the scheduler then calls run() on the new thread.
class MyThread implements Runnable {
    public void run() {
        Thread t = Thread.currentThread();
        for (int i = 1; i <= 5; i++) {
            System.out.println(t.getName() + " Thread Value:" + i);
        }
    }

    public static void main(String args[]) {
        MyThread mt = new MyThread();
        Thread t = new Thread(mt);
        t.start();
        //t.run();
    }
}

Note: When we call the start() method, a new thread is created and the output
is printed like below
Thread-0 Thread Value:1
Thread-0 Thread Value:2
Thread-0 Thread Value:3
Thread-0 Thread Value:4
Thread-0 Thread Value:5
Note: When we call the run() method, no new thread is created and the output
is printed like below by the main thread
main Thread Value:1
main Thread Value:2
main Thread Value:3
main Thread Value:4
main Thread Value:5

82. Which is more preferred - Synchronized method or Synchronized block?


In Java, the synchronized keyword causes a performance cost.
A synchronized method locks the entire method and can degrade performance, so
we should use synchronization only when it is necessary; otherwise, we should
prefer a synchronized block, which synchronizes only the critical section.
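A synchronized block guarding only the critical section can be sketched like this (names are illustrative):

```java
public class CounterDemo {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        // Only the critical section is synchronized, not the whole method
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(c.getCount()); // always 20000; without the lock, updates could be lost
    }
}
```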

83. Deadlock
When multiple threads act on the same synchronized objects simultaneously,
another problem may occur, called deadlock.

Deadlock may occur if one thread is holding resource1 and waiting for resource2
to be released by Thread-2, while at the same time Thread-2 is holding
resource2 and waiting for resource1 to be released by Thread-1. In this case
both threads wait forever and neither can proceed; this situation is called
deadlock.

There is no built-in concept in Java to resolve a deadlock situation; the
programmer alone is responsible for writing proper logic (for example,
acquiring locks in a consistent order) to avoid deadlock.
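One standard piece of "proper logic" is to acquire locks in a consistent order in every thread; this sketch (resource names are illustrative) cannot deadlock because the circular wait is impossible:

```java
public class LockOrderDemo {
    private static final Object RESOURCE_1 = new Object();
    private static final Object RESOURCE_2 = new Object();

    // Every thread takes the locks in the same order (1 then 2),
    // so no thread can hold 2 while waiting for 1.
    static void doWork(String name) {
        synchronized (RESOURCE_1) {
            synchronized (RESOURCE_2) {
                System.out.println(name + " holds both resources");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("Thread-1"));
        Thread t2 = new Thread(() -> doWork("Thread-2"));
        t1.start(); t2.start();
        t1.join();  t2.join();   // completes; reversing the lock order in one thread risks deadlock
    }
}
```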

84. What is race condition?

A race condition occurs when two or more threads access shared resources
concurrently, and the final outcome depends on the timing or order of their
execution. This leads to unpredictable and inconsistent behavior.
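One way to fix a race on a shared counter is an atomic variable; this sketch (names are illustrative) replaces the non-atomic `count++` read-modify-write with a single atomic operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceFixDemo {
    // count++ on a plain int is three steps (read, add, write) and not atomic;
    // AtomicInteger performs the whole update atomically.
    private static final AtomicInteger count = new AtomicInteger();

    static int value() { return count.get(); }

    public static void main(String[] args) throws InterruptedException {
        count.set(0);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) count.incrementAndGet();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(value()); // always 20000; with a plain int, updates could be lost
    }
}
```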

85. Methods of Object class which are related to threads


1. wait(): This method is used to make the current thread wait until it gets a
notification.

2. notify(): This method is used to send a notification to one of the waiting
threads so that the thread enters the runnable state and executes its remaining
task.

3. notifyAll(): This method is used to send a notification to all the waiting
threads so that all of them enter the runnable state and compete to execute.

-> All these 3 methods are available in the Object class, the super-most class,
so we can access them in any class directly without any reference. They must be
called from within a synchronized context.

-> These methods are mainly used to perform inter-thread communication
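A minimal inter-thread communication sketch (class names are illustrative); the while-loop guard makes it safe even if put() runs before take(), and also protects against spurious wakeups:

```java
public class MailboxDemo {
    static class Mailbox {
        private String message;

        // wait() releases the monitor and suspends until another thread calls notify()
        synchronized String take() throws InterruptedException {
            while (message == null) {   // guard condition, re-checked after each wakeup
                wait();
            }
            String m = message;
            message = null;
            return m;
        }

        synchronized void put(String m) {
            message = m;
            notify();                   // wakes one thread waiting on this object's monitor
        }
    }

    public static void main(String[] args) throws Exception {
        Mailbox box = new Mailbox();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("Received: " + box.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        box.put("hello");
        consumer.join();
    }
}
```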

86. Life Cycle of a Thread


New: A thread begins its life cycle in the new state. The thread remains in the
new state until we call the start() method.

When we call the start() method, the Thread Scheduler will start its operation:
1) Allocating resources
2) Thread scheduling
3) Thread execution by calling the run() method

Runnable : After calling the start() method, a thread moves from the new state
to the runnable state.

Running : A thread comes to the running state when the Thread Scheduler picks
it up for execution.

Blocked : A thread is in the blocked/waiting state if it waits for another
thread to complete its task.

Terminated : A thread enters into terminated state once it completes its task.

87. join ( ) method, yield ( ) method


join ( ) :- The join() method is used to hold the second thread's execution
until the first thread's execution has completed.
t1.start();
t1.join();
t2.start();

yield ( ) :- The yield() method is used to give other threads of equal priority
a chance to execute.

class Producer extends Thread {

    public void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println("Producer : Produced Item " + i);
            Thread.yield();
        }
    }
}

88. What will happen if a synchronized method is called by two threads on
different object instances simultaneously?

When two threads call a synchronized method on different object instances
simultaneously, the synchronized keyword will not prevent them from executing
the method concurrently. This is because synchronization in Java works on the
basis of object-level locks.

Synchronization only ensures thread-safety for the same object instance.

To synchronize access across all instances, use static synchronized methods or an


external shared lock (e.g., synchronized(SomeClass.class) for class-level locking).
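Class-level locking can be sketched as follows (names are illustrative); even though two different instances call increment(), both threads contend for the single lock on the Class object:

```java
public class SharedLockDemo {
    private static int total = 0;

    // An instance-level synchronized method would lock 'this', so two different
    // instances could run concurrently. Locking the Class object serializes
    // the critical section across ALL instances.
    public void increment() {
        synchronized (SharedLockDemo.class) {
            total++;
        }
    }

    public static int total() { return total; }

    public static void main(String[] args) throws InterruptedException {
        total = 0;
        SharedLockDemo a = new SharedLockDemo();
        SharedLockDemo b = new SharedLockDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) a.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) b.increment(); });
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(total()); // always 20000 despite two different instances
    }
}
```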

89. Explain features of Spring Data JPA?


Spring Data JPA offers features such as automatic repository creation, query
method generation, pagination support, and support for custom queries. It
provides a set of powerful CRUD methods out-of-the-box, simplifies the
implementation of JPA repositories, and supports integration with other Spring
projects like Spring Boot and Spring MVC.

90. What is Spring Data JPA?


Spring Data JPA is part of the Spring Data project, which aims to simplify data
access in Spring-based applications. It provides a layer of abstraction on top of JPA
(Java Persistence API) to reduce boilerplate code and simplify database operations,
allowing developers to focus more on business logic rather than database
interaction details.

91. Difference between findById() and getOne().


findById() returns an Optional containing the entity with the given ID,
fetching it from the database immediately. getOne() returns a lazy proxy for
the entity with the given ID; if the entity does not exist, an
EntityNotFoundException is thrown when the proxy's state is first accessed. In
recent Spring Data JPA versions, getOne() is deprecated in favour of
getReferenceById().

92. Use of @Temporal annotation.


The @Temporal annotation is used to specify the type of temporal data (date,
time, or timestamp) to be stored in a database column. It is typically applied to
fields of type java.util.Date or java.util.Calendar to specify whether they should be
treated as DATE, TIME, or TIMESTAMP.

93. Write a query method for sorting in Spring Data JPA.


We can specify sorting in query methods by adding the OrderBy keyword followed
by the entity attribute and the sorting direction (ASC or DESC). For example:
List<User> findByOrderByLastNameAsc();

94. Explain @Transactional annotation in Spring.


It ensures that the annotated method runs within a transaction context, allowing
multiple database operations to be treated as a single atomic unit. If an exception
occurs, the transaction will be rolled back, reverting all changes made within the
transaction.
A. Propagation in @Transactional
Propagation defines how transaction boundaries are handled when calling another
transactional method.

B. Isolation in @Transactional
Isolation defines how transactions interact with each other, mainly dealing with
concurrency issues.
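As a non-runnable sketch (the service, repository, and entity names are illustrative, not from the source), a method combining propagation and isolation settings might look like:

```java
@Service
public class TransferService {

    private final AccountRepository accountRepository; // hypothetical Spring Data repository

    public TransferService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    // REQUIRED (the default) joins the caller's transaction or starts a new one;
    // READ_COMMITTED prevents dirty reads by concurrent transactions.
    @Transactional(propagation = Propagation.REQUIRED,
                   isolation = Isolation.READ_COMMITTED)
    public void transfer(Long fromId, Long toId, BigDecimal amount) {
        Account from = accountRepository.findById(fromId).orElseThrow();
        Account to = accountRepository.findById(toId).orElseThrow();
        from.setBalance(from.getBalance().subtract(amount));
        to.setBalance(to.getBalance().add(amount));
        // If any statement in this method throws a RuntimeException,
        // both updates roll back together.
        accountRepository.save(from);
        accountRepository.save(to);
    }
}
```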

95. What is the difference between FetchType.Eager and FetchType.Lazy?

FetchType.Eager specifies that the related entities should be fetched eagerly along
with the main entity, potentially leading to performance issues due to loading
unnecessary data. FetchType.Lazy specifies that the related entities should be
fetched lazily on demand, improving performance by loading them only when
needed.

96. What are the rules to follow to declare custom methods in Repository?


Custom methods in a repository interface must follow a specific naming
convention to be automatically implemented by Spring Data JPA. The method
name should start with a prefix such as findBy, deleteBy, or countBy, followed by
the property names of the entity and optional keywords like And, Or, OrderBy, etc.

97. What is pagination and how to implement pagination in spring data?

Pagination is a technique used to divide large result sets into smaller, manageable
chunks called pages. In Spring Data, pagination can be implemented using
Pageable as a method parameter in repository query methods. Spring Data
automatically handles the pagination details, allowing you to specify the page
number, page size, sorting, etc.

159. How can you implement pagination in a springboot application?


To implement pagination in a Spring Boot application, I use Spring Data JPA's
Pageable interface.

In the repository layer, I modify my query methods to accept a Pageable object
as a parameter. When calling these methods from my service layer, I create an
instance of PageRequest, specifying the page number and page size I want.

This PageRequest is then passed to the repository method. Spring Data JPA
handles the pagination logic automatically, returning a Page object that contains
the requested page of data along with useful information like total pages and total
elements. This approach allows me to efficiently manage large datasets by
retrieving only a subset of data at a time.

98. Explain few CrudRepository methods.


Some commonly used methods in CrudRepository include save() to save or update
entities, findById() to find entities by their primary key, deleteById() to delete
entities by their primary key, findAll() to retrieve all entities, and count() to count
the number of entities.

99. Explain these term Hibernate,JPA and ORM


Hibernate is a specific implementation of the Java Persistence API (JPA). JPA is a
specification for ORM, while Hibernate provides one of the implementations that
maintain the JPA specification.

Hibernate: Is an ORM framework that maps Java objects to relational database
tables.

JPA: Java Persistence API, Is a Specification which provide standard API to persist
Java objects into relational databases.

ORM: Object-Relational Mapping is a technique that connects object-oriented
programming languages to relational databases.

Using these three together, you can easily store, retrieve, update, and delete
your Java objects in the database without writing SQL queries manually.

100. What are the core components of Hibernate?


Core components of Hibernate include SessionFactory, Session, Transaction,
ConnectionProvider, and TransactionFactory. These components are fundamental
in performing database operations through Hibernate framework.

101. Explain the role of the SessionFactory in Hibernate.


SessionFactory is a factory class used to create Session objects. It is a heavyweight
object meant to be created once per datasource or per database. It is used to open
new sessions for interacting with the database.

102. What is a Session in Hibernate?


A Session in Hibernate is a single-threaded, short-lived object representing a
conversation between the application and the database. It acts as a staging area
for changes to be persisted in the database.

103. How does Hibernate manage transactions?


Hibernate manages transactions via its Transaction interface. Transactions in
Hibernate are handled through a combination of the Java Transaction API (JTA)
and JDBC. Hibernate integrates with the transaction management mechanism of
the underlying platform.

104. What are the differences between get() and load() methods in
Hibernate?
The get() method in Hibernate retrieves the object if it exists in the database;
otherwise, it returns null. The load() method also retrieves the object, but if it
doesn’t exist, it throws an ObjectNotFoundException. load() can use a proxy to
fetch the data lazily.

105. What is the N+1 SELECT problem in Hibernate? How can it be prevented?

The N+1 SELECT problem in Hibernate occurs when an application makes one
query to retrieve N parent records and then makes N additional queries to retrieve
related child objects. It can be prevented using strategies like join fetching, batch
fetching, or subselect fetching to minimize the number of queries executed.

106. How to avoid N+1 problem ?


We can use the @EntityGraph annotation to avoid it.

@EntityGraph :- In Spring Data JPA, @EntityGraph is used to define a graph of


entities to be eagerly fetched in a query. It allows you to specify which related
entities should be loaded in a single query, helping to avoid the "N+1 query
problem" by specifying relationships to be fetched in a single fetch operation.

How to Use @EntityGraph:


Defining the Entity Graph: It can be applied on a repository method to load
related entities or attributes.
@EntityGraph(attributePaths = {"rooms"})
List<Hotel> findAll(); // Rooms will be eagerly fetched along with Hotels.

107. Explain the role of the @Entity annotation in Hibernate.


The @Entity annotation in Hibernate is used to mark a class as an entity, which
means it is a mapped object and its instance can be persisted to the database.

108. What is cascading in Hibernate?


Cascading in Hibernate is the ability to propagate the operations from a parent
entity to its associated child entities. It is used to manage the state transitions of
associated objects automatically. CascadeType can be used to specify which
operations are cascaded.

109. How can you achieve concurrency in Hibernate?


Concurrency in Hibernate can be achieved using versioning and locking
mechanisms. Hibernate supports optimistic and pessimistic locking strategies to
handle concurrent modifications of data effectively.

110. What is an optimistic locking in Hibernate?


Optimistic locking in Hibernate is a technique to ensure that a record is not
updated by more than one transaction at the same time by using a version field in
the database table. It checks the version of a record at the time of fetching and
before committing an update to ensure consistency.
Example:- flight booking, hotel booking, train ticket booking
Preferred when:
1. Concurrency is low.
2. Conflicts are rare (low-conflict scenario).
3. High-performance systems where constant locking could degrade database
performance.

Drawback:
1. In high concurrency scenarios, frequent retries due to version number
updates can lead to significant performance degradation.
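A minimal sketch of optimistic locking with JPA's @Version annotation (the entity and field names are illustrative):

```java
@Entity
public class SeatInventory {

    @Id
    @GeneratedValue
    private Long id;

    private int availableSeats;

    // Hibernate appends "WHERE version = ?" to every UPDATE for this entity.
    // If another transaction committed first, zero rows match and an
    // OptimisticLockException is thrown, so the caller can retry.
    @Version
    private Long version;

    // getters and setters omitted
}
```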

111. What is an Pessimistic locking in Hibernate?


Pessimistic locking in Hibernate is a technique to prevent multiple transactions
from concurrently modifying the same record by locking the record in the
database.
When a record is accessed, a lock is applied, ensuring that no other transaction can
update or delete the record until the current transaction completes, guaranteeing
data consistency.
It is particularly useful in high-conflict scenarios where multiple transactions are
likely to compete for the same data.
Example:- Movies Ticket Booking
Preferred when:
1. Data contention is high.
2. Consistency is more critical than performance.

Drawback:

1. Can lead to reduced performance due to blocking or waiting.

112. Different States of Hibernate


112.1. Transient State
⦁ Definition: An object is in the transient state when it is instantiated but not
associated with any Hibernate session and not saved to the database.
⦁ Characteristics:
⦁ Not associated with any database row.
⦁ Changes to the object are not tracked or persisted by Hibernate.

112.2. Persistent State


⦁ Definition: An object is in the persistent state when it is associated with a
Hibernate session and its lifecycle is managed by the session.
⦁ Characteristics:
⦁ Mapped to a database row.
⦁ Changes to the object are automatically tracked and synchronized with
the database when the session is flushed.
112.3. Detached State
⦁ Definition: An object is in the detached state when it was previously
associated with a Hibernate session (and possibly persisted) but is no longer
associated with any session.
⦁ Characteristics:
⦁ Changes to the object are not tracked by Hibernate unless reattached to a
session.
⦁ Can be reattached to a new session to become persistent again.

112.4. Removed State


⦁ Definition: An object is in the removed state when it is marked for deletion
from the database within a session.
⦁ Characteristics:
⦁ The object will be deleted from the database upon session flush or
transaction commit.

⦁ It remains in the session but is scheduled for removal.

113. How to Implement Second-Level Cache in Spring Boot with Hibernate


The Hibernate second-level cache is an application-wide cache that shares data
across multiple sessions. It improves application performance because the
database does not have to be hit again and again. This cache can store
entities, collections, and query results.

Step 1: Add Dependencies { hibernate-ehcache, ehcache }

Step 2: Configure Hibernate Caching {

spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.region.factory_class=org.hibernate.cache.jcache.JCacheRegionFactory
spring.jpa.properties.javax.cache.provider=org.ehcache.jsr107.EhcacheCachingProvider
spring.jpa.properties.javax.cache.uri=classpath:ehcache.xml
spring.jpa.properties.hibernate.cache.use_query_cache=true
}

Step 3: Create Ehcache Configuration

Step 4: Annotate Entities (annotate the entities you want cached):


@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class MyEntity {
    @Id
    private Long id;
    private String name;
    // getters and setters
}

Step 5: Enable Caching in Spring Boot Application


@SpringBootApplication
@EnableCaching
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
Step 6: Use the Cache in Your Repositories

Your repository interfaces can be used as usual, and the second-level cache
will automatically work to cache entities and queries.

114. What is Dialect


A dialect is the way SQL queries are written for a specific database. In
Spring Boot and Hibernate, the dialect tells Hibernate which database is being
used so that Hibernate can generate the correct SQL queries for that database.
For example,

1. If you are using MySQL, you have to tell Hibernate to use MySQLDialect:
spring.jpa.properties.hibernate.dialect =
org.hibernate.dialect.MySQL5Dialect

2. Similarly, if you are using PostgreSQL, you have to specify
PostgreSQLDialect: spring.jpa.properties.hibernate.dialect =
org.hibernate.dialect.PostgreSQLDialect

115. Difference between PATCH and PUT?


PATCH and PUT are both HTTP methods used for updating resources on a server,
but they have different semantics:
PUT (Update):
⦁ The PUT method is used to update an existing resource or create a new one
if it doesn't exist.
⦁ When we make a PUT request, we have to send the entire updated
representation of the resource to the server.
⦁ If the resource exists, the server replaces it with the new representation sent
in the request.
⦁ If the resource doesn't exist, the server typically creates it with the provided
representation.
PATCH (Partial Update):
⦁ The PATCH method is used to partially update an existing resource.

⦁ When you make a PATCH request, you send only the parts of the resource
that you want to update, rather than the entire representation.
⦁ The server applies the partial update to the resource, modifying only the
specified fields or properties.
⦁ PATCH is useful when you want to make small changes to a resource without
having to send the entire representation, which can be more efficient in
some cases.
In summary, PUT is used for full updates, while PATCH is used for partial updates.
The choice between PUT and PATCH depends on the specific use case and the
desired behavior for updating the resource.

116. What is Idempotent Method


Idempotent Methods: These are methods that can be called multiple times
without different outcomes. Common idempotent HTTP methods include:
GET: Fetches a resource. Multiple identical requests will result in the same
response and no side effects.

PUT: Replaces a resource. Multiple identical requests will result in the resource
being updated to the same state.

DELETE: Removes a resource. Multiple identical requests will result in the resource
being deleted (if it exists), and subsequent requests will have no additional effect.

HEAD: Similar to GET but without the response body. Multiple identical requests
will yield the same metadata.

OPTIONS: Returns the supported HTTP methods. Multiple identical requests will
result in the same response.

Non-Idempotent Methods: POST & PATCH

POST: Typically used to create a resource. Multiple identical POST requests can
result in multiple resources being created, which means the outcome can change
with each request.

PATCH: Not guaranteed to be idempotent, because a patch may be applied relative
to the current state of the resource.

# Achieving Idempotency in POST Requests:

⦁ Idempotency is crucial for building fault-tolerant APIs.
⦁ To prevent issues like duplicate payments due to network failures or
timeouts, you can make POST requests idempotent by using an
Idempotency-Key.
⦁ The server checks if the Idempotency-Key exists in the request headers:
⦁ If the key is found, the server returns the cached response, avoiding
duplicate processing.
⦁ If not, the server processes the request and stores the response associated
with that key.

Implementation Notes: The Idempotency-Key can be stored in any storage system,
and you may set an expiration time for the key, for example 24 hours.

Conclusion: Idempotency is essential for creating reliable APIs, particularly in


payment-related scenarios, to avoid duplicate actions.
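The server-side check described above can be sketched with an in-memory store (the class, key, and response values are illustrative; production systems would use shared storage such as Redis with a key expiry):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

public class IdempotencyStore {
    // Maps Idempotency-Key -> cached response
    private final ConcurrentMap<String, String> responses = new ConcurrentHashMap<>();

    private int executions = 0; // how many times the real operation actually ran

    // Runs the operation only if this key has not been seen before;
    // otherwise returns the cached response without reprocessing.
    public synchronized String process(String idempotencyKey, Supplier<String> operation) {
        return responses.computeIfAbsent(idempotencyKey, k -> {
            executions++;
            return operation.get();
        });
    }

    public int executions() { return executions; }

    public static void main(String[] args) {
        IdempotencyStore store = new IdempotencyStore();
        // The client retries the same payment with the same Idempotency-Key
        String first = store.process("key-123", () -> "payment-id-42");
        String retry = store.process("key-123", () -> "payment-id-99");
        System.out.println(first + " / " + retry + " / ran " + store.executions() + " time(s)");
        // prints: payment-id-42 / payment-id-42 / ran 1 time(s)
    }
}
```

The duplicate request returns the original response and the payment logic runs only once.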

117. Database Indexing


⦁ Indexing improves database performance by minimizing the number of disc
visits required to fulfill a query. It is a data structure technique used to
locate and quickly access data in databases.
⦁ sometimes we need to be able to quickly lookup data that is not stored as a
key. For example, we may need to quickly lookup customers by telephone
number. It would not be a good idea to use a unique constraint because we
can have multiple customers with the same phone number. In these cases,
we can create our own indexes.
# Trade-offs of Using Indexes
⦁ Storage Space: Indexes require additional storage space. The more indexes
you have, the more space you will need.
⦁ Impact on Write Performance: Because indexes need to be updated with
every write operation, they can slow down insert, update, and delete
operations.

# Here are a few rules to help you decide which indexes to create:

⦁ If your record retrievals are based on one field at a time (for example,
dept='D101'), create an index on these fields.
⦁ If your record retrievals are based on a combination of fields, look at the
combinations.
⦁ If the comparison operator for the conditions is AND (for example, CITY =
'Raleigh' AND STATE = 'NC'), then build a concatenated index on the CITY
and STATE fields. This index is also useful for retrieving records based on
the CITY field.
⦁ If the comparison operator is OR (for example, DEPT = 'D101' OR
HIRE_DATE > {01/30/89}), an index does not help performance.
Therefore, you need not create one.
⦁ If the retrieval conditions contain both AND and OR comparison
operators, you can use an index if the OR conditions are grouped. For
example:
dept = 'D101' AND (hire_date > {01/30/89} OR exempt = 1)
⦁ In this case, an index on the DEPT field improves performance.

⦁ If the AND conditions are grouped, an index does not improve performance.
For example:
(dept = 'D101' AND hire_date > {01/30/89}) OR exempt = 1

# Improving join performance


When joining database tables, index tables can greatly improve performance.
Unless the proper indexes are available, queries that use joins can take a long
time.
Assume you have the following Select statement:
SELECT * FROM dept, emp WHERE dept.dept_id = emp.dept_id

In this example, the DEPT and EMP database tables are being joined using the
department ID field. When the driver executes a query that contains a join, it
processes the tables from left to right and uses an index on the second table's join
field (the DEPT field of the EMP table).

To improve join performance, you need an index on the join field of the second
table in the From clause.
If there is a third table in the From clause, the driver also uses an index on the
field in the third table that joins it to any previous table. For example:
SELECT * FROM dept, emp, addr WHERE dept.dept_id = emp.dept AND
emp.loc = addr.loc
In this case, you should have an index on the EMP.DEPT field and the ADDR.LOC
field.
# Example:-
CREATE TABLE Customer (
CustomerID int PRIMARY KEY,
Name varchar(255),
Address varchar(255),
Email varchar(255)
);
Suppose you frequently search for customers by email. Without an index, the
database would have to scan every row to find the customer. You can create an
index on the Email column to speed up these searches:
CREATE INDEX idx_customer_email ON Customer (Email);

Now, when you search for a customer by email:


SELECT * FROM Customer WHERE Email = 'example@example.com';

The database will use the idx_customer_email index to quickly locate the row(s)
that match the email, significantly speeding up the query.

118. How to connect Two Database in spring boot ?


First we have to add the dependencies of both databases, e.g. MySQL and
MongoDB.
Then define the properties of each database in the application.properties or
application.yml file.
Then we have to create a configuration class for each database. In this
configuration class we define the DataSource, EntityManagerFactory, and
TransactionManager. This way we can configure two databases.

⦁ DataSource: Responsible for setting up the connection to the database with
the necessary configuration details.
⦁ EntityManagerFactory: Manages the JPA entities and provides EntityManager
instances to interact with the persistence context.
⦁ TransactionManager: Manages transactions to ensure data consistency and
integrity.
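The configuration class for one of the two databases can be sketched as follows; the package names, the `spring.datasource.primary` property prefix, and the bean names are illustrative assumptions, and a second, analogous class would be written for the other database:

```java
@Configuration
@EnableJpaRepositories(
        basePackages = "com.example.primary.repository",        // hypothetical package
        entityManagerFactoryRef = "primaryEntityManagerFactory",
        transactionManagerRef = "primaryTransactionManager")
public class PrimaryDbConfig {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.primary")       // assumed property prefix
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean primaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(primaryDataSource())
                .packages("com.example.primary.entity")         // hypothetical entity package
                .persistenceUnit("primary")
                .build();
    }

    @Bean
    @Primary
    public PlatformTransactionManager primaryTransactionManager(
            @Qualifier("primaryEntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}
```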

119. How to make multiple Primary Keys ( Composite Key ) in Hibernate?


To make a composite key, we have to create a class with the key fields and
implement the Serializable interface, then mark that class with @Embeddable.
In the entity class, use @EmbeddedId on the field of this type.

1. @Embeddable: Marks a class as embeddable to be used as a composite key.


@Embeddable
public class CompositeKey implements Serializable {

private static final long serialVersionUID = 1L;

private String keyPart1;


private String keyPart2;
}

2. @EmbeddedId: Marks the field in the entity class that represents the
composite key.
@Entity
@Table(name = "your_table_name")
public class YourEntity {

@EmbeddedId
private CompositeKey id;
private String someOtherField;
}

120. How to write native query and custom query in spring data jpa.
In Spring Data JPA, you can write native queries and custom queries using the
@Query annotation.

Native Queries :- Native queries allow you to write raw SQL queries directly,
giving you more control over the database operations.
@Query(value = "SELECT * FROM Customer WHERE email LIKE CONCAT('%', :domain)",
nativeQuery = true)
List<Customer> findByEmailDomain(@Param("domain") String domain);

Custom Queries :- Custom queries are written using JPQL (Java Persistence Query
Language), which is similar to SQL but operates on the entity objects rather than
database tables.
@Query("SELECT c FROM Customer c WHERE c.lastName = :lastName")
List<Customer> findByLastName(@Param("lastName") String lastName);

121. TYPES OF STATEMENT IN SQL


1. DDL (Data Definition Language): DDL is used to define and modify the database
schema. It includes the following commands:
1. CREATE: Used to create tables, views, indexes, and other database objects.

2. ALTER: Used to modify the structure of existing database objects, such as


adding, modifying, or deleting columns in a table.

3. DROP: Used to delete database objects like tables, views, or indexes.

4. TRUNCATE: Clears all data from a table while retaining its structure, often
faster than DELETE.
5. RENAME: Used to change the name of database objects like tables, columns,
or indexes.

DDL commands are used to define or modify the structure of the database and are
primarily used for schema changes.
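As an illustration (table and column names are made up), the five DDL commands might look like this; note that RENAME syntax varies by database:

```sql
-- Create a new table
CREATE TABLE employees (
    id   INT PRIMARY KEY,
    name VARCHAR(100)
);

-- Add a column to the existing table
ALTER TABLE employees ADD COLUMN salary DECIMAL(10, 2);

-- Remove all rows but keep the table structure
TRUNCATE TABLE employees;

-- Rename the table (ALTER TABLE ... RENAME TO in most databases)
ALTER TABLE employees RENAME TO staff;

-- Delete the table entirely
DROP TABLE staff;
```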

2. DML(Data Manipulation Language): DML is used to manipulate the data within


a database. It includes the following commands:
1. INSERT: Used to add new data to a table.
2. SELECT: Used to retrieve data from the database.
3. UPDATE: Used to modify existing data in a table.
4. DELETE: Used to remove data from a table.
DML commands are used to add, retrieve, modify, and delete data within the database.
3. DCL(Data Control Language) : Used to manage permissions and access to
database objects.
1. GRANT : gives specific privileges to users or roles.
2. REVOKE : removes those privileges.

4. TCL (Transaction Control Language): Used to manage database transactions.


1. COMMIT : ensures changes are saved permanently.
2. ROLLBACK : undoes changes since the last COMMIT.
3. SAVEPOINT : allows setting a rollback point within a transaction.
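A sketch of the TCL commands working together (the table name is hypothetical; MySQL uses START TRANSACTION instead of BEGIN):

```sql
BEGIN;                               -- start a transaction
INSERT INTO accounts (id, balance) VALUES (1, 100);
SAVEPOINT after_insert;              -- set a rollback point
UPDATE accounts SET balance = 50 WHERE id = 1;
ROLLBACK TO SAVEPOINT after_insert;  -- undo only the UPDATE
COMMIT;                              -- persist the INSERT permanently
```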

122. What is the ORDER of Execution in SQL


FROM > WHERE > GROUP BY > HAVING > SELECT > ORDER BY > LIMIT
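For example, in the following query (table and column names are illustrative) the clauses are written in one order but evaluated in the order shown in the comments:

```sql
SELECT department, COUNT(*) AS cnt   -- 5. SELECT
FROM employees                       -- 1. FROM
WHERE salary > 30000                 -- 2. WHERE
GROUP BY department                  -- 3. GROUP BY
HAVING COUNT(*) > 5                  -- 4. HAVING
ORDER BY cnt DESC                    -- 6. ORDER BY
LIMIT 10;                            -- 7. LIMIT
```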

123. SQL Constraints :-


1. NOT NULL : Ensures that a column cannot store NULL values.
2. UNIQUE : Ensures all values in a column are distinct (no duplicates).
3. PRIMARY KEY : Combines NOT NULL and UNIQUE to uniquely identify each
row in a table.
4. FOREIGN KEY : Ensures data integrity by creating a relationship between two
tables (referential integrity).
5. CHECK : Ensures that all values in a column satisfy a specific condition.
6. DEFAULT : Provides a default value for a column if no value is specified
during insertion.
7. INDEX : Improves query performance by allowing faster retrieval of rows.
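Most of these constraints can be seen in a single table definition; the following is an illustrative sketch (names are made up, and it assumes a customers table already exists):

```sql
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,              -- NOT NULL + UNIQUE combined
    email       VARCHAR(100) NOT NULL UNIQUE,
    customer_id INT,
    quantity    INT CHECK (quantity > 0),     -- reject non-positive quantities
    status      VARCHAR(20) DEFAULT 'NEW',    -- used when no value is supplied
    FOREIGN KEY (customer_id) REFERENCES customers(id)
);

-- Speeds up lookups that filter on status
CREATE INDEX idx_orders_status ON orders(status);
```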

124. Joins In SQL


1. INNER JOIN : Retrieves only the rows that have matching values in both
tables.
2. LEFT JOIN : Retrieves all rows from the left table and matching rows from
the right table. Non-matching rows from the right table are NULL.
3. RIGHT JOIN : Retrieves all rows from the right table and matching rows from
the left table. Non-matching rows from the left table are NULL.
4. FULL OUTER JOIN : Retrieves all rows when there is a match in either the left or right table. Non-matching rows in either table are filled with NULL.
5. CROSS JOIN : Produces a Cartesian product, combining all rows from both
tables.
6. SELF JOIN : Joins a table to itself, useful for hierarchical or relationship-based
queries.
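A few of these joins sketched on hypothetical employees and departments tables:

```sql
-- INNER JOIN: matching rows only
SELECT e.name, d.dept_name
FROM employees e
INNER JOIN departments d ON e.dept_id = d.id;

-- LEFT JOIN: all employees; dept_name is NULL when there is no match
SELECT e.name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.id;

-- SELF JOIN: each employee paired with their manager
SELECT e.name, m.name AS manager
FROM employees e
JOIN employees m ON e.manager_id = m.id;
```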

125. Difference between UNION and UNION ALL in SQL.


UNION :-
⦁ It removes duplicate rows from the result.
⦁ It is slower because it checks for duplicates and removes them.
⦁ It is used when you want to combine results and remove duplicates.
UNION ALL :-
⦁ It includes all rows, even duplicates.
⦁ It is faster as it does not check for duplicates.
⦁ It is used when you want to combine results without removing duplicates.
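For example (table names are illustrative):

```sql
-- UNION: a city present in both tables appears only once
SELECT city FROM customers_2023
UNION
SELECT city FROM customers_2024;

-- UNION ALL: the same city may appear twice
SELECT city FROM customers_2023
UNION ALL
SELECT city FROM customers_2024;
```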

126. What is normalization ?


Normalization is the process of organizing a database to minimize redundancy and
improve data integrity.
Normalization is achieved through a series of "normal forms" (rules or guidelines),
each building upon the previous one.
1. 1NF (First Normal Form) : Ensures the table has no repeating groups or
arrays. Each column must have atomic (indivisible) values.
2. 2NF (Second Normal Form) : Ensures that all non-key columns are fully
dependent on the primary key (no partial dependencies).
3. 3NF (Third Normal Form) : Ensures no transitive dependencies (non-key
columns depend only on the primary key).

# Explanation with Examples


1. 1NF (First Normal Form)
A table is in 1NF if:
1. Each cell contains a single value.
2. Each row is unique.

2. 2NF (Second Normal Form)
A table is in 2NF if:
1. It is in 1NF.
2. All non-key attributes are fully dependent on the primary key.

3. 3NF (Third Normal Form)


A table is in 3NF if:
1. It is in 2NF.
2. There are no transitive dependencies (non-key attributes depending on other
non-key attributes).

Example: Non-3NF table
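A sketch of a non-3NF table and its 3NF decomposition (table and column names are illustrative). In the first table, dept_name depends on dept_id, which is not the key, so there is a transitive dependency; splitting the table removes it:

```sql
-- Not in 3NF: dept_name depends on dept_id, not on the key emp_id
CREATE TABLE employees (
    emp_id    INT PRIMARY KEY,
    emp_name  VARCHAR(100),
    dept_id   INT,
    dept_name VARCHAR(100)   -- transitive dependency
);

-- 3NF: move the dependent column into its own table
CREATE TABLE departments (
    dept_id   INT PRIMARY KEY,
    dept_name VARCHAR(100)
);

CREATE TABLE employees (
    emp_id   INT PRIMARY KEY,
    emp_name VARCHAR(100),
    dept_id  INT REFERENCES departments(dept_id)
);
```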

127. Which Aggregate Functions are available in SQL?


1. COUNT() : Returns the number of rows (or non-NULL values in a column).
2. SUM() : Returns the total sum of a numeric column.
3. AVG() : Returns the average (mean) value of a numeric column.
4. MIN() : Returns the smallest value in a column.
5. MAX() : Returns the largest value in a column.

128. What is a Composite Primary Key?


A composite primary key is a primary key that consists of two or more columns in a
table, rather than just one column. It is used when a single column is not sufficient
to uniquely identify records in a table.

129. What is ACID Properties ?
1. Atomicity : Ensures the transaction is all-or-nothing.
2. Consistency : Ensures the database remains in a valid state before and after a
transaction.
3. Isolation : Ensures that transactions are executed independently, even if they
run concurrently.
4. Durability : Ensures that committed transactions are permanent, even in the
event of system failure.

130. What is a Window Function in SQL ?


Window functions in SQL let us perform calculations on a group of rows while still
showing each row separately. They allow us to look at other rows in the same
result set without grouping them into one value.

# Types of Window Functions

1. Ranking Functions
⦁ ROW_NUMBER()
⦁ RANK()
⦁ DENSE_RANK()
⦁ NTILE(n)
2. Aggregate Functions as Window Functions
⦁ SUM()
⦁ AVG()
⦁ COUNT()
⦁ MIN()
⦁ MAX()
3. Value Functions
⦁ LAG()
⦁ LEAD()
⦁ FIRST_VALUE()
⦁ LAST_VALUE()

Syntax
SELECT column_name,
window_function() OVER (
PARTITION BY partition_column
ORDER BY order_column
) AS alias
FROM table_name;

PARTITION BY: Divides the result set into partitions (optional).


ORDER BY: Defines the order of rows within each partition.
OVER(): Defines the window frame.
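A concrete example of the syntax above, ranking salaries within each department without collapsing the rows (table and column names are illustrative):

```sql
SELECT name,
       department,
       salary,
       ROW_NUMBER() OVER (
           PARTITION BY department
           ORDER BY salary DESC
       ) AS salary_rank
FROM employees;
```

Each employee row is kept, and salary_rank restarts at 1 for every department.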

131. What is Trigger in SQL ?


A trigger is a special type of stored procedure in SQL that is automatically executed, or "triggered", when a specific event occurs on a particular table. Triggers are used to enforce business rules, validate data, maintain data integrity, or perform audit operations.
A trigger is associated with a specific DML event, such as an INSERT, UPDATE, or DELETE operation, and can be set to execute before or after the event occurs.
Types of Triggers
1. BEFORE Trigger: This type of trigger is executed before the triggering event
(e.g., before an INSERT, UPDATE, or DELETE).
2. AFTER Trigger: This type of trigger is executed after the triggering event.
3. INSTEAD OF Trigger: This trigger replaces the actual operation with the
trigger code. For example, instead of performing an INSERT, an INSTEAD OF
trigger can perform another operation like an UPDATE.
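An AFTER trigger might look like the following sketch (MySQL-style syntax; the tables and columns are illustrative and assume a salary_audit table exists):

```sql
-- Audit every salary change after it happens
CREATE TRIGGER audit_salary_change
AFTER UPDATE ON employees
FOR EACH ROW
INSERT INTO salary_audit (emp_id, old_salary, new_salary, changed_at)
VALUES (OLD.id, OLD.salary, NEW.salary, NOW());
```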

132. Tell me all about Ioc Container


The IoC (Inversion of Control) container is a core component of the Spring Framework. This container manages dependencies and is responsible for creating, configuring, and managing objects. Its main purpose is to help in writing loosely coupled code in the application. Instead of objects managing their own dependencies, the container injects these dependencies.

Beans are defined using the @Component, @Service, @Repository, @Controller, and @Bean annotations.

Dependencies are injected using the @Autowired annotation.

Key Concepts of IoC Container


1. Dependency Injection (DI) : Dependency Injection refers to injecting the
dependencies of an object into the class. Through DI, you give the control of the
object’s dependencies to the container.
There are three types of DI:
1. Constructor Injection
2. Setter Injection
3. Field Injection
2. Bean : The Object managed by Spring IoC container is called as Beans.
3. Bean Configuration
There are two ways to configure beans:
1. XML Configuration
2. Java-based Configuration (Annotations)
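Constructor injection, the generally recommended DI style, can be sketched as follows (the class names are illustrative):

```java
@Service
public class EmailService {
    public void send(String to, String text) {
        System.out.println("Sending to " + to + ": " + text);
    }
}

@Component
public class NotificationManager {

    private final EmailService emailService;

    // Constructor injection: the container supplies the EmailService bean
    @Autowired
    public NotificationManager(EmailService emailService) {
        this.emailService = emailService;
    }

    public void notifyUser(String user) {
        emailService.send(user, "Welcome!");
    }
}
```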

# How IoC Container Works (Bean Life Cycle) :-
1. Initialization: When the application starts, the Spring IoC container initializes.
2. Bean Creation: The container creates the beans.
3. Dependency Injection: The container injects the dependencies of the beans.
4. Bean Management: The container manages the beans and controls their
lifecycle.
5. Bean Post-Processing (Optional): If the container has any
BeanPostProcessors defined, they are applied to the bean before and after
initialization. @PostConstruct or a custom init-method are executed here.
6. Bean Ready for Use: After all the previous steps, the bean is fully initialized
and ready for use by the application.
7. Destruction: When the Spring container is destroyed (for example, during
application shutdown), it calls the bean's destruction callback (e.g.,
@PreDestroy or destroy-method).

# Types of IoC Containers


BeanFactory: A basic IoC container that provides lazy initialization.
ApplicationContext: An advanced container that provides eager initialization and
additional features like event propagation, declarative mechanisms to create a
bean, and more.

# Advantages of Using IoC Container


Loose Coupling: It avoids tight coupling between classes.
Improved Testability: Dependencies can be easily mocked or stubbed, which aids
in unit testing.
Better Code Management: Dependencies can be managed from a central place.
Configuration Flexibility: It provides flexibility in configuration through XML,
annotations, or Java code.

133. Tell me all about Spring Bean Scope


In the Spring Framework, a "bean" is an object that is managed by the IoC container. The scope of a bean defines the lifecycle and visibility of that bean within the application.

These scopes give us the flexibility to configure beans according to the needs of our application.

Common Bean Scopes in Spring

1. Singleton (Default)
Singleton is Spring's default scope. In this scope, the container creates a single instance of the bean, and that instance lives for the lifetime of the application context.

2. Prototype
@Scope("prototype")
In prototype scope, the container creates a new instance every time the bean is requested.

3. Request
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode =
ScopedProxyMode.TARGET_CLASS)
Request scope is useful in Spring MVC applications. In this scope, a single instance of the bean is created for the lifecycle of one HTTP request.

4. Session
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode =
ScopedProxyMode.TARGET_CLASS)
In session scope, one instance of the bean is created per HTTP session and lives until that session ends.

5. GlobalSession
@Scope(value = WebApplicationContext.SCOPE_GLOBAL_SESSION,
proxyMode = ScopedProxyMode.TARGET_CLASS)
GlobalSession scope is used in portlet-based web applications. It creates one instance of the bean per global HTTP session.

6. Application
@Scope(value = WebApplicationContext.SCOPE_APPLICATION)
In application scope, one instance of the bean is created for the ServletContext and lives for the lifetime of the application.

Proxies:

Class-based Proxy: When ScopedProxyMode.TARGET_CLASS is used, Spring


creates a class-based proxy. This proxy intercepts calls to the bean and routes
them to the correct instance based on the current session.

Lifecycle Management: The proxy helps manage the lifecycle of the scoped bean
correctly. It ensures that each session gets its own instance, even when the bean is
injected into a singleton or prototype bean.

134. Spring Actuators


It provides production-ready features to help monitor and manage our
application. It exposes a set of built-in endpoints that allow you to access
information about the application’s health, metrics, environment, loggers, info
and more.

1. /actuator/health: Retrieves the application's health status, i.e., whether the application is running properly or not.

2. /actuator/info: Retrieves custom application information, such as the version number, build information, etc.

3. /actuator/metrics: Retrieves various application metrics such as CPU usage, memory usage, request counts, etc.

4. /actuator/loggers: Retrieves the application's logging levels and configuration.

5. /actuator/threaddump: Retrieves the application's thread dumps, which help with debugging.

Spring Boot Actuator is a powerful tool for monitoring and managing your Spring Boot applications in production, providing crucial insights into application health and performance.
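Only a few endpoints are exposed over HTTP by default; a minimal application.properties sketch to expose the endpoints above (the list of endpoint names is illustrative):

```properties
management.endpoints.web.exposure.include=health,info,metrics,loggers,threaddump
management.endpoint.health.show-details=always
```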

135. What is aspect-oriented programming in the spring framework?


In a Spring Boot application, multiple components often require functionalities
like:
⦁ Logging
⦁ Security checks
⦁ Exception handling
⦁ Performance monitoring
⦁ Transaction management
Instead of writing these functionalities in every class, AOP provides a way to define
them separately and apply them where needed.
Real-Life Use Cases of AOP in Spring Boot
⦁ Logging (e.g., logging method calls and execution time)
⦁ Security (e.g., checking user roles before execution)
⦁ Transaction Management (e.g., @Transactional behind the scenes)
⦁ Caching (e.g., applying caching for certain operations)
⦁ Exception Handling (e.g., centralized exception handling)

136. Difference between @Component and @Bean.

137. Difference between @RequestParam and @PathVariable.

137. Difference between URI and URL.
⦁ URI = Identifies a resource uniquely (like a name, ID, or ISBN).
⦁ URL = Specifies where and how to access the resource (like an address or
website link).
Example:
1. Website Domain vs. Web Page Link
⦁ URI (General Identification): https://example.com is a URI because it
identifies a website.
⦁ URL (Locator + Access Method): https://example.com/products/shoes?
color=red&size=9 is a URL because it provides the exact location and access
details for a specific product.
2. Email Address vs. Webmail Link
⦁ URI (Identifier Only): Your email address is an identifier but doesn’t specify
how to access it. Example: mailto:johndoe@example.com
⦁ URL (Locator + Method): A webmail link that directs you to an interface
where you can read/send emails. Example:
https://mail.google.com/mail/u/0/#inbox
138. @RequestBody
In Spring Boot, the @RequestBody annotation is used to bind the request body
(the data sent by the client) to a method parameter in a controller method. It is
typically used in RESTful APIs to receive data as JSON or XML payloads in POST,
PUT, PATCH, or DELETE requests.
Receiving JSON Data: When a client sends a JSON object in a request body, you
can use @RequestBody to map this JSON object to a Java object.
Automatic Conversion: Spring automatically converts the JSON data to a Java
object using a message converter (usually Jackson).
139. Difference between @Service and @Component

140. How to disable auto-configuration in Spring Boot?


1. Disabling Specific Auto-Configuration Classes
⦁ @SpringBootApplication(exclude = { DataSourceAutoConfiguration.class,
JpaRepositoriesAutoConfiguration.class })

⦁ @EnableAutoConfiguration(exclude =
{ DataSourceAutoConfiguration.class,
JpaRepositoriesAutoConfiguration.class })

⦁ spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jd
bc.DataSourceAutoConfiguration,org.springframework.boot.autoconfigure
.data.jpa.JpaRepositoriesAutoConfiguration

2. Disabling All Auto-Configuration
There is no single exclude value that covers every auto-configuration class. To switch auto-configuration off entirely, replace @SpringBootApplication with plain @Configuration plus @ComponentScan (i.e., drop @EnableAutoConfiguration), or set the property spring.boot.enableautoconfiguration=false.

141. How to disable a bean in Spring Boot?


1. Excluding Auto-Configuration
⦁ @SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
2. Removing @Component or @Service Annotations
⦁ // @Component
3. Using Configuration Classes
⦁ You can keep your configuration classes modular and include/exclude beans conditionally:
@Bean
@ConditionalOnMissingBean(name = "myBean")
public MyBean myBean() {}
4. Using Profiles {Preferred}
@Configuration
@Profile("!disableMyBean")
public class MyBeanConfig {}
5. Conditional Bean Creation {Preferred}
@Bean
@ConditionalOnProperty(name = "mybean.enabled", havingValue =
"true", matchIfMissing = true)
public MyBean myBean() {}
142. How to Tell an Auto-Configuration to Back Away When a Bean
Exists?
In Spring Boot, to make an auto-configuration step back when a bean already
exists, we use the @ConditionalOnMissingBean annotation. This tells Spring Boot
to only create a bean if it doesn't already exist in the context.

For example, if we are auto-configuring a data source but want to back off when a
data source bean is manually defined, we annotate the auto-configuration method
with @ConditionalOnMissingBean(DataSource.class). This ensures our custom
configuration takes precedence, and Spring Boot's auto-configuration will not
interfere if the bean is already defined.
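The pattern can be sketched as follows (the class and method names are illustrative; DataSourceBuilder is assumed to be on the classpath via spring-boot-starter-jdbc):

```java
@Configuration
public class DataSourceAutoConfig {

    // Created only if the application has not already defined its own DataSource bean
    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource defaultDataSource() {
        return DataSourceBuilder.create().build();
    }
}
```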
143. How to create a Custom Annotation in Spring Boot?

// Custom annotation for class level
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface ClassLevelAnnotation {
String value() default "";
}

// Custom annotation for method level


@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface MethodLevelAnnotation {
String value() default "";
}

// Custom annotation for field level


@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface FieldLevelAnnotation {
String value() default "";
}

# Handle the Custom Annotations( Class Level & Method Level )


To handle these custom annotations, we can use Spring AOP (Aspect-
Oriented Programming) to define aspects that intercept annotated
elements.
@Aspect
@Component
public class CustomAnnotationAspect {

@Before("@within(ClassLevelAnnotation) ||
@annotation(ClassLevelAnnotation)")
public void handleClassLevelAnnotation() {
System.out.println("Class level annotation is present");
}

@Before("@annotation(MethodLevelAnnotation)")
public void handleMethodLevelAnnotation() {

System.out.println("Method level annotation is
present");
}

// Field level annotations are typically handled through reflection,
// as AOP doesn't directly support interception of field access.
}

# Handling Field Level Annotations with Reflection


public class FieldAnnotationProcessor {

public static void processFieldAnnotations(Object obj) {


Class<?> clazz = obj.getClass();
for (Field field : clazz.getDeclaredFields()) {
if (field.isAnnotationPresent(FieldLevelAnnotation.class)) {
FieldLevelAnnotation annotation =
field.getAnnotation(FieldLevelAnnotation.class);
System.out.println("Field: " + field.getName() + ",
Annotation Value: " + annotation.value());
}
}
}
}
// Usage
MyClass myClass = new MyClass();
FieldAnnotationProcessor.processFieldAnnotations(myClass);

144. How can I access static properties from the application.properties file?


1. @ConfigurationProperties :-
This method is highly preferred in live projects because it promotes strong typing
and is well-suited for handling groups of related properties. It makes the
configuration management more organized and easier to maintain.

i) application.properties file
my.property=someValue
my.anotherProperty=42

ii) Create a POJO to hold the properties:

@Configuration
@ConfigurationProperties(prefix = "my")
@Data
public class MyProperties {
private String property;
private int anotherProperty;
}

iii) Enable configuration properties in your main application class:


@SpringBootApplication
@EnableConfigurationProperties(MyProperties.class)
public class MyApplication {
public static void main(String[] args) {
SpringApplication.run(MyApplication.class, args);
}
}

iv) Access the properties in your components:


@Component
public class MyComponent {
private final MyProperties myProperties;
@Autowired
public MyComponent(MyProperties myProperties) {
this.myProperties = myProperties;
}
public void printProperty() {
System.out.println("Property value: " +
myProperties.getProperty());
System.out.println("Another property value: " +
myProperties.getAnotherProperty());
}
}

2. @Value Annotation :-
This method is straightforward and often used for injecting single property values.
It is simpler for basic needs but less preferred for more complex configurations.
# Access the properties in your components:
@Component
public class MyComponent {
@Value("${my.property}")

private String myProperty;
@Value("${my.anotherProperty}")
private int anotherProperty;
public void printProperty() {
System.out.println("Property value: " +
myProperty);
System.out.println("Another property value: " +
anotherProperty);
}
}

145. Give an example of @Qualifier and @Autowired.


Scenario: Payment Service Implementation
Suppose we have an application that supports multiple payment methods:
CreditCardPayment and PayPalPayment. We want to inject the correct payment
service based on a specific qualifier.

# Step-by-Step Implementation

1. Define the Payment Service Interface


public interface PaymentService {
void processPayment(double amount);
}

2. Implement Payment Services

CreditCardPayment.java
@Service
public class CreditCardPayment implements PaymentService {
@Override
public void processPayment(double amount) {
System.out.println("Processing credit card payment of " +
amount);
}
}

PayPalPayment.java
@Service
public class PayPalPayment implements PaymentService {
@Override

public void processPayment(double amount) {
System.out.println("Processing PayPal payment of " +
amount);
}
}

3. Configure the Payment Process


@Component
public class PaymentProcessor {
@Autowired
@Qualifier("creditCardPayment")
private PaymentService paymentService;

public void process(double amount) {


paymentService.processPayment(amount);
}
}

146. How does a Spring application get started?


A Spring application typically starts by initializing a Spring ApplicationContext,
which manages the beans and dependencies. In Spring Boot, this is often triggered
by calling SpringApplication.run() in the main method, which sets up the default
configuration and starts the embedded server if necessary.

147. What are the Spring Boot Starter Dependencies?


Spring Boot Starters are a set of convenient dependency descriptors that we can
include in our application. Each starter provides a quick way to add and configure a
specific technology or a set of related technologies to our application, such as web,
data, or security, simplifying dependency declarations.

148. What does the @SpringBootApplication annotation do internally?


The @SpringBootApplication annotation is a convenience annotation that
combines @Configuration, @EnableAutoConfiguration, and @ComponentScan.
This triggers Spring's auto-configuration mechanism to automatically configure the
application based on its included dependencies, scans for Spring components, and
sets up configuration classes.

149. What is Auto-wiring?
Autowiring in Spring is the process by which Spring automatically injects the
dependencies of objects into one another. It eliminates the need for manual bean
wiring and makes the code cleaner and easier to maintain.

150. What is ApplicationRunner in Spring Boot?


ApplicationRunner is an interface in Spring Boot that allows us to execute code after the application has fully started. We can implement this interface and define our logic in the run method, which executes just after the application context is loaded.

151. What is the starter dependency of the Spring Boot module ?


A starter dependency in Spring Boot is designed to provide a comprehensive set of dependencies that are typically used together for a specific feature or application need. Examples include spring-boot-starter-web for web applications, spring-boot-starter-data-jpa for database access, and spring-boot-starter-security for security configurations.

152. How to disable a specific auto-configuration class ?


We can disable specific auto-configuration classes in Spring Boot by using the
exclude attribute of the @EnableAutoConfiguration annotation or by setting the
spring.autoconfigure.exclude property in our application.properties or
application.yml file.

153. Can we disable the default web server in a Spring Boot application?
Yes, we can disable the default web server in a Spring Boot application by setting
the spring.main.web-application-type property to none in our
application.properties or application.yml file. This will result in a non-web
application, suitable for messaging or batch processing jobs.

154. Can we create a non-web application in Spring Boot?


Absolutely, Spring Boot is not limited to web applications. We can create
standalone, non-web applications by disabling the web context. This is done by
setting the application type to 'none', which skips the setup of web-specific
contexts and configurations.

158. Can we override or replace the Embedded Tomcat server in Spring Boot?
Yes, we can override or replace the embedded Tomcat server in Spring Boot. If we
prefer using a different server, like Jetty or Undertow, we simply need to exclude
Tomcat as a dependency and include the one we want to use in our pom.xml or
build.gradle file.

Spring Boot automatically configures the new server as the embedded server for
our application. This flexibility allows us to choose the server that best fits our
needs without significant changes to our application, making Spring Boot
adaptable to various deployment environments and requirements.

155. Explain @RestController annotation in Spring Boot.


The @RestController annotation in Spring Boot is used to create RESTful web
controllers. This annotation is a convenience annotation that combines
@Controller and @ResponseBody, which means the data returned by each
method will be written directly into the response body as JSON or XML, rather than
through view resolution.

156. What are the differences between @SpringBootApplication and @EnableAutoConfiguration annotation?
@SpringBootApplication is a comprehensive annotation that encompasses
@EnableAutoConfiguration (for sensible defaults based on classpath),
@ComponentScan (to scan for components), and @Configuration (to allow
registration of extra beans in the context or import additional configuration
classes). @EnableAutoConfiguration just handles the automation of configuration
based on the classpath.

157. What is the purpose of using @ComponentScan in class files?


The @ComponentScan annotation is used in Spring Boot to specify the packages to
look for Spring components. It directs Spring where to search for annotated
components, configurations, and services, automatically detecting and registering

them as beans in the ApplicationContext.

158. What is the difference between Constructor and Setter Injection?


Constructor Injection involves injecting dependencies through the constructor of a
class, ensuring that an object is always created with all its dependencies. Setter
Injection involves injecting dependencies through setter methods after the object
is constructed. Constructor Injection is generally preferred for required
dependencies, while Setter Injection offers flexibility for optional dependencies.

159. Explain the flow of Spring MVC.


In Spring MVC, a request is first received by the DispatcherServlet, which acts as a
front controller. It then delegates the request to a handler based on mappings. The
handler processes the request, populating the model if necessary, and returns a
view name. The DispatcherServlet then renders the view using the model data.

160. What is the DispatcherServlet?


The DispatcherServlet is a central servlet in Spring MVC that dispatches requests to
various handlers, coordinates the workflow by working with different components,
and handles web application configuration elements like view resolution, locale,
theme resolution, and more.

161. What is the purpose of @ModelAttribute?


The @ModelAttribute annotation is used to bind method parameters or method
return values to a named model attribute, exposed to a web view. It can also be
used to automatically populate model attributes with data from HTTP requests.

162. What is the use of BindingResult?


BindingResult is used to handle and retrieve validation errors after Spring has
bound web request parameters to an object. It is passed as a parameter
immediately after the model attribute in controller methods and can check for
errors in the object fields.

163. How does Spring MVC support validation?


Spring MVC supports validation through the JSR-303/JSR-349 Bean Validation APIs.
By annotating fields of a model class with standard annotations like @NotNull,

@Size, etc., and using a Validator implementation, Spring can automatically ensure
that model attributes adhere to defined rules before processing them.

164. How to send the data from Controller to UI.


Data can be sent from a Controller to the UI by adding attributes to the Model
object or using a ModelAndView object. These attributes are then accessed in the
view layer, typically through JSPs or other view templates, to render the response.

165. Describe the annotations to validate the form data.


To validate form data, annotations like @NotNull, @Min, @Max, @Size, and
@Pattern can be used. These annotations are part of the Java Bean Validation API,
which Spring integrates to automatically apply validation rules before processing
the form submission.

166. How to bind the form data to Model Object in Spring MVC.
Form data is bound to model objects in Spring MVC using @ModelAttribute
annotation. This automatically populates a model object with request parameters
matching the object's field names.

167. Explain the use of ResponseEntity.

ResponseEntity (a class, not an annotation) is used to create a full HTTP response, including status code, headers, and body. It is useful in RESTful controllers where you need to provide detailed responses directly from your methods.

168. How can we handle multiple beans of the same type?


To handle multiple beans of the same type in Spring, we can use @Qualifier
annotation. This lets us specify which bean to inject when there are multiple
candidates.

For example, if there are two beans of type DataSource, we can give each a name
and use @Qualifier("beanName") to tell Spring which one to use.

Another way is to use @Primary on one of the beans, marking it as the default
choice when injecting that type.
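Both approaches can be sketched together as follows (the bean and class names are illustrative, reusing the PaymentService idea from question 145):

```java
public interface PaymentService { void pay(double amount); }

@Service
@Primary   // injected by default when no qualifier is specified
public class CreditCardPayment implements PaymentService {
    public void pay(double amount) { System.out.println("Card: " + amount); }
}

@Service("paypalPayment")
public class PayPalPayment implements PaymentService {
    public void pay(double amount) { System.out.println("PayPal: " + amount); }
}

@Component
public class CheckoutService {

    @Autowired                   // resolves to CreditCardPayment (@Primary)
    private PaymentService defaultPayment;

    @Autowired
    @Qualifier("paypalPayment")  // explicitly selects the PayPal bean
    private PaymentService paypalPayment;
}
```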

169. How would you handle inter-service communication in a microservices architecture using Spring Boot?
For simple, direct communication, I would use RestTemplate, which allows
services to send requests and receive responses like a two-way conversation.

For more complex interactions, especially when dealing with multiple services, I
would choose Feign Client. Feign Client simplifies declaring and making web
service clients, making the code cleaner and the process more efficient.

For asynchronous communication, where immediate responses aren't necessary, I


would use message brokers like RabbitMQ or Kafka. These act like community
boards, where services can post messages that other services can read and act
upon later. This approach ensures a robust, flexible communication system
between microservices.

170. Discuss how you would add a GraphQL API to an existing Spring
Boot RESTful service.
First, I'd add GraphQL Java and GraphQL Spring Boot starter dependencies to my
pom.xml or build.gradle file. Secondly, I'd create a GraphQL schema file
(schema.graphqls) in the src/main/resources folder.

Then I'd implement data fetchers to retrieve data from the existing services or directly from the database. Moving ahead, I'd configure a GraphQL service using the schema and those data fetchers.

Then I would expose the graphql endpoint and make sure it is correctly
configured. Finally, I'd test the GraphQL API using tools like GraphiQL or Postman
to make sure it's working as expected.

171. Imagine Your application requires data from an external REST API
to function. Describe how you would use RestTemplate or WebClient
to consume the REST API in your Spring Boot application.
Talking about RestTemplate: First, I would define a RestTemplate bean in a
configuration class using @Bean annotation so it can be auto-injected anywhere I
need it. Then, I'd use RestTemplate to make HTTP calls by creating an instance and
using methods like getForObject() for a GET request, providing the URL of the
external API and the class type for the response.

Talking about WebClient : I would define a WebClient bean similarly using @Bean
annotation. Then I would use this WebClient to make asynchronous requests,
calling methods like get(), specifying the URL, and then using retrieve() to fetch the
response. I would also handle the data using methods like bodyToMono() or
bodyToFlux() depending on if I am expecting a single object or a list.
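A sketch showing both styles side by side — the base URL and WeatherDto are hypothetical:

```java
import org.springframework.web.client.RestTemplate;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

class WeatherFetcher {

    private final RestTemplate restTemplate = new RestTemplate();
    private final WebClient webClient = WebClient.create("https://api.example.com");

    // Blocking: the calling thread waits for the response
    WeatherDto fetchBlocking(String city) {
        return restTemplate.getForObject(
                "https://api.example.com/weather/{city}", WeatherDto.class, city);
    }

    // Non-blocking: returns immediately with a Mono the caller subscribes to
    Mono<WeatherDto> fetchReactive(String city) {
        return webClient.get()
                .uri("/weather/{city}", city)
                .retrieve()
                .bodyToMono(WeatherDto.class);  // bodyToFlux(...) for a list of items
    }
}
```

In a real application both would be declared as @Bean instances rather than created inline, so they can be injected and configured centrally.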

172. How you would use Spring WebFlux to consume data from an
external service in a non-blocking manner and process this data
reactively within your Spring Boot application.
In a Spring Boot app using Spring WebFlux, I'd use WebClient to fetch data from
an external service without slowing things down. WebClient makes it easy to get
data in a way that doesn't stop other parts of the app from working.

When the data comes in, it's handled reactively, meaning I can work with it on the
go like filtering or changing it without waiting for everything to finish loading. This
keeps the app fast and responsive, even when dealing with a lot of data or making
many requests.

173. How can you enable and use asynchronous methods in a
Spring Boot application?
To enable and use asynchronous methods in a Spring Boot application:
• First, I would add the @EnableAsync annotation to one of my configuration
classes. This enables Spring's asynchronous method execution capability.

• Next, I would mark methods I want to run asynchronously with the @Async
annotation. These methods can return void or a Future type if I want to track the
result.

• Finally, I would call these methods like any other method. Spring takes care of
running them in separate threads, allowing the calling thread to proceed without
waiting for the task to finish.

Remember, for the @Async annotation to be effective, the method calls must be
made from outside the class. If I call an asynchronous method from within the
same class, it won't execute asynchronously due to the way Spring proxying works.
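A sketch of the three steps — NotificationService and sendEmail are hypothetical:

```java
import java.util.concurrent.CompletableFuture;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync  // step 1: switch on async method execution
class AsyncConfig { }

@Service
class NotificationService {

    @Async  // step 2: this method runs in a separate thread
    public CompletableFuture<Boolean> sendEmail(String to) {
        // slow SMTP call would happen here
        return CompletableFuture.completedFuture(true);
    }
}

// Step 3: call notificationService.sendEmail(...) from ANOTHER bean,
// so the call goes through Spring's proxy and actually runs asynchronously.
```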

174. You are creating an endpoint in a Spring Boot application that
allows users to upload files. Explain how you would handle the file
upload and where you would store the files.
To handle file uploads in a Spring Boot application, I would use @PostMapping
annotation to create an endpoint that listens for POST requests.

Then I would add a method that accepts MultipartFile as a parameter in the
controller. This method would handle the incoming file.
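A sketch of such an endpoint — the /uploads directory and the "file" parameter name are assumptions; production systems often stream to S3 or similar instead:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
class FileUploadController {

    @PostMapping("/files")
    ResponseEntity<String> upload(@RequestParam("file") MultipartFile file) throws IOException {
        if (file.isEmpty()) {
            return ResponseEntity.badRequest().body("Empty file");
        }
        // Copy the uploaded bytes to local disk (hypothetical storage location)
        Path target = Path.of("/uploads").resolve(file.getOriginalFilename());
        Files.copy(file.getInputStream(), target, StandardCopyOption.REPLACE_EXISTING);
        return ResponseEntity.ok("Stored " + file.getOriginalFilename());
    }
}
```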

175. How would you implement efficient handling of large file uploads
in a Spring Boot REST API, ensuring that the system remains responsive
and scalable?
To handle big file uploads in a Spring Boot REST API without slowing down the
system, I'd use a method that processes files in the background and streams them
directly where they need to go, like a hard drive or the cloud.

This way, the main part of the app stays fast and can handle more users or tasks at
the same time.

Also, by saving files outside the main server, like on Amazon S3, it helps the app
run smoothly even as it grows or when lots of users are uploading files.

176. How to handle a 404 error in Spring Boot?


To handle a 404 error in Spring Boot, we make a custom error controller. we
implement the ErrorController interface and mark it with @Controller.

Then, we create a method that returns our error page or message for 404 errors,
and we map this method to the /error URL using @RequestMapping.

In this method, we can check the error type and customize what users see when
they hit a page that doesn't exist. This way, we can make the error message or
page nicer and more helpful.

177. How to get the list of all the beans in your spring boot application?
Step 1: First I would Autowire the ApplicationContext into the class where I want
to list the beans.

Step 2: Then I would use the getBeanDefinitionNames() method from the
ApplicationContext to get the list of beans.
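The two steps can be sketched with a CommandLineRunner that prints every bean name at startup:

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
class BeanLister implements CommandLineRunner {

    private final ApplicationContext context;  // step 1: inject the context

    BeanLister(ApplicationContext context) { this.context = context; }

    @Override
    public void run(String... args) {
        for (String name : context.getBeanDefinitionNames()) {  // step 2
            System.out.println(name);
        }
    }
}
```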

178. Explain the difference between cache eviction and cache
expiration.
Cache eviction is when data is removed from the cache to free up space, based on
a policy like "least recently used."

Cache expiration is when data is removed because it's too old, based on a
predetermined time-to-live (TTL).

So, eviction manages cache size, while expiration ensures data freshness.
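The two policies can be illustrated with a toy cache — a hypothetical sketch, not Spring's cache abstraction: size triggers eviction (least-recently-used first), age triggers expiration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class TinyCache<K, V> {
    private final int maxSize;     // eviction policy: cap on the number of entries
    private final long ttlMillis;  // expiration policy: time-to-live per entry
    private final Map<K, Long> writtenAt = new LinkedHashMap<>();
    private final LinkedHashMap<K, V> data;

    TinyCache(int maxSize, long ttlMillis) {
        this.maxSize = maxSize;
        this.ttlMillis = ttlMillis;
        // accessOrder=true: iteration order becomes least-recently-used first
        this.data = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > TinyCache.this.maxSize;  // eviction: size-based
            }
        };
    }

    void put(K key, V value) {
        data.put(key, value);
        writtenAt.put(key, System.currentTimeMillis());
    }

    V get(K key) {
        Long written = writtenAt.get(key);
        if (written != null && System.currentTimeMillis() - written > ttlMillis) {
            data.remove(key);          // expiration: age-based
            writtenAt.remove(key);
            return null;
        }
        return data.get(key);
    }
}
```

Real cache providers (Caffeine, Redis, Ehcache) implement both policies far more efficiently; this sketch only shows the conceptual difference.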

179. If you had to scale a Spring Boot application to handle high traffic,
what strategies would you use?
To scale a Spring Boot application for high traffic, we can:
⦁ Add more app instances (horizontal scaling) and use a load balancer to
spread out the traffic.
⦁ Break our app into microservices so each part can be scaled independently.
⦁ Use cloud services that can automatically adjust resources based on our app's
needs.
⦁ Use caching to store frequently accessed data, reducing the need to fetch it
from the database every time.
⦁ Implement an API Gateway to handle requests and take care of things like
authentication.
180. What strategies would you use to optimize the performance of a
Spring Boot application?
Let’s say my Spring Boot application is taking too long to respond to user requests.
I could:

⦁ Implement caching for frequently accessed data.
⦁ Optimize database queries to reduce the load on the database.
⦁ Use asynchronous methods for operations like sending emails.
⦁ Load Balancer if traffic is high
⦁ Optimize the time complexity of the code
⦁ Use webFlux to handle a large number of concurrent connections.

181. Your Spring Boot application is experiencing performance issues
under high load. What are the steps you would take to identify and
address them?
First, I would identify the specific performance issues using monitoring tools like
Spring Boot Actuator or Splunk.

I would also analyze application logs and metrics to spot any patterns or errors,
especially under high load.

Then, I would run performance tests to replicate the issue and use a profiler for
code-level analysis.

After getting findings, I might optimize the database, implement caching, or use
scaling options. It's also crucial to continuously monitor the application to prevent
future issues.

182. How Is Spring Security Implemented In A Spring Boot Application?


To add Spring Security to a Spring Boot application, we first need to include the
Spring Security starter dependency in the POM file.

Then, we create a configuration class to customize security settings, such as
specifying secured endpoints and configuring the login and logout process
(historically by extending WebSecurityConfigurerAdapter; in newer Spring Security
versions, by declaring a SecurityFilterChain bean). We also implement the
UserDetailsService interface
to load user information, usually from a database, and use a password encoder like
BCryptPasswordEncoder for secure password storage.

We can secure specific endpoints using annotations like @PreAuthorize, based on
roles or permissions.

This setup ensures that my Spring Boot application is secure, managing both
authentication and authorization effectively.

183. Can you explain the difference between authentication and
authorization in Spring Security?
In Spring Security, authentication is verifying who I am, like showing an ID. It
checks my identity using methods like passwords or tokens.

Authorization decides what I'm allowed to do after I'm identified, like if I can
access certain parts of an app. It's about permissions.

So, authentication is about confirming my identity, and authorization is about my
access rights based on that identity.

184. Explain what AuthenticationManager and ProviderManager are in
Spring Security.
The AuthenticationManager in Spring Security is like a checkpoint that checks if
user login details are correct. The ProviderManager is a specific type of this
checkpoint that uses a list of different ways (providers) to check the login details.

It goes through each way to find one that can confirm the user’s details are valid.
This setup lets Spring Security handle different login methods, like checking against
a database or an online service, making sure the user is who they say they are.

185. What is the best practice for storing passwords in a Spring Security
application?
The best practice for storing passwords in a Spring Security application is to never
store plain text passwords. Instead, passwords should be hashed using a strong,
one-way hashing algorithm like bcrypt, which Spring Security supports.

Hashing converts the password into a unique, fixed-size string that cannot be
easily reversed.

Additionally, using a salt (a random value added to the password before hashing)
makes the hash even more secure by preventing attacks like rainbow table
lookups. This way, even if the password data is compromised, the actual
passwords remain protected.

186. Explain salting and its usage in Spring Security.


Salting in Spring Security means adding a random piece of data to a password
before turning it into a hash, a kind of scrambled version.

This makes every user's password hash unique, even if the actual passwords are
the same. It helps stop attackers from guessing passwords using known hash lists.

When a password needs to be checked, it's combined with its salt again, hashed,
and then compared to the stored hash to see if the password is correct. This way,
the security of user passwords is greatly increased.

187. In your application, there are two types of users: ADMIN and
USER. Each type should have access to different sets of API endpoints.
Explain how you would configure Spring Security to enforce these
access controls based on the user's role.
In the application, to control who can access which API endpoints, I can use Spring
Security to set rules based on user roles. I can configure it so that only ADMIN
users can reach admin related endpoints and USER users can access user-related
endpoints.

This is done by defining patterns in the security settings, where I link certain URL
paths with specific roles, like making all paths starting with "/admin" accessible
only to users with the ADMIN role, and paths starting with "/user" accessible to
those with the USER role. This way, each type of user gets access to the right parts
of the application.
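A sketch of such rules in the newer SecurityFilterChain style — the URL patterns come from the question; everything else is illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
class RoleBasedSecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                .requestMatchers("/admin/**").hasRole("ADMIN")            // ADMIN only
                .requestMatchers("/user/**").hasAnyRole("USER", "ADMIN")  // USER (and ADMIN)
                .anyRequest().authenticated())                            // everything else: any logged-in user
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```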

188. What do you mean by digest authentication?


Digest authentication is a way to check who is trying to access something online
without sending their actual password over the internet. Instead, it sends a hashed
(scrambled) version of the password along with some other information.

When the server gets this scrambled password, it compares it with its own
scrambled version. If they match, it means the user's identity is verified, and access
is granted. This method is more secure because the real password is never exposed
during the check.

189. How does Spring Security handle session management, and what
are the options for handling concurrent sessions?
Spring Security handles session management by creating a session for the user
upon successful authentication.
For managing concurrent sessions, it provides options to control how many
sessions a user can have at once and what happens when the limit is exceeded.

For example, I can configure it to prevent new logins if the user already has an
active session or to end the oldest session. This is managed through the session
management settings in the Spring Security configuration, where I can set policies
like maximumSessions to limit the number of concurrent sessions per user.

190. Imagine you are designing a Spring Boot application that interfaces
with multiple external APIs. How would you handle API rate limits and
failures?
To handle API rate limits and failures in a Spring Boot application, I would:

• Use a circuit breaker to manage failures

• Implement rate limiting to avoid exceeding API limits

• Add a retry mechanism with exponential backoff for temporary issues

• Use caching to reduce the number of requests.

This approach helps keep the application reliable and efficient.

191. To protect your application from abuse and ensure fair usage, you
decide to implement rate limiting on your API endpoints. Describe a
simple approach to achieve this in Spring Boot.
To implement rate limiting in a Spring Boot application, a simple approach is to use
a library like Bucket4j or Spring Cloud Gateway with built-in rate-limiting
capabilities. By integrating one of these libraries, I can define policies directly on
my API endpoints to limit the number of requests a user can make in a given time
frame.

This involves configuring a few annotations or settings in my application properties
to specify the rate limits. This setup helps prevent abuse and ensures that all users
have fair access to my application's resources, maintaining a smooth and reliable
service.
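As an illustration of the idea behind such libraries, a minimal token-bucket sketch — hypothetical code, not Bucket4j's actual API: each client gets a bucket, one token is consumed per request, tokens refill over time, and a request is rejected when no token is available.

```java
class TokenBucket {
    private final long capacity;          // maximum burst size
    private final double refillPerSecond; // steady-state request rate
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // Returns true if the request is allowed, false if it should be rejected (HTTP 429)
    synchronized boolean tryConsume() {
        long now = System.nanoTime();
        // Refill tokens proportionally to the elapsed time, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) / 1e9 * refillPerSecond);
        lastRefillNanos = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```

In a Spring Boot application this check would sit in a servlet filter or interceptor, keyed by client IP or API key, returning 429 Too Many Requests when tryConsume() is false.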

192. Explain Cross-Origin Resource Sharing (CORS) and how you would
configure it in a Spring Boot application.
Cross-Origin Resource Sharing allows a website to safely access resources from
another website. In Spring Boot, we can set up CORS by adding @CrossOrigin to
controllers or by configuring it globally.

This tells our application which other websites can use its resources, what type of
requests they can make, and what headers they can use.

This way, We control who can interact with our application, keeping it secure while
letting it communicate across different web domains.
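A global configuration sketch — the origin and paths are hypothetical:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")                       // which endpoints are exposed
                .allowedOrigins("https://shop.example.com")  // who may call them
                .allowedMethods("GET", "POST", "PUT", "DELETE")
                .allowedHeaders("Content-Type", "Authorization");
    }
}
```

Alternatively, @CrossOrigin(origins = "https://shop.example.com") on a single controller restricts the rule to that controller only.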

Same-Origin Policy:-

The Same-Origin Policy ensures that a web page can only access resources that
match its own domain, protocol, and port. For example, if your web page is
hosted at https://example.com, it can only access resources from
https://example.com, not from https://another-domain.com.

192. Explain CSRF


CSRF (Cross-Site Request Forgery) Protection
A CSRF attack happens when a malicious website tricks a user's browser into
performing an action on another website where the user is already
authenticated. For example, if a user is logged in to their bank account, a CSRF
attack could trick the user's browser into transferring money to the attacker's
account without the user's consent.
CSRF protection usually relies on a CSRF token that must be sent with each
state-changing request. This token is unique and secret, and the server verifies
it to confirm that the request is legitimate.

193. How can you use Spring Expression Language (SpEL) for fine
grained access control?
I can use Spring Expression Language (SpEL) for fine-grained access control by
applying it in annotations like @PreAuthorize in Spring Security.

With SpEL, I can create complex expressions to evaluate the user's context, such as
roles, permissions, and even specific method parameters, to decide access rights.

This allows for detailed control over who can access what in the application,
making the security checks more dynamic and tailored to the specific scenario,
ensuring that users only access resources and actions they are authorized for.

194. Explain the process of creating a Docker image for a Spring Boot
application.
To make a Docker image for a Spring Boot app, we start by writing a Dockerfile.
This file tells Docker how to build our app's image.

We mention which Java version to use, add our app's .jar file, and specify how to
run our app.

After writing the Dockerfile, we run the command docker build -t myapp:latest . in
the terminal.

This command tells Docker to create the image with everything our app needs to
run. By doing this, we can easily run our Spring Boot app anywhere Docker is
available, making our app portable and easy to deploy
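A minimal Dockerfile along these lines might look as follows — the base image, jar name, and port are illustrative assumptions:

```dockerfile
# Which Java version to use (illustrative base image)
FROM eclipse-temurin:17-jre
WORKDIR /app
# Add the application's jar built by `mvn package`
COPY target/myapp.jar app.jar
EXPOSE 8080
# How to run the app
ENTRYPOINT ["java", "-jar", "app.jar"]
```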

195. How to Deploy Spring Boot Web Applications as Jar and War Files?

To deploy Spring Boot web applications, we can package them as either JAR or
WAR files. For a JAR, we use Spring Boot's embedded server, like Tomcat, by
running the command mvn package and then java -jar target/myapplication.jar.

If we need a WAR file for deployment on an external server, we change the
packaging in the pom.xml to <packaging>war</packaging>, ensure the application
extends SpringBootServletInitializer, and then build with mvn package. The WAR
file can then be deployed to any Java servlet container, like Tomcat or Jetty.

196. Discuss the integration of Spring Boot applications with CI/CD
pipelines.
Integrating Spring Boot apps with CI/CD pipelines means making the process of
building, testing, and deploying automated.

When we make changes to our code and push them, the pipeline automatically
builds the app, runs tests, and if everything looks good, deploys it. This uses tools
like Jenkins or GitHub Actions to automate tasks, such as compiling the code and
checking for errors.

If all tests pass, the app can be automatically sent to a test environment or directly
to users. This setup helps us quickly find and fix errors, improve the quality of our
app, and make updates faster without manual steps.

197. Your application behaves differently in development and
production environments. How would you use Spring profiles to
manage these differences?
To handle differences between development and production environments, I
would use Spring profiles. By defining environment-specific configurations in
application-dev.properties for development and application-prod.properties for
production, I can easily switch behaviors based on the active profile.

Activating these profiles is simple, either by setting the spring.profiles.active
property, using a command-line argument, or through an environment variable.

Additionally, with the @Profile annotation, I would selectively load certain beans
or configurations according to the current environment, ensuring that my
application adapts seamlessly to both development and production settings.
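A sketch of profile-specific beans — MailSender and its implementations are hypothetical:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
class MailConfig {

    @Bean
    @Profile("dev")   // loaded only when spring.profiles.active=dev
    MailSender consoleMailSender() {
        return new ConsoleMailSender();  // hypothetical: just logs emails
    }

    @Bean
    @Profile("prod")  // loaded only when spring.profiles.active=prod
    MailSender smtpMailSender() {
        return new SmtpMailSender();     // hypothetical: sends real emails
    }
}
```

Combined with application-dev.properties and application-prod.properties, the same codebase behaves correctly in both environments without code changes.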

198. What is Monolith Architecture?


If we develop all the functionalities in a single project, it is called a monolithic
architecture based application.
We package our application as a jar/war to deploy onto a server. As a monolith
application contains all functionalities, it becomes a fat jar/war.
Advantages
1) Simple to develop
2) Everything is available at once place
3) Configuration required only once

Disadvantages
1) Difficult to maintain
2) Dependencies among the functionalities
3) Single Point of Failure
4) Entire Project Re-Deployment

# To overcome the problems of the monolithic approach, the microservices
architecture came into the market.
Microservices is not a programming language
Microservices is not a framework
Microservices is not an API
Note: One REST API is called as one Microservice
Advantages and Disadvantages of Microservices?
Advantages
1) Loosely Coupling
2) Easy To maintain

3) Faster Development
4) Quick Deployment
5) Faster Releases
6) Less Downtime
7) Technology Independence (We can develop backend apis with multiple
technologies)
Disadvantages
1) Bounded Context (deciding the number of services to be created)
2) Lots of configuration (in every microservice we have to repeat some common
configuration, e.g. Datasource, SMTP, Kafka, Redis)
3) Visibility

199. Circuit Breaker Design Pattern :-


The Circuit Breaker Design Pattern is used in distributed systems and microservices
to improve fault tolerance and resilience.
Why Use Circuit Breaker?
⦁ Prevents Overloading → Stops sending requests to a failing service.
⦁ Improves Fault Tolerance → Avoids excessive failures.
⦁ Enhances Performance → Redirects traffic when a service is down.
⦁ Reduces Latency → Prevents long wait times due to repeated failures.
How This Works?
⦁ If API calls fail frequently (above 50% threshold) → Circuit switches to Open
state.
⦁ After 5 seconds → Circuit moves to Half-Open and allows limited requests.
⦁ If requests succeed → Circuit moves back to Closed state.
⦁ If failures continue → Circuit remains Open, returning a fallback response.
Where to Use Circuit Breaker?
⦁ Microservices → Prevent cascading failures.
⦁ Third-party APIs → Avoid downtime affecting the entire system.
⦁ Database Calls → Stop excessive load during failures.
⦁ External Payment/Booking Services → Improve resilience.
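The state transitions above can be sketched as a toy state machine — a simplified illustration, not Resilience4j's API: it uses a consecutive-failure count instead of the 50% failure-rate threshold described above.

```java
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;  // failures before the circuit opens
    private final long openMillis;       // how long the circuit stays open
    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    CircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    // Ask before every call; false means return a fallback response instead
    synchronized boolean allowRequest() {
        if (state == State.OPEN && System.currentTimeMillis() - openedAt >= openMillis) {
            state = State.HALF_OPEN;  // open window elapsed: allow a probe request
        }
        return state != State.OPEN;
    }

    synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;         // service recovered: close the circuit
    }

    synchronized void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;       // stop calling the failing service
            openedAt = System.currentTimeMillis();
        }
    }

    synchronized State state() { return state; }
}
```

In practice, libraries like Resilience4j provide this behavior declaratively (e.g. via a @CircuitBreaker annotation with a fallback method) rather than hand-rolled code.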
200. API Gateway Design Pattern :-
The API Gateway Pattern is commonly used in microservices architecture to
provide a single entry point for clients to interact with multiple services.
Why Use API Gateway?
⦁ Single Entry Point :- Clients send requests to one endpoint, and the gateway
routes them to the appropriate services.
⦁ Load Balancing :- Distributes traffic across multiple instances of a service.
⦁ Authentication & Security :- Handles JWT authentication, API keys, rate
limiting, and authorization.
Where to Use API Gateway?
⦁ Microservices Architecture – Centralized API management.
⦁ Security & Authentication – JWT, OAuth, API Keys.
⦁ Load Balancing – Distribute traffic across services.
⦁ Aggregation – Merge multiple service responses.
⦁ Rate Limiting – Prevent excessive requests.


201. What are microservices?


Microservices are a software architecture style that structures an application as a
collection of loosely coupled services, which implement business capabilities. Each
service is self-contained and should implement a single business capability.
202. What are the challenges of microservices architecture?
Challenges include increased complexity in managing multiple services, difficulties
in testing and monitoring, network latency, data consistency, and the need for a
robust infrastructure for deployment and operations.
203. What is Spring Cloud and how it is useful for building
microservices?

Spring Cloud is one of the components of the Spring framework, it helps manage
microservices.

Imagine we are running an online store application, like a virtual mall, where
different sections handle different tasks. In this app, each store or section is a
microservice. One section handles customer logins, another manages the shopping
cart, one takes care of processing payments, and the other lists all the products.

Building and managing such an app can be complex because we need all these
sections to work together seamlessly. Customers should be able to log in, add
items to their cart, pay for them, and browse products without any problems.
That’s where Spring Cloud comes into the picture. It helps microservices in
connecting the section, balancing the crowd, keeping the secret safe, etc.

204. How can Spring Boot applications be made more resilient to
failures, especially in microservices architectures?
To make Spring Boot apps stronger against failures, especially when using many
services together, we can use tools and techniques like circuit breakers and retries
with libraries like Resilience4j. A circuit breaker stops calls to a service that's not
working right, helping prevent bigger problems. Retry logic tries the call again in
case it fails for a minor reason.

Also, setting up timeouts helps avoid waiting too long for something that might
not work. Plus, keeping an eye on the system with good logging and monitoring
lets us spot and fix issues fast. This approach keeps the app running smoothly, even
when some parts have trouble.

205. What is the role of an API Gateway in microservices?


An API Gateway is a management tool that sits between a client and a collection of
backend services. It acts as a reverse proxy to accept API calls, aggregate the
services required to fulfill them, and return the appropriate result.
206. Explain the concept of Service Discovery in microservices.
Service Discovery is a process used in microservices architectures to automatically
detect services within a system. It helps services find and communicate with each
other without hard-coding service locations, typically using a registry that keeps
track of all service endpoints.
207. What is a Circuit Breaker in microservices?
A Circuit Breaker is a design pattern used in microservices to prevent a network or
service failure from cascading to other services. It monitors for failures and, once a
threshold is reached, it trips the circuit breaker, which prevents further failures.
208. How do you handle data consistency in a microservices
architecture?
Data consistency in microservices can be managed through approaches like
event-driven architecture, using eventual consistency, and implementing transactional
outbox patterns where database transactions and event publishing are done
atomically.
209. What is containerization and how does it benefit microservices?
Containerization involves encapsulating an application and its environment into a
container that can be run on any platform. It benefits microservices by ensuring
consistency across environments, facilitating scalability, and simplifying
deployment and operations.
210. Explain the concept of Blue/Green deployment in microservices.
Blue/Green deployment is a technique to reduce downtime and risk by running
two identical production environments called Blue and Green. Only one of the
environments is live at a time, where the Green environment is used to mirror the
Blue before it becomes live.
211. How do microservices communicate with each other?
Microservices communicate with each other using lightweight protocols such as
HTTP/REST, AMQP for messaging systems, or even gRPC for high-performance RPC
communication.
212. What is Domain-Driven Design (DDD) in microservices?
Domain-Driven Design is an approach to developing software for complex needs by
deeply connecting the implementation to an evolving model of the core business
concepts. It is used in microservices to divide systems into bounded contexts and
ensure each service models a specific domain.
213. How does microservices architecture handle security?
Security in microservices is handled through patterns like authentication gateways,
securing service-to-service communication through protocols like HTTPS and
OAuth2, and using JSON Web Tokens (JWT) for maintaining secure and scalable
user access control.
214. Explain the use of Observability in microservices.
Observability in microservices involves monitoring and tracking the internal states
of systems by using logs, metrics, and traces. This helps in understanding system
performance and troubleshooting issues in a distributed system.
215. What is the role of a configuration server in microservices?
A configuration server manages external configuration properties for applications
in a microservice architecture. This allows for easier maintenance of service
configurations without the need to redeploy or restart services when
configurations change.
216. How do you ensure fault tolerance in microservices?
Fault tolerance in microservices can be ensured by implementing patterns such as
Circuit Breaker, Failover, Retry mechanisms, and using Rate Limiters to prevent
system overload.
217. What is a Saga pattern in microservices?
The Saga pattern is a way to manage data consistency across microservices by
using a sequence of local transactions. Each local transaction updates data within
a single service and publishes an event or message to trigger the next local
transaction in the saga.
218. What is an anti-corruption layer in microservices?
An anti-corruption layer is a component that translates between different
subsystems in a microservices architecture, protecting each service from changes
in other services. This layer helps maintain independent and decoupled service
development.
219. Explain how microservices can be scaled.
Microservices can be scaled horizontally by adding more instances of the services
to handle increased load, or vertically by adding more resources like CPU or
memory to existing instances. This can be dynamically managed using
orchestration tools like Kubernetes.
220. What is Event Sourcing in microservices?
Event Sourcing is a pattern where the state of a business entity is stored as a
sequence of state-changing events. Whenever the state of a business entity needs
to be determined, these events are replayed to achieve the current state. This is
useful in microservices for ensuring all changes are captured and can be
reconstructed in case of failures.

###################### -------JWT----- ##############################

In my application there are four ways to log in: Continue with Google, Continue
with Facebook, Continue with Apple, and Continue with Email (the traditional way).

When logging in with Continue with Email, the user has to provide First Name,
Last Name, DOB, Email Id, and Password. After filling in these fields, the user
hits the Sign Up button. We internally validate the user input, encrypt the
password, and save the user details in the database. We then generate a JWT
token from the user details and send it to the client in the response. On the
client side, this token is stored in an HttpOnly cookie, so it is automatically
attached to subsequent authenticated requests.

1. Subsequent Requests with HttpOnly Cookies


⦁ The JWT is stored in an HttpOnly cookie after the user logs in or signs up.
⦁ HttpOnly cookies are automatically sent by the browser with every request to
the server (provided the request is made to the same domain or a domain
allowed by the cookie settings).
⦁ When the browser sends a request to the backend, the cookie containing the
JWT is included in the Cookie header automatically.

2. Token Verification in the Backend


⦁ A JwtAuthenticationFilter (or equivalent middleware) intercepts the incoming
HTTP request.
⦁ The filter extracts the JWT token from the Cookie header.
⦁ The backend validates the token:
⦁ Decodes the token using the secret key.
⦁ Checks the token’s expiration time, signature, and claims.
⦁ If the token is valid:
⦁ Extracts the user’s identity (e.g., userId, roles) from the token.
⦁ Associates this information with the current request (e.g., by creating a
SecurityContext in Spring Security).
⦁ If the token is invalid or expired:
⦁ The filter forwards the request to the JwtEntryPoint, which throws an
UnauthorizedException or sends a 401 Unauthorized response.

3. Response Sent Back


⦁ If Token is Valid:
⦁ The backend processes the request and sends the appropriate response to
the client.
⦁ The client doesn’t need to resend or manage the token explicitly.
⦁ If Token is Invalid/Expired:
⦁ The backend sends a 401 Unauthorized response.
⦁ Optionally, the client can prompt the user to log in again or attempt a
token refresh (if a refresh token is implemented).

We use HttpOnly cookies for better security (they prevent token theft via XSS).

Avoid using localStorage for sensitive information unless absolutely necessary.

Spring Security
When we add the Spring Security dependency in pom.xml, it automatically
secures all the endpoints. To customize this, we create a SecurityConfig class and
annotate it with @Configuration, @EnableWebSecurity, and @EnableMethodSecurity.

Define a bean of SecurityFilterChain and pass HttpSecurity as a method
parameter. In this method we use:
⦁ requestMatchers(): Specifies the HTTP request patterns that should be
secured.
⦁ authenticated(): Ensures that only authenticated users can access
certain resources.
⦁ permitAll(): Allows all users, including unauthenticated ones.
⦁ sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS):
⦁ Configures session management for the application;
SessionCreationPolicy.STATELESS indicates that the application will not
create or use HTTP sessions.
⦁ addFilterBefore(authFilter,UsernamePasswordAuthenticationFilter.class:

⦁ This Adds a custom authentication filter (authenticationFilter) to the


Spring Security filter chain.
⦁ The filter is added before the UsernamePasswordAuthenticationFilter,
which is responsible for processing username/password authentication
requests.
⦁ This allows the custom filter to perform additional authentication logic or
processing.
⦁ return http.build():
⦁ Builds and returns the configured HttpSecurity instance.
⦁ This instance represents the complete security configuration for the
application.

For authentication we use JWT tokens, and for authorization we apply role-based access control.

JWT Implementation
JSON Web Tokens (JWT) are compact, URL-safe tokens used for securely
transmitting information between parties as a JSON object. They are commonly
used for authentication and information exchange.

The token is mainly composed of header, payload, signature. These three parts are
separated by dots(.).

1. Header: The header typically consists of two parts: the type of the token,
which is JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA.

For example:
{
"alg": "HS256",
"typ": "JWT"
}

Then, this JSON is Base64Url encoded to form the first part of the JWT.

2. Payload : The second part of the token is the payload, which contains the
claims. Claims are statements about an entity (typically, the user) and additional
data. There are three types of claims: registered, public, and private claims.

1. Registered claims: These are a set of predefined claims which are not
mandatory but recommended, to provide a set of useful, interoperable
claims. Some of them are: iss (issuer), exp (expiration time), sub (subject),
aud (audience), and others.

2. Public claims: These can be defined at will by those using JWTs. But to avoid
collisions they should be defined in the IANA JSON Web Token Registry or be
defined as a URI that contains a collision resistant namespace.

3. Private claims: These are the custom claims created to share information
between parties that agree on using them and are neither registered or
public claims.

An example payload could be:


{
"sub": "1234567890",
"name": "John Doe",
"admin": true
}

The payload is then Base64Url encoded to form the second part of the JSON Web
Token.

3. Signature : To create the signature part you have to take the encoded header,
the encoded payload, a secret, the algorithm specified in the header, and sign that.

The signature is used to verify the message wasn't changed along the way, and, in
the case of tokens signed with a private key, it can also verify that the sender of
the JWT is who it says it is.
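The dot-separated structure described above can be demonstrated with plain Base64Url encoding and decoding from the JDK — no JWT library needed. This is a self-contained sketch: the signature here is a fake placeholder, since real tokens sign the first two parts with HMAC or RSA.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtInspector {

    // Encodes a JSON part the way a JWT issuer would: Base64Url, no padding.
    public static String encodePart(String json) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }

    // Decodes one Base64Url part back to its JSON text.
    public static String decodePart(String part) {
        return new String(Base64.getUrlDecoder().decode(part), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payload = "{\"sub\":\"1234567890\",\"name\":\"John Doe\",\"admin\":true}";
        // Fake signature for illustration -- real tokens sign header.payload with a key.
        String token = encodePart(header) + "." + encodePart(payload) + ".fake-signature";

        String[] parts = token.split("\\.");
        System.out.println(decodePart(parts[0])); // the header JSON
        System.out.println(decodePart(parts[1])); // the payload JSON
    }
}
```

Note that the payload is only encoded, not encrypted — anyone can decode it, which is why secrets must never be placed in claims.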

Program Flow of JWT Authentication


1. Generating JWT Token
⦁ UserDetailsService Class
⦁ Contains the loadUserByUsername() method, which loads user details (like
username, password, and roles) from the database.

⦁ AuthenticationManager Interface
⦁ Used to authenticate the user's credentials via the authenticate() method.
⦁ This internally uses an AuthenticationProvider (e.g.,
DaoAuthenticationProvider, JwtAuthenticationProvider) to validate the
user's credentials.
⦁ If authentication is successful (authenticated = true), the JWT token is
generated.

⦁ JWTUtility Class
A utility class containing methods for creating and validating JWT tokens:
⦁ For Creating Tokens:
⦁ generateToken(): Generates a new token with user-specific claims.
⦁ createToken(): Builds the JWT with signing keys and claims.
⦁ getSignKey(): Returns the secret key used for signing the token.

⦁ For Validating Tokens:


⦁ extractUsername(): Extracts the username from the token.
⦁ extractClaim(): Extracts specific claims from the token.
⦁ extractAllClaims(): Retrieves all claims from the token.
⦁ validateToken(): Validates the token by checking the signature,
expiration, and other details.
Note: AuthenticationManager is not used to authenticate the JWT token itself but
to validate the user's credentials during login.

2. Validating JWT Token
⦁ JwtAuthFilter Class
⦁ Extends OncePerRequestFilter to ensure token validation logic is executed
exactly once for each incoming HTTP request.
⦁ Responsible for extracting and validating the JWT token from the request.

⦁ doFilterInternal() Method
⦁ Step 1: Extracts the token from the Authorization header.
⦁ Step 2: Validates the token using JWTUtility.
⦁ Checks the signature, expiration, and claims.
⦁ If valid, retrieves the username and loads the user details using
UserDetailsService.
⦁ Step 3: Sets the Authentication object in the SecurityContextHolder to
authenticate the user for the current request.
⦁ If the token is invalid, rejects the request and sends an error response.

3. Handling Unauthorized Access


We create a JwtAuthenticationEntryPoint that implements AuthenticationEntryPoint (overriding the commence() method) to handle unauthorized access or invalid tokens.

NOTE:-
Server-Side: The Spring Boot server handles generating, issuing, and validating
JWT tokens.

Client-Side: The mobile app securely stores and uses these tokens to authenticate
requests to the server. This involves using secure storage mechanisms provided by
the operating system, such as EncryptedSharedPreferences , KeyStore on Android.

################### DEPLOYMENT PROCESS #################

1. Code Repository (GitHub)


⦁ We write and push code to a GitHub repository.
⦁ The repository contains the Java application source code, configuration files,
Dockerfile, and Kubernetes manifests.

2. Continuous Integration (Jenkins)


Jenkins is used to automate the build, test, and deployment processes.
Pipeline Setup:
Create a Jenkins Job:
⦁ Configure it to pull the latest code from the GitHub repository.
⦁ Use a Jenkinsfile (Pipeline as Code) for managing the pipeline

⦁ ...........................................................

3. Build with Maven


⦁ Maven is used for dependency management, building the application, and
running unit tests.
Commands:
mvn clean package
mvn sonar:sonar -Dsonar.projectKey=java-app -Dsonar.host.url=http://sonar-server:9000 -Dsonar.login=sonar-token

4. Code Quality Analysis (SonarQube)


⦁ Jenkins integrates with SonarQube for static code analysis.
⦁ SonarQube checks for code smells, vulnerabilities, and test coverage.

5. Docker Image Creation


⦁ The Dockerfile in the repository defines how to containerize the application.

⦁ .....................................................................

6. Push to Docker Registry


⦁ Use a private Docker registry to store Docker images.
⦁ Configure Jenkins with credentials to authenticate with the registry.

7. Deployment to Kubernetes
⦁ Kubernetes Manifests (YAML files) define the deployment and service for the
application
8. ELK Stack for Logging and Monitoring
⦁ Use Elasticsearch, Logstash, and Kibana (ELK) for centralized logging and
monitoring.
⦁ Configure the application and Kubernetes pods to send logs to ELK.

############################# Junit-5 ###########################


1. JUnit 5 and Mockito Annotations
⦁ @Test
Marks a method as a test method.
⦁ @BeforeEach
Executed before each test method. Typically used for setup.
⦁ @AfterEach
Executed after each test method. Typically used for cleanup.
⦁ @BeforeAll
Executed once before all test methods in the class. Must be static.
⦁ @AfterAll
Executed once after all test methods in the class. Must be static.
⦁ @DisplayName
Provides a custom display name for a test class or test method.
⦁ @Nested
Used to signal that the annotated class is a nested, non-static test class.
⦁ @Disabled
Disables a test class or test method.
⦁ @ExtendWith
Registers extensions for a test class or test method. Extensions can add
behavior to test methods, like handling setup, teardown, or mocking.
⦁ @Mock
Creates a mock instance of the field it annotates. This mock instance can be
used to stub method calls and verify interactions.
⦁ @Spy
Creates a spy instance of the field it annotates. A spy is a partial mock, which
allows you to call real methods while still being able to stub and verify
interactions.
⦁ @Captor
Creates an ArgumentCaptor instance for capturing method arguments. This is
useful for verifying that certain arguments were passed to mock methods
⦁ @InjectMocks
Injects mock or spy fields into the annotated field. This is useful for testing
classes that have dependencies.
⦁ @ExtendWith(MockitoExtension.class):
This tells JUnit 5 to enable the Mockito extension, which initializes the mocks.
⦁ @SpringBootTest
It is used for integration testing in Spring Boot, starting the entire application
context.

2. Difference between @Mock and @MockBean


@Mock:
⦁ Used in unit tests.
⦁ Part of Mockito.
⦁ Requires @ExtendWith(MockitoExtension.class) for initialization.
⦁ Scope is within the test class.
@MockBean:

⦁ Used in integration tests.
⦁ Part of Spring Boot testing framework.
⦁ Automatically managed by Spring context.
⦁ Scope is within the Spring application context, allowing for injection into
other Spring-managed beans.
Choose @Mock when writing unit tests and you want to mock dependencies
within the test class.

Choose @MockBean when writing integration tests and you want to mock beans
within the Spring application context.

3. Controller Layer Testing

@AutoConfigureMockMvc: This annotation auto-configures the MockMvc object, which is used for testing the controller layer.

MockMvc: MockMvc is a powerful utility provided by Spring Test for testing Spring MVC controllers. It allows you to perform HTTP requests and verify responses without the need to start a web server.
@Test
public void createUserTest() throws Exception {
    UserDto dto = mapper.map(user, UserDto.class);

    Mockito.when(userService.createUser(Mockito.any())).thenReturn(dto);

    // actual request for url
    this.mockMvc.perform(MockMvcRequestBuilders.post("/users")
            .contentType(MediaType.APPLICATION_JSON)
            .content(convertObjectToJsonString(user))
            .accept(MediaType.APPLICATION_JSON))
            .andDo(print())
            .andExpect(status().isCreated())
            .andExpect(jsonPath("$.name").exists());
}

@Test
public void updateUserTest() throws Exception {

    // /users/{userId} + PUT request + json
    String userId = "123";

    UserDto dto = this.mapper.map(user, UserDto.class);

    Mockito.when(userService.updateUser(Mockito.any(), Mockito.anyString())).thenReturn(dto);

    this.mockMvc.perform(MockMvcRequestBuilders.put("/users/" + userId)
            .header(HttpHeaders.AUTHORIZATION,
                    "Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJkdXJnZXNoQGRldi5pbiIsImlhdCI6MTY3NTI0OTA0MywiZXhwIjoxNjc1MjY3MDQzfQ.HQbZ4BrQlAgd5X40RZJhSMZ0zgZAfDcQtxJaSy97YZHgdNBV0g2r7-ZXRmw1EkKhkFtdkytG_E6I7MnsxVEZqg")
            .contentType(MediaType.APPLICATION_JSON)
            .content(convertObjectToJsonString(user))
            .accept(MediaType.APPLICATION_JSON))
            .andDo(print())
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.name").exists());
}

MockMvc Components
⦁ MockMvcRequestBuilders:
This is a factory class for creating RequestBuilder instances for different HTTP
methods (GET, POST, PUT, DELETE, etc.).

⦁ MockMvcResultMatchers:
This is a factory class for creating ResultMatcher instances to verify response
status, headers, content, and more.

⦁ MockMvcResultHandlers:
This is a factory class for creating ResultHandler instances to perform actions
on the result, such as printing the response

4.Service Layer Testing


@SpringBootTest
public class UserServiceTest {

    @MockBean
    private UserRepository userRepository;

    @MockBean
    private RoleRepository roleRepository;

    @Autowired
    private UserService userService;

    @Autowired
    private ModelMapper mapper;

    User user;
    Role role;
    String roleId;

    @BeforeEach
    public void init() {
        role = Role.builder().roleId("abc").roleName("NORMAL").build();
        user = User.builder()
                .name("Durgesh")
                .email("durgesh@gmail.com")
                .about("This is testing create method")
                .gender("Male")
                .imageName("abc.png")
                .password("lcwd")
                .roles(Set.of(role))
                .build();
        roleId = "abc";
    }

    // create user
    @Test
    public void createUserTest() {
        Mockito.when(userRepository.save(Mockito.any())).thenReturn(user);
        Mockito.when(roleRepository.findById(Mockito.anyString())).thenReturn(Optional.of(role));

        UserDto user1 = userService.createUser(mapper.map(user, UserDto.class));

        Assertions.assertNotNull(user1);
        Assertions.assertEquals("Durgesh", user1.getName());
    }

    @Test
    public void deleteUserTest() {
        String userid = "userIdabc";
        Mockito.when(userRepository.findById("userIdabc")).thenReturn(Optional.of(user));

        userService.deleteUser(userid);

        Mockito.verify(userRepository, Mockito.times(1)).delete(user);
    }
}

5. Difference between Mock and PowerMock


Mockito: Best for most use cases, focusing on mocking interfaces and verifying
interactions. It is simple, easy to use, and sufficient for testing most Java code.

PowerMock: Used when more advanced features are needed, such as mocking
static methods, constructors, and private methods. It is more powerful but also
more complex and should be used cautiously.
@RunWith(PowerMockRunner.class)
@PrepareForTest(Utils.class)
public class MyServiceTest {

    @InjectMocks
    private MyService myService;

    @Test
    public void testProcess() {
        // Mock the static method
        PowerMockito.mockStatic(Utils.class);
        when(Utils.staticMethod("test")).thenReturn("Mocked value");

        // Call the method to be tested
        String result = myService.process("test");

        // Verify the result
        assertEquals("Mocked value", result);

        // Verify that the static method was called
        PowerMockito.verifyStatic(Utils.class);
        Utils.staticMethod("test");
    }
}

@RunWith(PowerMockRunner.class): This tells JUnit to use PowerMock's test runner.

@PrepareForTest(Utils.class): This tells PowerMock to prepare the Utils class for testing, allowing it to mock static methods.

PowerMockito.mockStatic(Utils.class): This mocks all static methods in the Utils class.

when(Utils.staticMethod("test")).thenReturn("Mocked value"): This sets up the mock to return "Mocked value" when Utils.staticMethod("test") is called.

6. What is Parameterized Testing ?


Parameterized test in JUnit allows to run the same test method multiple times
with different sets of parameters. This is useful for testing a method or
functionality with a variety of input values to ensure it behaves correctly in
different scenarios.

@ParameterizedTest: Marks a method as a parameterized test.

@ValueSource: Provides simple values for the test method.

@MethodSource: Provides complex data via a method.

@CsvSource: Provides data in CSV format.

####################### Swagger(OpenApi) ########################

1. Swagger:- Swagger is an open-source framework used to design, build, document, and consume RESTful web services. It simplifies API development and ensures consistency and quality across API implementations.

2. Key Components of Swagger

1. Swagger Specification (OpenAPI Specification):
The core of Swagger is the OpenAPI Specification (OAS), a standard format for describing REST APIs. It defines endpoints, request/response formats, authentication methods, and other aspects of the API in machine-readable JSON or YAML.

2. Swagger Editor:
An online tool (or local installation) that lets developers create and edit OpenAPI specifications. It provides real-time feedback and error checking so that the API specification is formatted correctly.

3. Swagger UI:
A web-based interface that automatically generates documentation from the OpenAPI Specification. It lets users interact with the API directly from the browser and test the different endpoints and methods.

4. Swagger Codegen:
A tool that automatically generates client libraries, server stubs, API documentation, and configuration from the OpenAPI Specification. It supports multiple programming languages and frameworks, making it easy to integrate the API into different tech stacks.

5. SwaggerHub:
A collaborative platform for designing, documenting, and managing APIs. It gives teams a centralized place to work on APIs, ensuring consistency and collaboration throughout the development lifecycle.

3. Common Swagger Annotations

1. @Api:
Used to annotate an API class.
2. @ApiOperation:
Used to describe a specific HTTP operation.
3. @ApiParam:
Used to describe method parameters.
4. @ApiModel:
Used to define models that are used in request and response payloads.
5. @ApiModelProperty:
Used to describe model properties.
6. @ApiResponses:
Used to describe multiple possible responses.
7. @ApiResponse:
Used to describe a specific response.
EXAMPLE:-
@Api(value = "User Management System")
@RestController
@RequestMapping("/api/v1")
public class UserController {

    @ApiOperation(value = "Get a list of users", response = List.class)
    @GetMapping("/users")
    public ResponseEntity<List<User>> getAllUsers() {
        // Dummy data for example
        List<User> users = new ArrayList<>();
        users.add(new User(1L, "John Doe"));
        users.add(new User(2L, "Jane Doe"));
        return ResponseEntity.ok(users);
    }

    @ApiOperation(value = "Get a user by Id", response = User.class)
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Successfully retrieved user"),
            @ApiResponse(code = 404, message = "User not found")
    })
    @GetMapping("/users/{id}")
    public ResponseEntity<User> getUserById(
            @ApiParam(value = "ID of the user to retrieve", required = true)
            @PathVariable("id") Long id) {
        // Dummy data for example
        User user = new User(id, "John Doe");
        return ResponseEntity.ok(user);
    }
}

@ApiModel(description = "Details about the user")
class User {

    @ApiModelProperty(notes = "The unique ID of the user")
    private Long id;

    @ApiModelProperty(notes = "The user's name")
    private String name;

    // Constructor, getters, and setters
}

############### How to Connect to S3 bucket ################


1. Add the AWS SDK Dependency
2. Set Up AWS Credentials
The AWS SDK uses credentials to authenticate with S3. Like
⦁ AWS_ACCESS_KEY_ID: Your AWS access key.
⦁ AWS_SECRET_ACCESS_KEY: Your AWS secret key.
3. Create S3 Client
⦁ Use S3Client.builder() to configure and build an S3 client.
⦁ Specify the AWS region and credentials provider.
4. List Objects:
⦁ Use listObjectsV2 to retrieve a list of objects in the bucket.
5. Upload a File:
⦁ Use putObject to upload a file to the bucket.
⦁ Specify the bucket name, object key, and file path.
6. Download a File:
⦁ Use getObject to download a file from the bucket.
⦁ Specify the bucket name, object key, and download path.

############### Send SMS Using Twilio WhatsApp API ##########


Step 1: Create a Twilio Account

⦁ Get your Twilio Account SID, Auth Token, and WhatsApp-enabled phone
number.
Step 2: Add Twilio SDK to Your Spring Boot Project
Step 3: Configure Twilio Credentials
⦁ Add your Twilio credentials to the application.properties file:
⦁ twilio.account-sid=your-account-sid
⦁ twilio.auth-token=your-auth-token
⦁ twilio.whatsapp-number=whatsapp:+14155238886 # Twilio's sandbox
WhatsApp number
Step 4: Send a WhatsApp Message
⦁ Create a service to send WhatsApp messages:
⦁ Initialize Twilio with credentials & Create and send an SMS message
Step 5: Test the Integration
⦁ Create a REST controller to trigger the service:

############### Send SMS by Twillio Integration ###############


Step 1: Set Up a Twilio Account
⦁ Get your Account SID, Auth Token, and a Twilio Phone Number (you can get
a free one for testing).
Step 2: Add the Twilio SDK Dependency
Step 3: Configure Twilio Credentials
⦁ Add your Twilio credentials to the application.properties file:
⦁ twilio.account-sid=your-account-sid
⦁ twilio.auth-token=your-auth-token
⦁ twilio.phone-number=+1234567890 # Your Twilio phone number
Step 4: Create a Twilio SMS Service
⦁ Create a service to handle SMS sending:
⦁ Initialize Twilio with credentials & Create and send an SMS message
Step 5: Create a REST Controller

⦁ Expose an endpoint to send SMS messages:

############ Integrate SendGrid for Email (Twillio) #########


Step 1: Set Up a SendGrid Account
⦁ Sign up at SendGrid.
⦁ Create an API Key with the required permissions for sending emails.
⦁ Verify your sender email address or domain.
Step 2: Add SendGrid Dependency
Step 3: Configure SendGrid API Key
⦁ Add the SendGrid API key to your application.properties:
⦁ sendgrid.api-key=your-sendgrid-api-key
Step 4: Create a SendGrid Email Service
⦁ Create a service to send emails using SendGrid:
Step 5: Create a REST Controller
⦁ Expose an endpoint to send emails:

############### Stripe PAYMENT INTEGRATION ################


Step 1: Set Up Stripe Account
⦁ Create a Stripe Account: Sign up for a Stripe account at Stripe.
⦁ Get API Keys: After logging in, navigate to the Developers section to get your
publishable key and secret key. These keys are essential for making API calls.
Step 2: Add Stripe Dependencies
Step 3: Configure Stripe Keys
Store your Stripe API keys in your application.properties
stripe.api.key=your_secret_key_here
stripe.publishable.key=your_publishable_key_here
Step 4: Create a configuration class to initialize Stripe with your secret key:

@Configuration
public class StripeConfig {

    @Value("${stripe.api.key}")
    private String stripeApiKey;

    @PostConstruct
    public void init() {
        Stripe.apiKey = stripeApiKey;
    }
}

Step 5: Create a service class to handle payment processing:

@Service
public class PaymentService {

    public PaymentIntent createPaymentIntent(int amount, String currency,
            String paymentMethodType) throws StripeException {
        Map<String, Object> params = new HashMap<>();
        params.put("amount", amount);
        params.put("currency", currency);
        params.put("payment_method_types", new String[]{paymentMethodType});

        return PaymentIntent.create(params);
    }
}

Step 6: Create a controller to handle payment requests:

@RestController
@RequestMapping("/api/payment")
public class PaymentController {

    @Autowired
    private PaymentService paymentService;

    @PostMapping("/create-payment-intent")
    public PaymentIntent createPaymentIntent(@RequestParam int amount,
            @RequestParam String currency,
            @RequestParam String paymentMethodType) throws StripeException {
        return paymentService.createPaymentIntent(amount, currency, paymentMethodType);
    }
}

Step 7: Handle Payment on Frontend

################### Project Questions ######################
1. Explain where you use List, Set, and Map in your project.
A. List :- List is an ordered collection that allows duplicate elements. It is typically
used when the order of elements is important or when duplicates are allowed.
# Usages in HostelWorld.com:-
⦁ Search results: When customers search for hotels, the results can be stored
in a List to maintain the order based on relevance or any applied sorting
criteria.
⦁ Customer reviews: A List of review objects can store multiple reviews for a
single hotel, preserving the order in which reviews were added.
⦁ Maintaining User Preferences: Store a list of user preferences or recently
viewed items in the order they were accessed.
B. Set:- Set is an unordered collection that does not allow duplicate elements. It is
used when uniqueness is a key requirement.
# Usages in HostelWorld.com:-
⦁ Unique customer IDs: A Set can store customer IDs to ensure there are no
duplicates.
⦁ Cities with available hotels: To keep track of all cities where hotels are
available without duplicates.
⦁ Storing Unique Search Keywords: Collect unique search keywords entered by
users to analyze popular search terms.
C. Map :- Map is a collection that maps keys to values, with no duplicate keys
allowed. It is used for fast lookups based on unique keys.
# Usages in HostelWorld.com:-
⦁ Hotel details: A Map where the key is the hotel ID and the value is the hotel
object can quickly retrieve hotel information.
⦁ Booking history: A Map where the key is the customer ID and the value is a
list of bookings to quickly find all bookings made by a customer.
⦁ Customer information: A Map where the key is the customer ID and the
value is the customer object for fast access to customer details.
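The three choices above can be shown in a small self-contained sketch (class, hotel, and city names are illustrative):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionUsageDemo {

    // List: search results keep their ranking order and may contain duplicates.
    public static List<String> searchResults() {
        return List.of("Hotel A", "Hotel B", "Hotel A");
    }

    // Set: city names are de-duplicated automatically.
    public static Set<String> cities() {
        return new HashSet<>(List.of("Delhi", "Mumbai", "Delhi"));
    }

    // Map: hotel id -> hotel details for fast lookups by unique key.
    public static Map<Integer, String> hotelsById() {
        Map<Integer, String> hotels = new HashMap<>();
        hotels.put(101, "Hotel A");
        hotels.put(102, "Hotel B");
        return hotels;
    }

    public static void main(String[] args) {
        System.out.println(searchResults().size()); // 3 -- duplicate kept, order preserved
        System.out.println(cities().size());        // 2 -- duplicate dropped
        System.out.println(hotelsById().get(101));  // Hotel A
    }
}
```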
2. Explain where you use HashMap and ConcurrentHashMap in your project.
A. HashMap :- HashMap is best suited for scenarios where you need a fast, non-
thread-safe collection for storing key-value pairs. It is ideal for read-heavy
operations where concurrency is not a concern.
# Usages in HostelWorld.com:-
We can use HashMap to store static data like room types, hotel categories, or
payment statuses, which are read frequently but rarely modified.

B. ConcurrentHashMap :- ConcurrentHashMap is designed for concurrent access,


providing thread-safety without the need for external synchronization. It is
suitable for scenarios where multiple threads may read from and write to the map
concurrently.
# Usages in HostelWorld.com:-
Real-time Inventory Management:- If we needs to track and update inventory
levels (e.g., available flight seats) in real-time, ConcurrentHashMap can be used to
safely update inventory counts across multiple threads.
Benefit: Allows multiple threads to update inventory counts concurrently, ensuring
data consistency and thread safety.
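The inventory scenario can be sketched as below. merge() on a ConcurrentHashMap is atomic per key, so concurrent updates are never lost and no external locking is needed (class name and flight id are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InventoryDemo {

    // Simulates many booking threads updating the same flight's counter.
    public static int bookConcurrently(int bookings) {
        ConcurrentHashMap<String, Integer> booked = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < bookings; i++) {
            // merge() is atomic for this key: read, add, and write happen as one step.
            pool.submit(() -> booked.merge("FL-101", 1, Integer::sum));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return booked.getOrDefault("FL-101", 0);
    }

    public static void main(String[] args) {
        System.out.println(bookConcurrently(1000)); // 1000 -- no lost updates
    }
}
```

With a plain HashMap the same loop would intermittently lose updates, which is exactly the difference the answer above describes.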

3. Explain where you use threading in your project.


⦁ Handling Concurrent Requests: When multiple users are making search
queries or booking requests simultaneously, threading help to handle these
requests concurrently, ensuring faster response times.
⦁ Background Tasks: Tasks such as data synchronization, cache updates, and
report generation performed in the background without blocking the main
application flow.
⦁ Real-time Notifications: Sending real-time notifications to users (e.g., flight
updates, booking confirmations) done by using threading to ensure quick
delivery without blocking other operations.
⦁ Concurrent Data Fetching: Fetching data from multiple sources (e.g., flight

APIs, hotel databases) concurrently to improve the performance of search
operations.
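The concurrent data-fetching case can be sketched with CompletableFuture. The fetch methods here are hypothetical stand-ins for real flight-API and hotel-database calls:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentFetchDemo {

    // Stand-ins for slow calls to a flight API and a hotel database.
    static String fetchFlights() { return "flights"; }
    static String fetchHotels() { return "hotels"; }

    // Runs both fetches in parallel and combines the results once both finish;
    // neither call blocks the other, so total latency is max(a, b) not a + b.
    public static String fetchBoth() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            CompletableFuture<String> flights =
                    CompletableFuture.supplyAsync(ConcurrentFetchDemo::fetchFlights, pool);
            CompletableFuture<String> hotels =
                    CompletableFuture.supplyAsync(ConcurrentFetchDemo::fetchHotels, pool);
            return flights.thenCombine(hotels, (f, h) -> f + "+" + h).join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchBoth()); // flights+hotels
    }
}
```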

4. Explain where you use the Stream API in your project.


⦁ Processing Search Results: When users perform searches, you might need to
filter, sort, and process the search results.
⦁ Aggregating User Reviews: Aggregate user reviews to calculate average
ratings for hotels, flights, or other services.
⦁ Generating Reports: Generate reports by grouping and summarizing data,
such as total bookings per destination.
⦁ Filtering and Collecting User Preferences: Filter user preferences and collect
them into a set to ensure uniqueness.
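For example, aggregating user reviews into an average rating with filter and mapToInt (the review data is illustrative):

```java
import java.util.List;

public class StreamDemo {

    record Review(String hotel, int rating) {}

    static final List<Review> REVIEWS = List.of(
            new Review("Hotel A", 5),
            new Review("Hotel A", 3),
            new Review("Hotel B", 4));

    // Filter reviews for one hotel and aggregate them into an average rating.
    public static double averageRating(String hotel) {
        return REVIEWS.stream()
                .filter(r -> r.hotel().equals(hotel))
                .mapToInt(Review::rating)
                .average()
                .orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(averageRating("Hotel A")); // 4.0
    }
}
```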
5. Explain where you use the Time API in your project.
⦁ Booking and Reservation Timestamps: Record and manage booking
timestamps to track when bookings were made.
⦁ Handling Time Zones: Manage times across different time zones, such as
converting local times to UTC.
⦁ Recurring Events and Reminders: Schedule recurring events, such as sending
reminders for upcoming flights.
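The time-zone case can be shown with java.time: converting a local booking timestamp to UTC keeps the same instant while changing the zone (the timestamp values are illustrative):

```java
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class TimeZoneDemo {

    // Same instant, different zone: withZoneSameInstant shifts the wall-clock
    // time by the zone offset instead of keeping the local fields.
    public static ZonedDateTime toUtc(ZonedDateTime local) {
        return local.withZoneSameInstant(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        // A booking captured in the user's local zone (IST, UTC+05:30)...
        ZonedDateTime bookedAt =
                ZonedDateTime.of(2024, 5, 1, 18, 30, 0, 0, ZoneId.of("Asia/Kolkata"));

        // ...stored and compared in UTC: 18:30 IST is 13:00 UTC.
        System.out.println(toUtc(bookedAt));
    }
}
```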

6. Explain where you use a circuit breaker in your project.


⦁ Handling External API Failures:
# I used the circuit breaker concept in payment handling, where we integrate a third-party payment API. In the normal flow, the user initiates a process that includes a payment, the system calls the external payment API to process it, and if the payment succeeds the process completes successfully. If the external payment API fails (because it is down or slow), the circuit breaker comes into the picture. When the system encounters failures while calling the payment API, it keeps retrying up to a certain threshold. After repeated failures, the circuit breaker "opens," which means it stops calling the payment API directly to prevent further overload. Instead of failing the entire process immediately, the circuit breaker lets you implement a fallback, so the system remains functional and can recover from failures gracefully. For that I used intermediate storage, where I temporarily store the payment requests while the external payment service is down (this storage can be SQL (database) or Apache Kafka (message queue)).
Storing in a Database

public String paymentFallback(PaymentRequest request, Throwable t) {
    // Save the payment request to the database
    PaymentRequestEntity entity = new PaymentRequestEntity();
    entity.setUserId(request.getUserId());
    entity.setAmount(request.getAmount());
    entity.setStatus("PENDING");
    paymentRequestRepository.save(entity);

    return "Payment is pending and will be processed soon.";
}

Storing in a Message Queue

public String paymentFallback(PaymentRequest request, Throwable t) {
    // Send the payment request to a message queue
    kafkaTemplate.send("payment_requests", request);

    return "Payment is pending and will be processed soon.";
}

# Provide retry logic: a method to handle payment requests that failed because the payment API was unavailable. For this we used:
⦁ Polling mechanism: regularly checks for pending payment requests.
⦁ Retry mechanism: re-attempts the payment (and notifies the user) once the API is available again.

⦁ Cache Service Failures: In my project we used a Redis cache to store static data like country names, country codes, popular destinations, and top-rated hotels. If the cache service is down or fails, we fall back to calling the database directly in the fallback method.

7. Explain where you use these design patterns in your project.

1. Singleton Design Pattern
It ensures a class only has one instance, and provides a global point of access to it.

# Key Component of Singleton Method Design Pattern:


Static Member: This static member ensures that memory is allocated only once,
preserving the single instance of the Singleton class.

Private Constructor: The Singleton pattern uses a private constructor, which serves as a barricade against external attempts to create instances of the Singleton class. This ensures that the class has control over its instantiation process.

Static Factory Method: A crucial aspect of the Singleton pattern is the presence of
a static factory method. This method acts as a gateway, providing a global point of
access to the Singleton object. When someone requests an instance, this method
either creates a new instance (if none exists) or returns the existing instance to the
caller.

# Rules To Develop Singleton Java Class:


1. Declare a private static reference variable to hold the current class object. This reference holds null only at first; after initialization it refers to the object for as long as the JVM runs. We will initialize this reference using the static factory method discussed in step 3.

2. Declare all constructors as private, so that objects cannot be created from outside the class using the new keyword.

3. Develop a static final factory method, which returns a new object only on the first call and the same object on every call thereafter. Since we have only a private constructor, the new keyword cannot be used from outside the class, so this method must be static so that it can be accessed directly using the class name. Declare it final so that a child class has no option to override it and change the default behavior.

4. Make your Singleton class Reflection-API proof.
We know that the Reflection API can access the private variables, methods, and constructors of a class, so even if your constructor is private, an object of the class can still be created. To prevent this, declare an instance boolean variable initially holding true, and change its value to false as soon as the constructor is called for the first time. Then, whenever the constructor is called a second time, it should throw an exception saying the object cannot be created multiple times. This approach also removes the double-checking problem when multiple threads try to create the object at the same time, which we will discuss later.

5. Make your factory method thread safe, so that only one object is created even
if more than one thread calls the method simultaneously. Either declare the whole
method as synchronized or use a synchronized block; instead of synchronizing the
whole factory method, it is better to place only the condition-check part inside a
synchronized block.

There is a problem with the above approach: after the first call to getInstance(),
every subsequent call to getInstance() still performs the instance == null check,
and while doing so it acquires the lock to verify the condition, which is not
required. Acquiring and releasing locks is quite costly and we must try to avoid
it as much as we can. To solve this problem we can use double-checked locking
(two null checks) for the condition.

It is good practice to declare the static instance member as volatile to avoid
problems in a multi-threaded environment.

Note: If you have used the Reflection-proof logic, then there is no need to worry
about the 2nd null check, because when the constructor is called a second time it
will throw the exception from step 4.

6. Prevent your Singleton object from de-serialization. If you need your singleton
object to be sent across the network, your Singleton class must implement the
Serializable interface. The problem with this is that it can be de-serialized any
number of times, and each deserialization creates a brand-new object, which
violates the Singleton Design Pattern. To prevent multiple object creation during
deserialization, override readResolve() and return the same object. readResolve()
is called internally during deserialization; it is used to replace the de-serialized
object with one of your choice.

Note: Ignore this step if your class does not implement Serializable, directly or
indirectly. Indirectly means some super class or super interface has
implemented/extended the Serializable interface.

7. Prevent your singleton object from being cloned. If your class is a direct child
of the Object class, I would suggest not implementing the Cloneable interface, as
there is no point in cloning a singleton to produce duplicate objects; the two
ideas contradict each other. However, if your class is a child of some other class
or interface that has implemented/extended Cloneable, then somebody may clone
your singleton class and thereby create many objects. We must prevent this as
well: override clone() in your singleton class and return the same old object, or
throw CloneNotSupportedException.

import java.io.Serializable;

public class Singleton implements Serializable, Cloneable {

    private static final long serialVersionUID = 1L;

    // Rule 1: Static variable to hold a single instance
    private static volatile Singleton instance;

    // Rule 2 & 4: Private constructor to prevent instantiation (Reflection proof)
    private Singleton() {
        if (instance != null) {
            throw new RuntimeException("Reflection is not allowed to create a new instance");
        }
    }

    // Rule 3 & 5: Static factory method with double-checked locking for thread safety
    public static Singleton getInstance() {
        if (instance == null) {                  // First check (no lock)
            synchronized (Singleton.class) {
                if (instance == null) {          // Second check (with lock)
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    // Rule 6: Prevent deserialization from creating a new instance
    protected Object readResolve() {
        return getInstance();
    }

    // Rule 7: Prevent cloning
    @Override
    protected Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException("Cloning is not allowed for Singleton");
    }
}

Advantages:
⦁ Single Instance ensures that only one instance of the class exists throughout
the application's lifetime.
⦁ Global Access provides a centralized point for accessing the instance,
facilitating easy communication and resource sharing.

Disadvantages:
⦁ Global State: Can introduce global state, affecting testability.
⦁ Limited Extensibility: Hard to subclass or mock for testing.
⦁ Violates Single Responsibility Principle: Combines instance management
with class logic.

Examples:
⦁ Logging: Centralized logging across the application.
⦁ Database Connection Pool: Managing shared database connections.
⦁ Caching: Maintaining a single cache instance.
⦁ Configuration Management: Global application settings.
⦁ Thread Pools: Managing a limited set of worker threads.

2. Abstract Factory Design Pattern


The Abstract Factory Pattern is a creational design pattern that provides an
interface for creating families of related or dependent objects without specifying
their concrete classes. It helps in maintaining consistency across related objects
and ensures scalability.
# In my project I used this pattern in many places :-
⦁ Room Booking System: Create different room types (DeluxeRoomFactory,
SuiteRoomFactory).
⦁ Notification Service: Different notification types (EmailFactory, SMSFactory,
PushNotificationFactory).
⦁ Report Generation: Generate different reports (PDFReportFactory,
ExcelReportFactory).

3. Strategy Design Pattern (Behavioral Pattern) :-


# The Strategy pattern is all about implementing 3 principles:
1. Prefer composition over inheritance.
2. Always code to interfaces and never to implementation classes.
3. Code should be open to extension and closed for modification.

The Strategy Pattern is used when we want to define a family of algorithms
(strategies) and make them interchangeable at runtime. This pattern helps in
writing flexible, maintainable, and extensible code by avoiding multiple if-else or
switch statements.
Why Use Strategy Pattern?
⦁ Removes if-else complexity :- Avoids hardcoded logic for different behaviors.
⦁ Encapsulation :- Each strategy (algorithm) is encapsulated in a separate class.
⦁ Interchangeable Behaviors :- Algorithms can be changed at runtime without
modifying existing code.
⦁ Open/Closed Principle :- Easily add new strategies without modifying the
existing system.
Strategy Pattern in a Hotel Booking System
Scenario: A hotel offers different discount strategies for customers:
⦁ Regular Discount (10%)
⦁ Seasonal Discount (20%)

⦁ No Discount
Instead of using multiple if-else conditions, we can use the Strategy Pattern.
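A minimal sketch of the discount scenario above; the class names (DiscountStrategy, BillingService) are illustrative assumptions, not from an actual project:

```java
// Strategy interface: one family of interchangeable algorithms
interface DiscountStrategy {
    double apply(double amount);
}

class RegularDiscount implements DiscountStrategy {
    public double apply(double amount) { return amount * 0.90; } // 10% off
}

class SeasonalDiscount implements DiscountStrategy {
    public double apply(double amount) { return amount * 0.80; } // 20% off
}

class NoDiscount implements DiscountStrategy {
    public double apply(double amount) { return amount; }
}

// Context: composed with a strategy, never hardcodes the algorithm (no if-else)
class BillingService {
    private final DiscountStrategy strategy;

    BillingService(DiscountStrategy strategy) { this.strategy = strategy; }

    double finalPrice(double base) { return strategy.apply(base); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        System.out.println(new BillingService(new RegularDiscount()).finalPrice(100.0));
        System.out.println(new BillingService(new SeasonalDiscount()).finalPrice(100.0));
    }
}
```

Adding a new discount type means adding one class; BillingService never changes, which is the Open/Closed Principle in action.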

4. Builder Design Pattern?


The Builder Pattern is a creational design pattern that helps construct complex
objects step by step. Instead of a constructor with too many parameters, it
provides a flexible way to create objects: it avoids long constructors and allows
method chaining for better readability, and it makes it easy to build immutable
objects. In a hotel booking system it is a good fit for creating booking requests,
rooms, or customer profiles.

# In my project I used this pattern in many places :-


In a hotel booking system, a HotelBooking object may have multiple optional and
required attributes, such as:
⦁ Mandatory Fields: hotelName, customerName, checkInDate, checkOutDate
⦁ Complex Search Queries :- when users perform complex searches with
multiple criteria (e.g., destination, dates, budget, preferences), a Builder is
used to construct these search queries in a flexible and readable manner.
⦁ Optional Fields: roomType, breakfastIncluded, specialRequests,
discountCode
⦁ Room Object Creation: Construct a Room object with different features (AC,
WiFi, View, etc.).
⦁ Customer Profile: Build a complex user profile with optional fields.
⦁ Invoice Generation: Create an invoice with optional discounts, taxes, etc.
Using the Builder Pattern, we can create HotelBooking objects without needing
long constructors.
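A minimal sketch of such a HotelBooking builder, using a subset of the fields listed above (mandatory fields go in the Builder constructor, optional ones get chained setters; the exact field set is illustrative):

```java
// Immutable booking assembled step by step instead of via a long constructor
class HotelBooking {
    private final String hotelName;          // mandatory
    private final String customerName;       // mandatory
    private final String roomType;           // optional
    private final boolean breakfastIncluded; // optional

    private HotelBooking(Builder b) {
        this.hotelName = b.hotelName;
        this.customerName = b.customerName;
        this.roomType = b.roomType;
        this.breakfastIncluded = b.breakfastIncluded;
    }

    @Override
    public String toString() {
        return hotelName + "/" + customerName + "/" + roomType + "/" + breakfastIncluded;
    }

    static class Builder {
        private final String hotelName;
        private final String customerName;
        private String roomType = "Standard";        // sensible default
        private boolean breakfastIncluded = false;   // sensible default

        Builder(String hotelName, String customerName) { // mandatory fields up front
            this.hotelName = hotelName;
            this.customerName = customerName;
        }

        Builder roomType(String roomType) { this.roomType = roomType; return this; }
        Builder breakfastIncluded(boolean b) { this.breakfastIncluded = b; return this; }

        HotelBooking build() { return new HotelBooking(this); }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        HotelBooking booking = new HotelBooking.Builder("Grand Plaza", "Alice")
                .roomType("Deluxe")
                .breakfastIncluded(true)
                .build();
        System.out.println(booking); // Grand Plaza/Alice/Deluxe/true
    }
}
```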
5. Prototype Design Pattern?
The Prototype Pattern is a creational design pattern used to create new objects by
copying an existing object instead of creating a new instance from scratch.
It is useful when:
⦁ Object creation is costly (e.g., fetching from a database).
⦁ We need many instances with slight modifications.
⦁ We want to clone an object instead of reconstructing it.

# In my project I used this pattern in many places :-


⦁ Room Templates : Clone predefined room configurations (DeluxeRoom,
SuiteRoom).
⦁ Invoice Generation : Duplicate an invoice and modify customer details.
⦁ Discount Coupons : Clone a standard discount voucher for different
customers.
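A minimal sketch of the room-template case above, assuming a hypothetical RoomTemplate class whose fields are primitives and immutable strings, so a shallow clone() is sufficient:

```java
// Prototype: new rooms are produced by copying a pre-configured template
class RoomTemplate implements Cloneable {
    String type;
    double price;
    boolean hasWifi;

    RoomTemplate(String type, double price, boolean hasWifi) {
        this.type = type;
        this.price = price;
        this.hasWifi = hasWifi;
    }

    @Override
    public RoomTemplate clone() {
        try {
            // Shallow copy is enough here: all fields are primitives or immutable
            return (RoomTemplate) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}

public class PrototypeDemo {
    public static void main(String[] args) {
        RoomTemplate deluxe = new RoomTemplate("Deluxe", 150.0, true);
        RoomTemplate discounted = deluxe.clone();
        discounted.price = 120.0; // slight modification, no rebuild from scratch
        System.out.println(deluxe.price + " " + discounted.price); // 150.0 120.0
    }
}
```

If a template held mutable objects (e.g., a list of amenities), clone() would also need to copy those to avoid two templates sharing state.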

6. CQRS Design Pattern


CQRS (Command Query Responsibility Segregation) is an architectural pattern
that separates:
⦁ Commands (Write Operations) → Responsible for modifying data (Create,
Update, Delete).
⦁ Queries (Read Operations) → Responsible for retrieving data (Read
operations).
This pattern is useful for improving scalability, performance, and security by
handling reads and writes separately.
# In my project I used this pattern in many places :-
⦁ User Profile Updates: I Separate the logic for updating user profiles
(commands) from the logic for reading user profile information (queries).
⦁ Notification Systems: Separate the process of sending notifications
(commands) from the process of querying notification history or statuses
(queries).
⦁ Booking Management: In my project I separate the booking process from the
process of viewing booking details.
⦁ Room Availability: Commands update room availability, queries fetch
available rooms.
⦁ Invoice Processing: Write service handles invoice generation, query service
retrieves past invoices.
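A condensed sketch of the command/query split. Real CQRS deployments typically use separate write and read stores kept in sync asynchronously; here a single in-memory map stands in for both, purely to keep the example runnable (all class names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Write side: commands mutate state and return nothing to read from
class BookingCommandService {
    private final Map<String, String> store; // stands in for the write database
    BookingCommandService(Map<String, String> store) { this.store = store; }

    void createBooking(String id, String details) {
        store.put(id, details); // command: mutate only
    }
}

// Read side: queries never mutate, so they can be cached and scaled independently
class BookingQueryService {
    private final Map<String, String> store; // stands in for the read model
    BookingQueryService(Map<String, String> store) { this.store = store; }

    Optional<String> getBooking(String id) {
        return Optional.ofNullable(store.get(id));
    }
}

public class CqrsDemo {
    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        new BookingCommandService(store).createBooking("B1", "Deluxe, 2 nights");
        System.out.println(new BookingQueryService(store).getBooking("B1").orElse("none"));
    }
}
```

The payoff is that each side can evolve separately: the query service can add caching or a denormalized read model without touching the command logic.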

7. Saga Design Pattern :-

The Saga Pattern is a distributed transaction management pattern that ensures
data consistency across multiple microservices. Instead of a single database
transaction, a saga represents a sequence of steps, where each step is either
committed or compensated (rolled back) if something fails.

Scenario: Hotel Room Booking


A hotel booking process involves multiple microservices:
1. Booking Service → Creates the booking.
2. Payment Service → Processes the payment.
3. Inventory Service → Reserves the room.
4. Notification Service → Sends a confirmation email.
If any step fails, previous steps must be rolled back (e.g., refund payment, release
room).
# Saga Implementation Approaches
There are two main ways to implement the Saga Pattern:
1. Choreography (most commonly used, e.g., with Kafka) → Each microservice
listens to events and reacts accordingly.
2. Orchestration → A central Saga Coordinator controls the workflow.
1. Saga Pattern Implementation (Choreography)
A Saga Pattern is a distributed transaction management technique used in
microservices architectures. It ensures eventual consistency by breaking a large
transaction into multiple smaller, independent transactions that communicate
asynchronously.
Choreography Saga means:
⦁ Each microservice listens for events and reacts accordingly.
⦁ There is no central orchestrator; microservices communicate through events.
⦁ It is ideal when transactions are loosely coupled and services are
autonomous.
Use Case: Hotel Booking System
Business Flow:
1. Customer requests a hotel booking.
2. Booking Service initiates the request and publishes a booking-created event.
3. Inventory Service listens to booking-created, checks room availability, and
publishes either:
✅ room-available → If rooms are available, the process continues.
❌ room-not-available → Booking is canceled.
4. Payment Service listens to booking-created, processes the payment, and
publishes either:
✅ payment-processed → If successful, the process continues.
❌ payment-failed → The refund process is triggered.
5. If payment is successful, Inventory Service reserves a room (room-reserved
event).
6. Loyalty Service listens for room-reserved and adds customer reward points
(loyalty-points-added).
7. Notification Service sends a confirmation message (booking-confirmed).
8. Audit Service logs all events.
9. If payment fails, the Refund Service handles refunds (refund-processed).
Advantages of Choreography-Based Saga
⦁ Decentralized Coordination : No single point of failure
⦁ Scalability : Each microservice operates independently
⦁ Fault Tolerance : If one service fails, others continue processing
⦁ Loose Coupling : Microservices only communicate via events
⦁ Resilience : Compensating transactions can reverse failed processes
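The choreography flow above can be condensed into a small runnable sketch. A tiny in-memory EventBus stands in for Kafka topics, and only a few of the listed services appear; the event names follow the flow described above, but the bus and handler code are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Tiny in-memory event bus standing in for Kafka topics (illustration only)
class EventBus {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    void subscribe(String event, Consumer<String> handler) {
        listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
    }

    void publish(String event, String payload) {
        for (Consumer<String> handler : listeners.getOrDefault(event, Collections.emptyList())) {
            handler.accept(payload);
        }
    }
}

public class SagaChoreographyDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        List<String> log = new ArrayList<>();
        boolean paymentOk = true; // flip to false to exercise the compensation path

        // Inventory Service: reacts to booking-created; no central coordinator
        bus.subscribe("booking-created", id -> bus.publish("room-available", id));

        // Payment Service: reacts to room availability
        bus.subscribe("room-available",
                id -> bus.publish(paymentOk ? "payment-processed" : "payment-failed", id));

        // Notification Service: happy path, confirm the booking
        bus.subscribe("payment-processed", id -> log.add("booking-confirmed:" + id));

        // Refund Service: compensating transaction rolls back on failure
        bus.subscribe("payment-failed", id -> log.add("refund-processed:" + id));

        // Booking Service starts the saga; every later step is a reaction to an event
        bus.publish("booking-created", "BKG-1");
        System.out.println(log); // [booking-confirmed:BKG-1]
    }
}
```

Note how no service calls another directly: each only publishes and subscribes, which is exactly the loose coupling the choreography approach advertises.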

FYI: For Your Information


OOO: Out Of Office

KT: Knowledge Transfer
EOD: End Of Day
DND: Do Not Disturb
SME: Subject Matter Expert
POC: Proof Of Concept (or) Point Of Contact (context-specific)
QQ: Quick Question
BRB: Be Right Back
IMO: In My Opinion
IDK: I Don’t Know
OOTB: Out Of The Box
KPI: Key Performance Indicator
FYR: For Your Reference
WIP: Work In Progress
TBD: To Be Determined
TBA: To Be Announced
TL;DR: Too Long; Didn’t Read
ETA: Estimated Time of Arrival
AFK: Away From Keyboard
