My Question Bank
# HTTP Methods
=> Every REST API operation should be mapped to an HTTP method.
GET --> To get resource/data from server
POST --> To insert/create record at server
PUT --> To update data at server
DELETE --> To delete data at server
# HTTP Status Codes
200 OK
201 Created
400 Bad Request
401 Unauthorized
403 Forbidden
404 Not Found
405 Method Not Allowed
408 Request Timeout
413 Payload Too Large
429 Too Many Requests
500 Internal Server Error
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
508 Loop Detected
NOTE:
1. If we create an object of the parent class, then we can access only the
members of the parent class.
2. But if we create an object of the child class, then we can access the members
of both the parent class and the child class.
NOTE:
-> When we declare a class without extending any class, the compiler will
automatically make the class extend the Object class.
-> But if we declare a class that extends some other class, the compiler won't
make it extend Object directly; Object still sits at the top of the hierarchy,
hence we call Object Java's super-most class.
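A quick way to see this is reflection: any class without an explicit superclass reports Object as its superclass, and Object itself has none. (Plain is a made-up class, defined only for this sketch.)

```java
// Plain declares no superclass, so the compiler makes it extend Object.
class Plain { }

public class SuperclassDemo {
    public static String superclassName(Class<?> c) {
        Class<?> s = c.getSuperclass();
        return (s == null) ? "none" : s.getName();
    }

    public static void main(String[] args) {
        System.out.println(superclassName(Plain.class));   // java.lang.Object
        System.out.println(superclassName(Object.class));  // none: Object is the root
    }
}
```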
1. Why Java is Platform-Independent?
Java is platform-independent because it is compiled to a bytecode that can be run
on any device that has a Java Virtual Machine (JVM). So we can write a Java
program on one platform (such as Windows) and then run it on a different
platform (such as macOS or Linux) without making any changes to the code.
2. Why Java is Architecture-Independent?
The JVM is the key component that enables architecture independence in Java. It
acts as an intermediary between the bytecode and the machine's architecture. The
JVM is platform-specific, but it provides a consistent execution environment
regardless of the underlying hardware or operating system.
private : Private variables or methods may be used only by an instance of the same
class that declares the variable or method. A private member may be accessed only
by the class that owns it.
protected : Is available to all classes in the same package and also available to all
subclasses of the class that owns the protected feature. This access is provided
even to subclasses that reside in a different package from the class that owns the
protected feature.
default : What you get by default, i.e., without any access modifier (public, private
or protected). It means the member is visible only within its own package.
⦁ Widening:- Widening means converting a value from a lower (smaller) data type
into a higher (larger) data type. This conversion happens implicitly.
⦁ Narrowing:- Narrowing means converting a higher data type value into a
smaller data type. This requires an explicit cast and may lose data.
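A minimal sketch of both conversions (method and class names are made up for the demo; note how narrowing 130 into a byte overflows):

```java
public class ConversionDemo {
    public static long widen(int value) {
        return value;                 // widening: int -> long happens implicitly
    }

    public static byte narrow(int value) {
        return (byte) value;          // narrowing: explicit cast required; may lose data
    }

    public static void main(String[] args) {
        System.out.println(widen(130));    // 130
        System.out.println(narrow(130));   // -126: 130 does not fit in a signed byte
    }
}
```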
# Type casting w.r.t reference types
⦁ Up casting:- Up casting means storing the child class object into the parent
class reference.
⦁ Down casting:- Down casting means assigning a parent class reference (which
actually points to a child object) back to a child class reference; it requires
an explicit cast.
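Both directions in one sketch (Animal and Dog are hypothetical classes; the downcast is safe only because the reference really holds a Dog):

```java
class Animal {
    String sound() { return "..."; }
}

class Dog extends Animal {
    @Override String sound() { return "woof"; }
    String fetch() { return "ball"; }
}

public class RefCastDemo {
    public static void main(String[] args) {
        Animal a = new Dog();       // upcasting: implicit, always safe
        Dog d = (Dog) a;            // downcasting: explicit cast; would throw
                                    // ClassCastException if 'a' were not a Dog
        System.out.println(a.sound() + " " + d.fetch());  // woof ball
    }
}
```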
# Boxing: converting a primitive value into an object is called boxing. From Java 1.5
onwards this is done automatically by the compiler, hence it is called
autoboxing.
# Unboxing: converting an object value into a primitive type is called unboxing.
From Java 1.5 onwards this is also done automatically by the compiler, hence it is
called auto-unboxing.
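Both conversions in two lines (the class name is made up for the demo):

```java
public class BoxingDemo {
    public static void main(String[] args) {
        Integer boxed = 42;        // autoboxing: compiler converts int -> Integer
        int primitive = boxed;     // auto-unboxing: Integer -> int
        System.out.println(boxed + " " + primitive);  // 42 42
    }
}
```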
9. Blocks in java:-
-> A block means a piece of code grouped together within braces
-> In java program we can write 2 types of blocks
1) instance block
2) static block
1. Instance Block:-
-> If you want to execute some piece of code when object is created then we can
go for instance block
-> Instance block will be executed before constructor execution
syntax:
{
// stmts
}
2. static Block:-
-> If you want to execute some piece of code when class is loaded into JVM then
we can go for static block
-> static block will execute before main ( ) method execution
syntax:
static
{
// stmts
}
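Both blocks can be observed together in a short sketch (BlockDemo and its LOG list are made-up names for this demo); the static block runs at class load, the instance block runs before each constructor body:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockDemo {
    public static final List<String> LOG = new ArrayList<>();

    static { LOG.add("static block"); }     // runs once, when the class is loaded

    { LOG.add("instance block"); }          // runs before every constructor body

    public BlockDemo() { LOG.add("constructor"); }

    public static void main(String[] args) {
        new BlockDemo();
        System.out.println(LOG);  // [static block, instance block, constructor]
    }
}
```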
10. What is static control flow & instance control flow in java program ?
1. Static Control Flow:-
-> When class is loaded into JVM then static control flow will start
-> When we run java program, JVM will check for below static members & JVM will
allocate memory for them in below order
a) static variables
b) static methods
c) static blocks
-> Once memory allocation is completed for static members, execution will start
in the below order
a) static block
b) static methods (if we call them); only the main() method is executed automatically by the JVM
c) static variable
-> static variables can be accessed directly in static blocks and static methods.
Note: If we want to access any instance method or instance variable in a static area
then we should create an object, and only through that object can we access it. We
can't access it directly without an object.
2. Instance Control Flow:-
-> instance means Object
-> Instance control flow will begin when object is created for a class
-> When Object is created then memory will be allocated for
a) instance variables
b) instance methods
c) instance blocks
-> Once memory allocation completed then execution will happen in below order
a) instance block
b) constructor
c) instance methods (if we call)
Note: static members can be accessed directly in instance areas because memory
for static members was already allocated at class-loading time.
# Is-A relationship:- if a class is extending from another class then it is called an
Is-A relationship
-> Here we can access class1's information inside class2 directly, without creating
an object of class1
# Has-A relationship:- if a class contains another class's object, then it is called a
Has-A relationship
-> Here we can access class1's information inside class2 only by using an object of
class1.
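Both relationships in one sketch (Engine, Vehicle and Car are hypothetical classes):

```java
class Engine {
    int horsepower() { return 90; }
}

class Vehicle {
    int wheels() { return 4; }
}

class Car extends Vehicle {                      // Is-A: Car is a Vehicle
    private final Engine engine = new Engine();  // Has-A: Car holds an Engine object

    // parent members are used directly; Engine members only via the engine object
    int summary() { return wheels() + engine.horsepower(); }
}

public class RelationDemo {
    public static void main(String[] args) {
        System.out.println(new Car().summary());  // 94
    }
}
```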
12. SOLID OOPs Principle
=> The main aim of the SOLID OOPs principles is to make our code more readable,
maintainable and loosely coupled.
S-> Single Responsibility
=> A class should have only one responsibility.
=> If we have to write code for two tasks, 1. Generate Excel Report and 2. Generate
PDF Report, then we need to make two separate classes for PDF and Excel report
generation.
=> The Single Responsibility Principle can be implemented with the help of an
abstract design pattern.
O-> Open Close Principle
=> Our code should be open for extension and closed for modification.
=> If we want to add any other load details then we don't need to modify the
existing class logic; instead we override the method in a new class using the
extends keyword and re-write the logic according to the new load details.
=> The Open/Closed Principle can be achieved using the Abstract Factory design
pattern and the Strategy design pattern.
L-> Liskov's Substitution Principle
=> LSP says that subtypes must be substitutable for their base type.
=> An object of a child class should be substitutable as-is into a variable of the
parent class.
=> No change should be required in the codebase to accommodate a .............. child
class, or you can say a child class should not need special treatment.
=> A child class should do exactly what the parent class expects.
Note: Inheritance might not be the best way always for Reusability.
Note: Do inheritance if and only if there is a strict is-A relationship.
D-> Dependency Inversion Principle
1. High-level modules (your business logic or important classes) should not depend
on low-level modules (helper classes or concrete implementations).
⦁ Both should depend on abstractions (e.g., interfaces or abstract classes).
2. Abstractions should not depend on details (implementations).
⦁ Details (concrete implementations) should depend on abstractions.
How to Achieve DIP?
⦁ Use interfaces or abstract classes to define contracts.
⦁ Inject dependencies (implementations) into classes via constructor injection,
setter injection, or dependency injection frameworks.
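A minimal DIP sketch with constructor injection (MessageSender, EmailSender and NotificationService are all hypothetical names invented for this demo):

```java
interface MessageSender {                       // abstraction both sides depend on
    String send(String message);
}

class EmailSender implements MessageSender {    // low-level detail
    public String send(String message) { return "email:" + message; }
}

class NotificationService {                     // high-level module: knows only the interface
    private final MessageSender sender;

    NotificationService(MessageSender sender) { // constructor injection
        this.sender = sender;
    }

    String notifyUser(String message) { return sender.send(message); }
}

public class DipDemo {
    public static void main(String[] args) {
        // Swapping EmailSender for any other MessageSender needs no change here
        NotificationService service = new NotificationService(new EmailSender());
        System.out.println(service.notifyUser("hi"));  // email:hi
    }
}
```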
13. Object:-
-> Any real-world entity is called an Object
-> Objects exist physically
-> Objects will be created based on the Classes
-> Without having the class, we can't create object (class is mandatory to create
objects)
-> Object creation means allocating memory in JVM
-> 'new' keyword is used to create the objects
-> Objects will be created by JVM in the runtime
-> Objects will be created in heap area.
-> If an object is no longer being used then the Garbage Collector will remove that object from the heap
-> Garbage Collector is responsible for memory clean-up activities in JVM heap
area.
-> Garbage Collector will remove un-used objects from heap.
-> Garbage Collector will be managed & controlled by JVM only.
Note: Programmers don't have control over the Garbage Collector
4.Class<?> getClass()
⦁ Returns the runtime class of this Object.
5.int hashCode()
⦁ Returns a hash code value for the object.
6.void notify()
⦁ Wakes up a single thread that is waiting on this object's monitor.
7.void notifyAll()
⦁ Wakes up all threads that are waiting on this object's monitor.
8.String toString()
⦁ Returns a string representation of the object.
9.void wait()
⦁ Causes the current thread to wait until another thread invokes the notify()
method or the notifyAll() method for this object.
Abstract classes are used to define generic types of behaviour at the top of an
object-oriented class hierarchy, with subclasses providing the implementation
details of the abstract class. When we need a constructor, we have to go for an
abstract class instead of an interface.
Under time slicing, a task executes for a predefined slice of time and then re-enters
the pool of ready tasks. The scheduler then determines which task should execute
next, based on priority and other factors.
# String Manipulations
String class provided several methods to perform operations on Strings
#1) Length:- The length is the number of characters that a given string contains.
String class has a length() method that gives the number of characters in a String.
#2) Concatenation:- Java uses the '+' operator for concatenating two or more
strings; concat() is an inbuilt method for String concatenation as well.
#3) String toCharArray():- This method is used to convert all the characters of a
string into a Character Array. This is widely used in the String manipulation
programs.
#4) String charAt():- This method is used to retrieve a single character from a given
String.
#5) Java String compareTo():- This method is used to compare two Strings. The
comparison is based on alphabetical order. In general terms, a String is less than
another if it comes before the other in the dictionary.
#7) Java String split():- As the name suggests, the split() method is used to split or
separate the given String into multiple substrings separated by the given
delimiter (“,”, “ ”, “\\”, etc.).
#8) Java String indexOf():- This method is used to perform a search operation for a
specific character or a substring on the main String. There is one more method
known as lastIndexOf() which is also commonly used.
indexOf() is used to search for the first occurrence of the character.
lastIndexOf() is used to search for the last occurrence of the character
#9) Java String toString():- The toString() method in Java is used to provide a
string representation of an object. This method returns the String equivalent of
the object that invokes it. It does not take any parameters.
#10) String replace():- The replace() method is used to replace characters in a
String with new characters
#11) substring():- The substring() method is used to return a substring of the
main String by specifying the starting index and the (exclusive) ending index.
public class StringManipulation {
    public static void main(String[] args) {
        String str = "Hello, World!";
        // 3. Substring
        String substring = str.substring(4, 13);
        System.out.println("Substring from index 4 to 13: " + substring);
        // Substring from index 4 to 13: o, World!
    }
}
# StringBuffer
-> It is similar to the String class in Java; both are used to create strings, but a
StringBuffer object can be changed (it is mutable).
-> It is also thread-safe: its methods are synchronized, so multiple threads cannot
corrupt it by modifying it simultaneously.
#3) reverse():- This method reverses the characters within a StringBuffer object
#4) replace():- This method replaces the characters from the specified start index
to the end index with the given String
#5) capacity():- This method returns the current capacity of StringBuffer object.
NOTE:-
⦁ When we want a mutable String without thread-safety then StringBuilder
should be used.
⦁ When we want a mutable String with thread-safety then StringBuffer should
be used
⦁ When we want an Immutable object then String should be used.
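The immutability difference is easy to observe (class name invented for the demo): concat() on a String returns a new object and leaves the original untouched, while append() on a StringBuilder mutates in place:

```java
public class MutabilityDemo {
    public static void main(String[] args) {
        String s = "a";
        s.concat("b");                    // returns a NEW String; s itself is unchanged
        StringBuilder sb = new StringBuilder("a");
        sb.append("b");                   // mutates the same object in place
        System.out.println(s + " " + sb); // a ab
    }
}
```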
=> Comparator is used when we want to compare objects by different members.
=> The implementing class overrides compare() and provides the comparison logic.
# == Operator
Usage:
⦁ For primitive types (e.g., int, float, char), == compares the actual values.
⦁ For objects, == compares the memory addresses (references) to determine if
they refer to the same object.
# .equals() Method
Purpose: The .equals() method is used to compare the contents of two objects to
check if they are "equal" in terms of their data, rather than their memory
references.
Usage:
1. For most classes, the .equals() method needs to be overridden to define what
"equal" means for objects of that class. For example:
⦁ In String, .equals() checks if the characters of the strings are the same in
sequence and length.
2. By default, the .equals() method in the Object class behaves the same as ==,
which checks for reference equality (i.e., whether the two references point to
the same object in memory).
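The difference shows up with two distinct String objects holding the same content (class name invented for the demo):

```java
public class EqualityDemo {
    public static void main(String[] args) {
        String a = new String("hi");
        String b = new String("hi");
        System.out.println(a == b);        // false: two distinct objects in memory
        System.out.println(a.equals(b));   // true: same character content
    }
}
```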
1. try block:-
-> It is used to keep risky code
syntax:
try {
// stmts
}
Note: We can't write only try block. try block required catch or finally (it can have
both also)
try with catch : valid combination
try with multiple catch blocks : valid combination
try with finally : valid combination
try with catch & finally : valid combination
only try block : invalid
only catch block : invalid
only finally block : invalid
2. catch :-
-> catch block is used to catch the exception which occurred in try block
-> To write catch block , try block is mandatory
-> One try block can contain multiple catch blocks also
syntax:
try {
// logic
} catch ( Exception e ){
// logic to catch exception info
}
Note: If an exception occurs in the try block then only the catch block will execute;
otherwise the catch block will not execute
Note: Catch blocks order should be child to parent
NOTE:
⦁ When we write multiple catch blocks, if the exceptions do not have any Is-A
relation then we can write the catch blocks in any order; otherwise we must
order the catch blocks with the child class first, followed by the parent class.
⦁ We cannot write 2 catch blocks which catch the same exception
3. finally block :-
-> It is used to perform resource clean up activities
Ex: file close, db connection close etc....
-> finally block will execute always ( irrespective of the exception )
try with finally : valid combination
try with catch and finally : valid combination
catch with finally : invalid combination
only finally : invalid combination
@ControllerAdvice
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<String> handleResourceNotFoundException(ResourceNotFoundException ex) {
        return new ResponseEntity<>(ex.getMessage(), HttpStatus.NOT_FOUND);
    }
}
Extending ResponseEntityExceptionHandler is optional. Use it if you need
advanced or consistent handling for predefined exceptions (like
MethodArgumentNotValidException, HttpRequestMethodNotSupportedException)
provided by Spring. If your focus is only on custom exceptions, you can achieve
that without extending it.
GlobalExceptionHandler is a custom class that can handle exceptions globally and
is more flexible in terms of what kind of responses it can generate.
31. What is the purpose of User Defined Exceptions ?
# Purpose of User-Defined Exceptions
⦁ a. Custom Error Handling: They allow developers to handle specific error
conditions that are not covered by the standard Java exceptions. For
example, an application might need to handle a custom business rule
violation or specific domain-related errors.
# Points to Remember
⦁ A resource is an object in a program that must be closed after the program
has finished.
⦁ Any object that implements java.lang.AutoCloseable or java.io.Closeable can
be passed as a parameter to the try statement.
⦁ All the resources declared in the try-with-resources statement will be closed
automatically when the try block exits. There is no need to close it explicitly.
⦁ We can write more than one resource in the try statement.
⦁ In a try-with-resources statement, any catch or finally block is run after the
resources declared have been closed.
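These points can be observed with a tiny AutoCloseable (Resource and its EVENTS list are made-up names); note that the two resources are closed automatically, in reverse declaration order:

```java
import java.util.ArrayList;
import java.util.List;

class Resource implements AutoCloseable {
    static final List<String> EVENTS = new ArrayList<>();
    private final String name;

    Resource(String name) { this.name = name; EVENTS.add("open " + name); }

    @Override public void close() { EVENTS.add("close " + name); }
}

public class TwrDemo {
    public static void main(String[] args) {
        try (Resource r1 = new Resource("r1"); Resource r2 = new Resource("r2")) {
            Resource.EVENTS.add("body");
        }   // both closed here automatically, in REVERSE declaration order
        System.out.println(Resource.EVENTS);
        // [open r1, open r2, body, close r2, close r1]
    }
}
```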
33. If I write return at the end of the try block, will the finally block still
execute?
Yes. Even if you write return as the last statement in the try block and no exception
occurs, the finally block will still execute: the finally block runs first, and then
control returns.
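A small sketch to verify the order (FinallyDemo and its LOG are invented names): the method returns 1, but the finally block runs before control leaves the method:

```java
public class FinallyDemo {
    static final StringBuilder LOG = new StringBuilder();

    static int compute() {
        try {
            LOG.append("try;");
            return 1;                 // finally still runs before the value is returned
        } finally {
            LOG.append("finally;");
        }
    }

    public static void main(String[] args) {
        System.out.println(compute() + " " + LOG);
    }
}
```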
34. If I write System.exit(0); at the end of the try block, will the finally
block still execute?
No. In this case the finally block will not execute, because System.exit(0);
terminates the JVM immediately, and thus finally never executes.
try {
// Code that might throw exceptions
} catch (ExceptionType1 | ExceptionType2 | ExceptionType3 e) {
// Handle the exceptions
}
class Demo {
//variables
// methods
}
36.2. Abstraction
-> Abstraction means hiding un-necessary details and providing only the required
functionality
-> We can achieve Abstraction using interfaces & abstract classes
Ex : we do not bother about how a laptop works internally
We do not bother about how a car engine starts internally
36.3. Polymorphism
-> If an object exhibits multiple behaviours based on the situation then it is
called Polymorphism.
Ex 1 : in the below scenario the + symbol has 2 different behaviours
10 + 20 ===> 30 (Here + is adding)
"hi" + "hello" ==> hihello (here + is concatenating)
-> Polymorphism is divided into 2 types
1) Static Polymorphism / Compile-time Polymorphism
Ex: Overloading
Static: The binding of the method call to the method is fixed at the time the
program is compiled. The compiler knows exactly which method to invoke based
on the parameters.
Compile-time: The method resolution occurs when the code is compiled, not when
the program is executed, hence making it a compile-time decision.
2) Dynamic Polymorphism / Run-time Polymorphism
Ex: Overriding
Run-time: The decision of which overridden method to call is made at runtime, not
at compile time.
i. Method Overloading:- The process of writing more than one method with same
name and different parameters is called as Method Overloading.
=> When methods perform the same operation then we should give them the same
name; this improves code readability.
Ex:
substring (int start), substring(int start, int end)
void wait(), void wait(long timeout), void wait(long timeout, int nanos)
=> In Method Overloading scenario, compiler will decide which method should be
called.
ii. Method Overriding:- The process of re-defining a parent class method in the
child class with the same signature is called Method Overriding. In an overriding
scenario, the JVM decides at runtime which method should be called.
Example 1 : The Object class equals ( ) method compares the addresses of the
objects, whereas the String class equals ( ) method compares the content of the
objects. Here the String class is overriding the equals ( ) method.
36.4. Inheritance
-> Extending the properties from one class to another class is called as Inheritance
-> The main aim of inheritance is code re-usability
Ex: child will inherit the properties from parent
Note: Whenever we create a child class object, first the parent class zero-param
constructor executes and then the child class constructor executes. The child
should be able to access the parent's properties, hence the parent constructor
runs first to initialize the parent class properties.
Note: In Java, one child class can't inherit properties from two parent classes at a time
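The constructor order from the note above can be observed directly (ParentDemo, ChildDemo and the ORDER list are invented names):

```java
import java.util.ArrayList;
import java.util.List;

class ParentDemo {
    static final List<String> ORDER = new ArrayList<>();
    ParentDemo() { ORDER.add("parent"); }      // runs first, via the implicit super() call
}

class ChildDemo extends ParentDemo {
    ChildDemo() { ORDER.add("child"); }        // runs after the parent constructor
}

public class CtorOrderDemo {
    public static void main(String[] args) {
        new ChildDemo();
        System.out.println(ParentDemo.ORDER);  // [parent, child]
    }
}
```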
3.1 ) Predicate & BiPredicate
3.2 ) Consumer & BiConsumer
3.3 ) Supplier
3.4 ) Function & BiFunction
4) forEach ( ) method
5) Optional class (to avoid null pointer exceptions)
6) Date & Time API
7) ****** Stream API ********
8) Method References & Constructor References
9) Spliterator
10) StringJoiner
40.1) Interface changes
1. Interfaces can have concrete methods from 1.8v
2. An interface concrete method should be default or static
3. Interface default methods can be overridden in impl classes
4. Interface static methods can't be overridden in impl classes
5. We can write multiple default & static methods in an interface
6. Default & static methods were introduced to provide backward compatibility
Predicate ------> takes inputs ----> returns true or false ===> test ( )
Supplier -----> will not take any input---> returns output ===> get ( )
Consumer ----> will take input ----> will not return anything ===> accept ( )
Function -----> will take input ---> will return output ===> apply ( )
1. Predicate
-> It is a predefined functional interface
-> It is used to check a condition and returns true or false
-> The Predicate interface has only one abstract method, test(T t)
2. Supplier Functional Interface
-> Supplier is a predefined functional interface introduced in java 1.8v
-> It contains only one abstract method that is get ( ) method
-> The Supplier interface does not take any input; it only returns a value.
3. Consumer Functional Interface
-> Consumer is predefined functional interface
-> It contains one abstract method i.e accept (T t)
-> Consumer will accept input but it won't return anything
Note: in java 8 forEach ( ) method got introduced. forEach(Consumer consumer)
method will take Consumer as parameter.
4. Function Functional Interface
-> Function is predefined functional interface
-> The Function interface has one abstract method, i.e. apply(T t)
interface Function<T, R> {
    R apply(T t);
}
-> It takes an input and returns an output
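All four interfaces from java.util.function side by side (constant names like IS_EVEN are invented for the demo):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalDemo {
    public static final Predicate<Integer> IS_EVEN = n -> n % 2 == 0;       // test(T) -> boolean
    public static final Supplier<String> GREETING = () -> "hello";          // get() -> T, no input
    public static final Consumer<StringBuilder> SHOUT = sb -> sb.append("!"); // accept(T), no return
    public static final Function<String, Integer> LENGTH = String::length;  // apply(T) -> R

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hi");
        SHOUT.accept(sb);
        System.out.println(IS_EVEN.test(4) + " " + GREETING.get()
                + " " + sb + " " + LENGTH.apply("abc"));    // true hello hi! 3
    }
}
```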
-> Stream API provided several methods to perform Operations on the data.
2. Mapping Operations
-> map() is used to apply a function to each element in a stream, transforming it
into another object. The result is still a stream, but with the transformed objects.
Ex : Stream<R> map(Function<T, R> function)
2) limit ( long maxSize ) => Get elements from the stream based on given size
Eg :- names.stream().limit(3).forEach(c -> System.out.println(c));
3) skip (long n) => It is used to skip given number of elements from starting
position of the stream
Eg :- names.stream().skip(3).forEach(c -> System.out.println(c));
Two arguments: Groups elements and applies further processing (e.g., counting or
summing).
Eg:- list.stream().collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
By understanding these operations, you can effectively use the Stream API in Java
to process collections and other data sources in a functional and efficient manner.
Difference Summary:
⦁ Execution Order: In sequential streams, elements are processed sequentially
(one after the other), whereas in parallel streams, elements are processed
concurrently (using multiple threads).
⦁ Performance: Parallel streams are faster for processing large datasets
because they leverage multi-core processors, but they can create overhead
for small datasets.
⦁ Threading: Sequential streams run on a single thread, whereas parallel
streams use multiple threads.
⦁ Use Case: Sequential streams are suitable for simpler and predictable order
processing. Parallel streams are suitable for large datasets and performance-
intensive tasks.
Cautions:
⦁ When using parallel streams, it is crucial to consider thread safety and side
effects, as multiple threads can concurrently access and modify data.
⦁ Parallel streams are not always efficient; testing and profiling are essential to
determine whether parallelism actually improves performance in your
specific use case.
List l = new List ( ); // invalid - List is an interface, so it cannot be instantiated
List l = new ArrayList ( ) ; // valid
List l = new LinkedList ( ) ; // valid
1. ArrayList :-
-> Implementation class of List interface
-> Duplicate objects are allowed
-> Null values are accepted
-> Insertion order preserved
-> Internal data structure of ArrayList is growable array
-> Default Capacity is 10
-> Homogeneous & heterogeneous data supported
-> Not Synchronized
ArrayList Constructors
1) ArrayList al = new ArrayList ( ) ;
2) ArrayList al = new ArrayList (int capacity);
3) ArrayList al = new ArrayList (Collection c);
Methods of ArrayList
1) add (Object obj ) ---> Add object at end of the collection
2) add(int index, Object) --> Add object at given index
3) addAll (Collection c) ---> To add collection of objects at end of the collection
4) remove(Object obj) ---> To remove given object
5) remove(int index) ----> To remove object based on given index
6) get(int index) --> To get object based on index
7) contains(Object obj) ---> To check presence of the object
8) clear( ) ---> To remove all objects from collection
9) isEmpty ( ) ---> To check whether collection is empty or not
10) retainAll(Collection c) --> Keep only common elements and remove the remaining
objects
11) indexOf(Object obj) --> To get first occurrence of given obj
12) lastIndexOf(Object obj) ---> To get last occurrence of given object
13) set(int index, Object obj) ---> Replace the object based on given index
14) iterator ( ) --> Forward direction
15) listIterator ( ) --> Forward & back
1) The ArrayList class is not recommended for insertions because it has to perform
a lot of shifting
2) The ArrayList class is recommended for retrieval operations because it retrieves
based on index directly
Insertion Operation -> Best case (insert at end) & Worst case (insert at i=0)
Deletion Operation -> Best case (delete last element) & Worst case (delete at i=0)
Searching Operation -> Best case (found at i=0) & Worst case (found at end)
2. LinkedList
-> Implementation of List interface
-> Internal data structure is double linked list
-> insertion order preserved
-> duplicate objects are allowed
-> null objects also allowed
-> Homogeneous & heterogeneous data we can store
-> Not Synchronized
3. Vector
-> It is same as ArrayList except it is synchronized.
-> Implementation class of List interface
-> Internal data structure is growable array
-> duplicates are allowed
-> Null Allowed
-> insertion order preserved
-> This is synchronized
-> Vector is called as legacy class ( jdk v 1.0)
-> To traverse vector we can use Enumeration as a cursor
-> Enumeration is called as Legacy Cursor (jdk 1.0v)
4. Stack
-> Implementation class of List interface
-> Extending from Vector class
-> Data Structure of Stack is LIFO (last in first out)
⦁ push ( ) ---> to insert object
⦁ peek ( ) ---> to get last element
⦁ pop ( ) ---> to remove last element
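The three Stack operations in action (class name invented for the demo):

```java
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();
        stack.push(1);                      // insert
        stack.push(2);
        System.out.println(stack.peek());   // 2: look at the top without removing
        System.out.println(stack.pop());    // 2: remove the top (last in, first out)
        System.out.println(stack.peek());   // 1
    }
}
```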
Note:-
1) ArrayList ---------> Growable Array
2) LinkedList ----------> Double Linked List
3) Vector -------------> Growable Array & Thread Safe
4) Stack -----------> L I F O
1) Iterator ----> forward direction ( List & Set )
2) ListIterator ---> forward & backward direction ( List impl classes )
3) Enumeration ----> forward direction & supports for legacy collection classes
B. Set :-
-> Set is a interface available in java.util package
-> Set interface extending from Collection interface
-> Set is used to store group of objects
-> Duplicate objects are not allowed
-> Null is allowed
-> Supports homogeneous & heterogeneous data
-> Insertion order will not be maintained
Set interface Implementation classes
1) HashSet
2) LinkedHashSet
3) TreeSet
1. HashSet
-> Implementation class of Set interface
-> Duplicate Objects are not allowed
-> Null is allowed
-> Insertion order will not be maintained
-> Initial Capacity is 16
-> Load Factor 0.75
-> Internal Datastructure is Hashtable
-> Not synchronized
Constructors
HashSet hs = new HashSet( );
HashSet hs = new HashSet(int capacity);
HashSet hs = new HashSet(int capacity, float loadFactor);
2. LinkedHashSet
-> Implementation class for Set interface
-> Duplicates are not allowed
-> Null is allowed
-> Insertion order will be preserved
-> Internal Data Structure is Hash table + Double linked list
-> Initial capacity 16
-> Load Factor 0.75
-> Not synchronized
Note: HashSet will not maintain insertion order whereas LinkedHashSet will
maintain insertion order.
HashSet follows the Hashtable data structure whereas LinkedHashSet follows the
Hashtable + Double Linked List data structure.
3. TreeSet
-> Implementation class for Set interface
-> It will maintain Natural Sorting Order
-> Does not follow the insertion order
-> Duplicates are not allowed
-> Null values are not allowed
-> Not synchronized
Note: When we add a null value, TreeSet will try to compare the null value with a
previous object and we will get a NullPointerException.
-> It supports only homogeneous data
Note : TreeSet performs sorting, so it always compares a newly added object with
the old objects. In order to compare, the objects should be of the same type,
otherwise we will get a ClassCastException.
-> Internal data structure is binary tree.
C. Map
-> Map is an interface available in java.util package
-> Map is used to store the data in key-value format
-> One Key-Value pair is called as one Entry
-> One Map object can have multiple entries
-> In Map, keys should be unique and values can be duplicate
-> If we try to store a duplicate key in a map then it will replace the old value with
the new value for that key
-> We can take Key & Value as any type of data
-> Insertion order is not maintained
-> Map interface having several implementation classes
1) HashMap
2) LinkedHashMap
3) TreeMap
4) Hashtable
5) IdentityHashMap
6) WeakHashMap
Map methods
1) put (k,v) ---> To store one entry in map object
2) get(k) ---> To get value based on given key
3) remove(k) ---> To remove one entry based on given key
4) containsKey(k) ---> To check presence of given key
5) keySet ( ) ---> To get all keys of map
6) values ( ) ----> To get all values of the map
7) entrySet ( ) --> To get all entries of map
8) clear ( ) --> To remove all the entries of map
9) isEmpty ( ) --> To check whether map obj is empty or not
10) size ( ) --> To get size of the map (how many entries available)
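A short sketch of the key/value rules (class name invented for the demo): putting a duplicate key replaces the value, while duplicate values are fine:

```java
import java.util.HashMap;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("a", 2);                 // duplicate KEY: the old value is replaced
        map.put("b", 2);                 // duplicate VALUE: perfectly fine
        System.out.println(map.size() + " " + map.get("a"));  // 2 2
    }
}
```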
1. HashMap
-> It is impl class for Map interface
-> Used to store data in key-value format
-> Default capacity is 16
-> Load factor 0.75
-> Underlying datastructure is hashtable
-> Insertion Order will not be maintained by HashMap
-> Not synchronized
2. LinkedHashMap
-> Implementation class for Map interface
-> Maintains insertion order
-> Data structure is hashtable + double linkedlist
3. TreeMap
-> Implementation class for Map interface
-> It maintains natural sorted order for keys
-> Internal Data structure for Tree map is binary tree
Hashtable
-> It is implementation class for Map interface
-> Default capacity is 11
-> Load factor 0.75
-> key-value format to store the data
-> Hashtable is legacy class (jdk 1.0 v)
-> Hashtable is synchronized
-> Does not allow duplicate keys, but values can be duplicated
-> Does not allow a null key or null values
-> If thread safety is not required then use HashMap instead of Hashtable
-> If thread safety is important then go for ConcurrentHashMap instead of
Hashtable.
D. Queue
-> It is extending properties from Collection interface
-> It is used to store group of objects
-> Internal Data structure is FIFO (First in First out)
-> It is ordered list of objects
-> insertion will happen at end of the collection
-> Removal will happen at beginning of the collection
47. What is the contract between hashCode ( ) & equals ( ) methods
1. If two objects are equal, their hashCode() must be the same:
⦁ If obj1.equals(obj2) returns true, then obj1.hashCode() must be equal to
obj2.hashCode().
⦁ This ensures that when two objects are equal, they are stored in the same
bucket in hash-based collections.
2. If two objects have the same hashCode(), they may or may not be equal:
⦁ Just because two objects have the same hash code does not mean they are
equal. This is known as a hash collision. The equals() method must be used to
determine actual equality
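A minimal class honouring the contract (Point is a hypothetical class; equals() and hashCode() are derived from the same fields, so equal points land in the same bucket):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() { return Objects.hash(x, y); }  // equal objects -> equal hash
}

public class ContractDemo {
    public static void main(String[] args) {
        Set<Point> set = new HashSet<>();
        set.add(new Point(1, 2));
        // Works only because hashCode() agrees with equals()
        System.out.println(set.contains(new Point(1, 2)));  // true
    }
}
```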
48. How HashMap works internally ?
The HashMap is a hash table based implementation. It internally maintains an
array, also called the “bucket array”.
The size of the bucket array is determined by the initial capacity of the HashMap;
the default is 16 (indices 0-15).
Each index position in the array is a bucket that can hold multiple Node objects
using a LinkedList.
But when the entries in a single bucket reach a threshold (TREEIFY_THRESHOLD,
default value 8), the Map converts the bucket's internal structure from the
linked list to a Red-Black tree (JEP 180). All Entry instances are converted to
TreeNode instances, so the pessimistic O(n) performance is improved to O(log n).
When the nodes in a bucket drop below UNTREEIFY_THRESHOLD, the tree converts
back to a linked list. This helps balance performance with memory usage,
because TreeNode instances take more memory than Map.Entry instances.
So the Map uses a tree only when there is a considerable performance gain in
exchange for the extra memory.
When we insert a key-value pair into a HashMap, the key's hashCode() method is
called to generate a hash value.
To improve hash code distribution and reduce collisions, HashMap applies a hash
spreading function:
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
⦁ key.hashCode() : retrieves the hash code of the key.
⦁ h >>> 16 : shifts the higher 16 bits of the hash code to the lower 16 bits.
⦁ XOR (^) : combines the original hash code and the shifted value to mix the
bits.
This ensures a more uniform distribution of hash values, reducing collisions .
After obtaining the transformed hash value, HashMap calculates the index where
the entry will be stored.
index = hash & (n - 1);
⦁ hash : The transformed hash value.
⦁ n : The size of the array (must be a power of 2, like 16, 32, etc.).
⦁ & (bitwise AND) : Ensures the index is always within the array bounds (0 to
n-1).
If two keys map to the same index, HashMap compares the keys using the
equals() method:
⦁ If equals() returns true, the key already exists in the map, and its value is
updated (replaced) with the new one.
⦁ If equals() returns false, the keys are different, and the new node is linked
to the existing node, so both keys are stored at the same index as part of a
linked list (or a tree when there are many collisions, starting from Java 8).
52. What is the difference between Collection, Collections & Collections
Framework ?
Collection :- Collection is a container to store group of objects. We have an
interface with a name Collection (java.util). It is root interface in Collections
framework.
Collections :- Collections is a class available in java.util package
(Providing ready made methods to perform operations on objects)
Collections framework :- Collection interface & Collections class are part of
Collections framework. Along with these 2 classes there are several other classes
and interfaces in Collections framework.
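The distinction can be shown in a few lines: a Collection implementation holds the objects, while the Collections utility class operates on them (a minimal sketch; the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CollectionVsCollections {
    // List/ArrayList come from the Collection hierarchy;
    // Collections supplies ready-made static utility methods
    static List<Integer> sortedOf(int... values) {
        List<Integer> list = new ArrayList<>(); // a Collection implementation
        for (int v : values) {
            list.add(v);
        }
        Collections.sort(list); // static utility method from the Collections class
        return list;
    }

    public static void main(String[] args) {
        System.out.println(sortedOf(3, 1, 2)); // [1, 2, 3]
    }
}
```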
d. Use-Case:
Fail-fast collections are generally used in situations where data
consistency and correctness are essential. If a modification is detected,
an exception is thrown and the traversal is stopped.
2. Fail-Safe Collections:
a. Definition:
Fail-safe collections are those that do not throw an exception on concurrent
modification and allow safe traversal. These collections create a snapshot
copy of the original collection and traverse that copy.
b. Examples:
CopyOnWriteArrayList
ConcurrentHashMap
ConcurrentSkipListSet
c. Working:
Fail-safe collections internally create a copy of the original collection when
the iterator is created. Therefore, even if the original collection is being
modified, the iterator traverses its own snapshot and no exception is
thrown.
CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
list.add("A");
list.add("B");
Iterator<String> iterator = list.iterator(); // snapshot of ["A", "B"]
while (iterator.hasNext()) {
    list.add("C"); // modification during traversal: no exception
    System.out.println(iterator.next());
}
d. Use-Case:
Fail-safe collections are generally used in concurrent programming, where
multiple threads work on the same collection. These collections are also
thread-safe and ensure that traversal and modification happen safely.
55. What is the difference between HashMap and WeakHashMap ?
=> HashMap keys are held by strong references, which keep the key objects
reachable; hence those keys are not eligible for garbage collection.
=> WeakHashMap keys are held by weak references; once no strong references to a
key remain, it becomes eligible for garbage collection and its entry is removed.
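The difference can be sketched as follows (a minimal sketch; whether the entry actually disappears after System.gc() is up to the JVM, so only the strongly-referenced case is guaranteed):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    // While a strong reference to the key exists, the entry stays in the map
    static boolean holdsWhileStronglyReferenced() {
        Map<Object, String> weak = new WeakHashMap<>();
        Object key = new Object(); // strong reference to the key
        weak.put(key, "value");
        return weak.containsKey(key);
    }

    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> weak = new WeakHashMap<>();
        Object key = new Object();
        weak.put(key, "value");

        key = null;       // drop the only strong reference
        System.gc();      // request GC (not guaranteed to run immediately)
        Thread.sleep(100);
        // The entry MAY now be gone; a HashMap would still hold it
        System.out.println("Size after GC: " + weak.size());
    }
}
```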
# Creation of Iterator:
Iterator it = c.iterator();
here iterator() method internally creates and returns an object of a class which
implements Iterator interface.
# Methods
1. boolean hasNext()
2. Object next()
3. void remove()
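The three methods above can be shown together, including the remove() method for safe removal mid-iteration (a minimal sketch; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    // Keeps only elements with length >= 2, removing the rest via the cursor
    static String keepLong(String... items) {
        List<String> list = new ArrayList<>(Arrays.asList(items));
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {            // boolean hasNext()
            String value = it.next();     // Object next()
            if (value.length() < 2) {
                it.remove();              // void remove(): safe removal mid-iteration
            }
        }
        return String.join(",", list);
    }

    public static void main(String[] args) {
        System.out.println(keepLong("a", "bb", "c", "dd")); // bb,dd
    }
}
```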
57.2. ListIterator
-> This cursor is used to access the elements of Collection in both forward and
backward directions
-> This cursor can be applied only for List category Collections
-> While traversing we can also add, set and remove elements
-> ListIterator is interface and we can not create object directly.
-> If we want to create an object for ListIterator we have to use listIterator()
method
# Creation of ListIterator:
ListIterator<E> it = l.listIterator();
Here listIterator() method internally creates and returns an object of a class which
implements ListIterator interface.
# Methods
1. boolean hasNext();
2. Object next();
3. boolean hasPrevious();
4. Object previous();
5. int nextIndex();
6. int previousIndex();
7. void remove();
8. void set(Object obj);
9. void add(Object obj);
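Both directions, plus in-place modification with set(), can be sketched in one method (a minimal sketch; the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    // Walks forward upper-casing each element, then walks backward
    // to collect the elements in reverse order
    static String reverse(String... items) {
        List<String> list = new ArrayList<>(Arrays.asList(items));
        ListIterator<String> it = list.listIterator();
        while (it.hasNext()) {
            it.set(it.next().toUpperCase()); // set() modifies during traversal
        }
        StringBuilder out = new StringBuilder();
        while (it.hasPrevious()) {           // backward direction
            out.append(it.previous());
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(reverse("a", "b", "c")); // CBA
    }
}
```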
57.3. Enumeration
-> this cursor is used to access the elements of Collection only in forward direction
52
-> this is legacy cursor can be applied only for legacy classes like
Vector,Stack,Hashtable.
-> Enumeration is also an interface and we can not create object directly.
-> If we want to create an object for Enumeration we have to use a legacy method
called elements() method
# Creation of Enumeration:
Enumeration e = v.elements();
Here elements() method internally creates and returns an object of a class which
implements Enumeration interface.
# Methods
1. boolean hasMoreElements()
2. Object nextElement();
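The legacy cursor can be sketched with a Vector (a minimal sketch; the class name is illustrative):

```java
import java.util.Enumeration;
import java.util.Vector;

public class EnumerationDemo {
    // Forward-only traversal using the legacy elements() method
    static String traverse(String... items) {
        Vector<String> vector = new Vector<>();
        for (String item : items) {
            vector.add(item);
        }
        Enumeration<String> e = vector.elements(); // legacy cursor
        StringBuilder out = new StringBuilder();
        while (e.hasMoreElements()) {   // boolean hasMoreElements()
            out.append(e.nextElement()); // Object nextElement()
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(traverse("A", "B")); // AB
    }
}
```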
⦁ Insertion/removal at middle: O(n)
⦁ Use Case: When thread safety is required and legacy code is being
maintained.
⦁ Pros: Synchronized.
⦁ Cons: Slower than ArrayList due to synchronization overhead.
2. Set Interface
⦁ Implementations: HashSet, LinkedHashSet, TreeSet
a. HashSet
⦁ Performance:
⦁ Basic operations (add, remove, contains): O(1)
⦁ Use Case: When high-performance set operations are needed, without
requiring order.
⦁ Pros: Fast operations.
⦁ Cons: No ordering.
b. LinkedHashSet
⦁ Performance:
⦁ Basic operations: O(1)
⦁ Use Case: When iteration order needs to be predictable.
⦁ Pros: Maintains insertion order.
⦁ Cons: Slightly slower than HashSet due to maintaining a linked list.
c. TreeSet
⦁ Performance:
⦁ Basic operations: O(log n)
⦁ Use Case: When a sorted set is required.
⦁ Pros: Sorted order.
⦁ Cons: Slower than HashSet/LinkedHashSet.
3. Queue Interface
⦁ Implementations: LinkedList, PriorityQueue, ArrayDeque
a. LinkedList (as Queue)
⦁ Performance:
⦁ Offer, poll: O(1)
⦁ Use Case: When you need a simple FIFO queue.
⦁ Pros: Simple implementation.
⦁ Cons: Higher memory usage due to node storage.
b. PriorityQueue
⦁ Performance:
⦁ Offer, poll: O(log n)
⦁ Use Case: When you need elements sorted by priority.
⦁ Pros: Efficient priority handling.
⦁ Cons: No fixed size.
c. ArrayDeque
⦁ Performance:
⦁ Offer, poll: O(1) (amortized)
⦁ Use Case: When you need a double-ended queue with efficient
operations.
⦁ Pros: Efficient for both ends.
⦁ Cons: No random access.
4. Map Interface
⦁ Implementations: HashMap, LinkedHashMap, TreeMap
a. HashMap
⦁ Performance:
⦁ Basic operations (get, put): O(1)
⦁ Use Case: When you need fast access by key.
⦁ Pros: Fast operations.
⦁ Cons: No ordering.
b. LinkedHashMap
⦁ Performance:
⦁ Basic operations: O(1)
⦁ Use Case: When you need access order or insertion order iteration.
⦁ Pros: Maintains order.
⦁ Cons: Slightly slower than HashMap.
c. TreeMap
⦁ Performance:
⦁ Basic operations: O(log n)
⦁ Use Case: When you need a sorted map.
⦁ Pros: Sorted order.
⦁ Cons: Slower than HashMap.
underlying array.
⦁ CyclicBarrier: A synchronization aid that allows a set of threads to all wait for
each other to reach a common barrier point.
⦁ Phaser: A flexible barrier that is useful for implementing multi-phase
computations.
⦁ Future: Represents the result of an asynchronous computation.
memory for an object or array but is unable to do so because the available
memory has been exhausted.
Causes of OutOfMemoryError:
Memory Leaks:
⦁ If objects are no longer needed but still referenced, they won’t be garbage
collected, causing a gradual increase in memory usage.
Large Object Creation:
⦁ Trying to load large data sets or create excessively large objects/arrays can
cause an OutOfMemoryError if the heap space is too small.
Improper JVM Settings:
⦁ If the heap size or other memory settings are configured incorrectly, it might
lead to memory exhaustion.
Too Many Threads:
⦁ Each thread consumes memory (in stack space and other overhead), and if
there are too many threads, the memory may run out.
How to Resolve OutOfMemoryError:
⦁ Increase Heap Size: raise the maximum heap with the -Xmx JVM option (e.g. -Xmx1g).
⦁ Fix Memory Leaks: remove lingering references such as ever-growing static collections, unclosed resources, or unremoved listeners.
⦁ Optimize Object Creation: avoid loading very large data sets into memory at once; process data in chunks.
⦁ Use Weak References: let caches release entries when memory is needed (e.g. WeakHashMap).
⦁ Optimize Threads: limit thread counts by using thread pools instead of creating unbounded threads.
}
    @Override
    protected Object clone() throws CloneNotSupportedException {
        // Address holds only immutable fields, so super.clone() is sufficient
        return super.clone();
    }
}
class Person implements Cloneable {
    String name;
    Address address;

    Person(String name, Address address) {
        this.name = name;
        this.address = address;
    }

    // Deep copy method
    @Override
    protected Object clone() throws CloneNotSupportedException {
        Person cloned = (Person) super.clone();
        cloned.address = (Address) address.clone(); // deep copy of the Address object
        return cloned;
    }
}

public class DeepCloningExample {
    public static void main(String[] args) throws CloneNotSupportedException {
        Address address = new Address("New York");
        Person person1 = new Person("John", address);

        // Deep cloning
        Person person2 = (Person) person1.clone();

        // Both person1 and person2 have different Address objects
        System.out.println(person1.address.city); // Output: New York
        System.out.println(person2.address.city); // Output: New York

        // Change city in person2's address
        person2.address.city = "Los Angeles";

        // person1 and person2 are independent of each other
        System.out.println(person1.address.city); // Output: New York
        System.out.println(person2.address.city); // Output: Los Angeles
    }
}
63.1. Serialization :-
⦁ Serialization is the process of converting an object’s state to a byte stream.
This byte stream can then be saved to a file, sent over a network, or stored in
a database. The byte stream represents the object’s state, which can later be
reconstructed to create a new copy of the object.
⦁ Serialization allows us to save the data associated with an object and
recreate the object in a new location.
⦁ The ObjectOutputStream class contains writeObject() method for serializing
an Object.
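The round trip through writeObject()/readObject() can be sketched entirely in memory (a minimal sketch; the User class is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // The class whose state is serialized must implement Serializable
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    // Serializes a User to a byte stream, then deserializes it back
    static String roundTrip(String name) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(new User(name)); // serialization
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                User copy = (User) in.readObject(); // deserialization
                return copy.name;
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```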
# Serialization Formats :-
⦁ Many different formats can be used for serialization, such as JSON, XML, and
binary. JSON and XML are popular formats for serialization because they are
human-readable and can be easily parsed by other systems. Binary formats
are often used for performance reasons, as they’re typically faster to read
and write than text-based formats.
63.2. Deserialization :-
Deserialization is the reverse process of serialization. It involves taking a byte
stream and converting it back into an object. This is done using the appropriate
tools to parse the byte stream and create a new object.
deserialize a binary format, and the Jackson library can be used to parse a JSON
format.
In the example above, before serialization the Account object can provide both
the username and password, but after deserialization the Account object provides
only the username and not the password. This is because the password variable is
declared transient.
Note: While performing object serialization, we have to define the above two
methods in that class.
66. What is Serialization and Externalization ?
1. The Serializable interface is used to implement serialization. The
Externalizable interface is used to implement externalization.
2. Serializable is a marker interface, i.e. it does not contain any methods. The
Externalizable interface is not a marker interface; it defines two
methods, writeExternal() and readExternal().
4. Using the Serializable interface we save the total object to a file; it is not
possible to save only part of the object. Using Externalizable, based on our
requirements we can save either the total object or part of the object.
converts the object into a series of bytes. Later, when you want to read
(deserialize) that object back, Java needs to ensure that the class definition hasn't
changed. The serialVersionUID helps with this check.
If the serialVersionUID matches during deserialization, Java knows that the class
is compatible, and the object can be safely deserialized.
If the serialVersionUID doesn't match, Java throws an InvalidClassException,
indicating that the class has changed in a way that makes the object incompatible
with its previous version.
Declaration: You can declare your own serialVersionUID in a class like this:
private static final long serialVersionUID = 12345L;
This guarantees that a read operation always sees the most recent write operation
by any thread.
71. What are Generics in java ?
⦁ Using Generics, we can write our classes / variable / methods which are
independent of data type
⦁ Generics are used to achieve type safety
⦁ Note: Before Generics was introduced, generalized classes, interfaces or
methods were created using references of type Object because Object is the
super class of all classes in Java, but this way of programming did not ensure
type safety
⦁ Note: This is also known as Diamond Notation of creating an object of
Generic type.
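Type safety and the diamond notation can be shown with a small generic class (a minimal sketch; the Box class is illustrative):

```java
public class Box<T> {
    private T content;

    public void set(T content) { this.content = content; }
    public T get() { return content; }

    public static void main(String[] args) {
        // Diamond notation: the type argument is inferred on the right side
        Box<String> stringBox = new Box<>();
        stringBox.set("hello");
        String value = stringBox.get(); // no cast needed: compile-time type safety
        System.out.println(value);
        // stringBox.set(42); // would not compile: wrong type is caught early
    }
}
```

Before generics, the same Box would hold Object references, so every get() needed a cast and wrong-type errors surfaced only at runtime as ClassCastException.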
-> In other languages, such as C or C++, the programmer is solely responsible for
creating and deleting objects. This may result in memory leaks if the
programmer forgets to free objects that are no longer needed.
-> In Java, programmers do not have to do this. The JVM automatically
destroys objects which have lost all their references.
The pointer to the value “Ashok” is now nullified. I cannot access the value
anymore as there was only one reference pointing to it. This is unreachability of
objects in memory.
This object of human named “Ashok” is now eligible for garbage collection.
An object is eligible for garbage collection if there are no references to it in the
heap memory.
There are a few ways to make an object eligible for garbage collection. They are:
⦁ You can nullify the reference variable.
⦁ You can assign the same pointer to a different object.
⦁ All objects created inside a method lose their reference outside the method
and are thus eligible for garbage collection.
⦁ Using Island of Isolation.
This method generally contains actions which the JVM performs just before the
object gets deleted.
The Object class contains the finalize() method; it can be overridden in the class
whose objects will be garbage collected if cleanup logic is needed (note that
finalize() is deprecated since Java 9).
⦁ When GC visits an object, it marks it as accessible and thus alive. Every object
the garbage collector visits is marked as alive. All the objects which are not
reachable from GC Roots are garbage and considered as candidates for
garbage collection.
⦁ Memory can be compacted after the garbage collector deletes the dead
objects, so that the remaining objects are in a contiguous block at the start of
the heap.
But if we call start() method thread will be registered with thread scheduler and it
calls run() method.
class MyThread implements Runnable {
    public void run() {
        Thread t = Thread.currentThread();
        for (int i = 1; i <= 5; i++) {
            System.out.println(t.getName() + " Thread Value:" + i);
        }
    }

    public static void main(String args[]) {
        MyThread mt = new MyThread();
        Thread t = new Thread(mt);
        t.start();
        // t.run();
    }
}
Note: When we call the start() method, a new thread is created and the output is
printed like below:
Thread-0 Thread Value:1
Thread-0 Thread Value:2
Thread-0 Thread Value:3
Thread-0 Thread Value:4
Thread-0 Thread Value:5
Note: When we call the run() method directly, no new thread is created and the
output is printed by the main thread, like below:
main Thread Value:1
main Thread Value:2
main Thread Value:3
main Thread Value:4
main Thread Value:5
83. Deadlock
When we execute multiple threads that act on the same synchronized object
simultaneously, another problem may occur, called deadlock.
A deadlock may occur if Thread1 is holding resource1 and waiting for resource2
to be released by Thread2, while at the same time Thread2 is holding resource2
and waiting for resource1 to be released by Thread1. In this case both threads
wait forever and neither will ever execute; this situation is called a deadlock.
Java has no built-in mechanism to resolve a deadlock; the programmer is
responsible for writing logic (such as acquiring locks in a consistent order) to
avoid it.
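One common avoidance strategy is to make every thread acquire the two locks in the same global order, which breaks the circular wait. A minimal sketch (class and method names are illustrative; ordering by identity hash can in rare cases tie, so a real implementation would add a tie-breaker lock):

```java
public class DeadlockFreeTransfer {
    // Acquires the two locks in a fixed global order so that two threads
    // locking (A, B) and (B, A) can never wait on each other in a cycle
    static void transfer(Object lockA, Object lockB, Runnable action) {
        Object first = System.identityHashCode(lockA) <= System.identityHashCode(lockB)
                ? lockA : lockB;
        Object second = (first == lockA) ? lockB : lockA;
        synchronized (first) {
            synchronized (second) {
                action.run();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Object r1 = new Object(), r2 = new Object();
        // Opposite argument orders, yet both threads lock in the same real order
        Thread t1 = new Thread(() -> transfer(r1, r2, () -> System.out.println("t1 done")));
        Thread t2 = new Thread(() -> transfer(r2, r1, () -> System.out.println("t2 done")));
        t1.start();
        t2.start();
        t1.join();
        t2.join(); // completes: no circular wait is possible
    }
}
```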
A race condition occurs when two or more threads access shared resources
concurrently, and the final outcome depends on the timing or order of their
execution. This leads to unpredictable and inconsistent behavior.
2. notify(): This method is used to send a notification to one of the waiting
threads so that it re-enters the runnable state and executes its remaining task.
3. notifyAll(): This method is used to send a notification to all waiting threads
so that they all re-enter the runnable state.
-> All these 3 methods are available in the Object class, the supermost class, so
we can access them in any class directly without any reference.
-> These methods are mainly used to perform inter-thread communication.
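Inter-thread communication with wait()/notifyAll() can be sketched as a one-slot producer/consumer (a minimal sketch; class and method names are illustrative, and interrupts are handled by simply giving up):

```java
public class ProducerConsumer {
    private final Object lock = new Object();
    private Integer item = null; // shared one-slot buffer

    public void produce(int value) {
        synchronized (lock) {
            while (item != null) { // wait until the slot is empty
                try { lock.wait(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
            }
            item = value;
            lock.notifyAll(); // wake up any waiting consumer
        }
    }

    public int consume() {
        synchronized (lock) {
            while (item == null) { // wait until the slot is filled
                try { lock.wait(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); return -1; }
            }
            int value = item;
            item = null;
            lock.notifyAll(); // wake up any waiting producer
            return value;
        }
    }
}
```

Note that wait() is always called inside a loop that re-checks the condition, because a thread can wake up without the condition being true.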
When we call the start() method, the Thread Scheduler begins its operation:
1) Allocating resources
2) Thread scheduling
3) Thread execution by calling the run() method
Runnable : After calling the start() method, a thread moves from the new state to
the runnable state.
Running : A thread comes to the running state when the Thread Scheduler picks
that thread up for execution.
Blocked : A thread is in the waiting state if it waits for another thread to complete
its task.
Terminated : A thread enters the terminated state once it completes its task.
yield ( ) :- The yield() method is used to give other threads of equal priority a
chance to execute.
projects like Spring Boot and Spring MVC.
B. Isolation in @Transactional
Isolation defines how transactions interact with each other, mainly dealing with
concurrency issues.
This PageRequest is then passed to the repository method. Spring Data JPA
handles the pagination logic automatically, returning a Page object that contains
the requested page of data along with useful information like total pages and total
elements. This approach allows me to efficiently manage large datasets by
retrieving only a subset of data at a time.
tables.
JPA: Java Persistence API. It is a specification which provides a standard API to
persist Java objects into relational databases.
Using all three, you can easily store, retrieve, update and delete your Java
objects in the database without manually writing SQL queries.
104. What are the differences between get() and load() methods in
Hibernate?
The get() method in Hibernate retrieves the object if it exists in the database;
otherwise, it returns null. The load() method also retrieves the object, but if it
doesn’t exist, it throws an ObjectNotFoundException. load() can use a proxy to
fetch the data lazily.
Drawback:
1. In high concurrency scenarios, frequent retries due to version number
updates can lead to significant performance degradation.
Drawback:
1. Can lead to reduced performance due to blocking or waiting.
⦁ It remains in the session but is scheduled for removal.
1. If you are using MySQL, you have to tell Hibernate to use MySQLDialect:
spring.jpa.properties.hibernate.dialect =
org.hibernate.dialect.MySQL5Dialect
2. Similarly, if you are using PostgreSQL, you have to specify PostgreSQLDialect:
spring.jpa.properties.hibernate.dialect =
org.hibernate.dialect.PostgreSQLDialect
⦁ When you make a PATCH request, you send only the parts of the resource
that you want to update, rather than the entire representation.
⦁ The server applies the partial update to the resource, modifying only the
specified fields or properties.
⦁ PATCH is useful when you want to make small changes to a resource without
having to send the entire representation, which can be more efficient in
some cases.
In summary, PUT is used for full updates, while PATCH is used for partial updates.
The choice between PUT and PATCH depends on the specific use case and the
desired behavior for updating the resource.
PUT: Replaces a resource. Multiple identical requests will result in the resource
being updated to the same state.
DELETE: Removes a resource. Multiple identical requests will result in the resource
being deleted (if it exists), and subsequent requests will have no additional effect.
HEAD: Similar to GET but without the response body. Multiple identical requests
will yield the same metadata.
OPTIONS: Returns the supported HTTP methods. Multiple identical requests will
result in the same response.
POST: Typically used to create a resource. Multiple identical POST requests can
result in multiple resources being created, which means the outcome can change
with each request.
⦁ Idempotency is crucial for building fault-tolerant APIs.
⦁ To prevent issues like duplicate payments due to network failures or
timeouts, you can make POST requests idempotent by using an
Idempotency-Key.
⦁ The server checks if the Idempotency-Key exists in the request headers:
⦁ If the key is found, the server returns the cached response, avoiding
duplicate processing.
⦁ If not, the server processes the request and stores the response associated
with that key.
# Here are a few rules to help you decide which indexes to create:
⦁ If your record retrievals are based on one field at a time (for example,
dept='D101'), create an index on these fields.
⦁ If your record retrievals are based on a combination of fields, look at the
combinations.
⦁ If the comparison operator for the conditions is AND (for example, CITY =
'Raleigh' AND STATE = 'NC'), then build a concatenated index on the CITY
and STATE fields. This index is also useful for retrieving records based on
the CITY field.
⦁ If the comparison operator is OR (for example, DEPT = 'D101' OR
HIRE_DATE > {01/30/89}), an index does not help performance.
Therefore, you need not create one.
⦁ If the retrieval conditions contain both AND and OR comparison
operators, you can use an index if the OR conditions are grouped. For
example:
dept = 'D101' AND (hire_date > {01/30/89} OR exempt = 1)
⦁ In this case, an index on the DEPT field improves performance.
⦁ If the AND conditions are grouped, an index does not improve performance.
For example:
(dept = 'D101' AND hire_date > {01/30/89}) OR exempt = 1
In this example, the DEPT and EMP database tables are being joined using the
department ID field. When the driver executes a query that contains a join, it
processes the tables from left to right and uses an index on the second table's join
field (the DEPT field of the EMP table).
To improve join performance, you need an index on the join field of the second
table in the From clause.
If there is a third table in the From clause, the driver also uses an index on the
field in the third table that joins it to any previous table. For example:
SELECT * FROM dept, emp, addr WHERE dept.dept_id = emp.dept AND
emp.loc = addr.loc
In this case, you should have an index on the EMP.DEPT field and the ADDR.LOC
field.
# Example:-
CREATE TABLE Customer (
CustomerID int PRIMARY KEY,
Name varchar(255),
Address varchar(255),
Email varchar(255)
);
Suppose you frequently search for customers by email. Without an index, the
database would have to scan every row to find the customer. You can create an
index on the Email column to speed up these searches:
CREATE INDEX idx_customer_email ON Customer (Email);
The database will use the idx_customer_email index to quickly locate the row(s)
that match the email, significantly speeding up the query.
⦁ DataSource: Responsible for setting up the connection to the database with the
necessary configuration details.
⦁ EntityManagerFactory: Manages the JPA entities and provides EntityManager
instances to interact with the persistence context.
⦁ TransactionManager: Manages transactions to ensure data consistency and
integrity.
2. @EmbeddedId: Marks the field in the entity class that represents the
composite key.
@Entity
@Table(name = "your_table_name")
public class YourEntity {
@EmbeddedId
private CompositeKey id;
private String someOtherField;
}
120. How to write native query and custom query in spring data jpa.
In Spring Data JPA, you can write native queries and custom queries using the
@Query annotation.
Native Queries :- Native queries allow you to write raw SQL queries directly, giving
you more control over the database operations.
@Query(value = "SELECT * FROM Customer WHERE email LIKE %:domain",
nativeQuery = true)
List<Customer> findByEmailDomain(@Param("domain") String domain);
Custom Queries :- Custom queries are written using JPQL (Java Persistence Query
Language), which is similar to SQL but operates on the entity objects rather than
database tables.
@Query("SELECT c FROM Customer c WHERE c.lastName = :lastName")
List<Customer> findByLastName(@Param("lastName") String lastName);
4. Truncate: Clears all data from a table while retaining its structure, often
faster than DELETE.
5. Rename: Used to change the name of database objects like tables, columns,
or indexes.
DDL commands are used to define or modify the structure of the database and are
primarily used for schema changes.
or right table. Non-matching rows in either table are filled with NULL.
5. CROSS JOIN : Produces a Cartesian product, combining all rows from both
tables.
6. SELF JOIN : Joins a table to itself, useful for hierarchical or relationship-based
queries.
2. 2NF (Second Normal Form)
A table is in 2NF if:
1. It is in 1NF.
2. All non-key attributes are fully dependent on the primary key.
129. What is ACID Properties ?
1. Atomicity : Ensures the transaction is all-or-nothing.
2. Consistency : Ensures the database remains in a valid state before and after a
transaction.
3. Isolation : Ensures that transactions are executed independently, even if they
run concurrently.
4. Durability : Ensures that committed transactions are permanent, even in the
event of system failure.
1. Ranking Functions
⦁ ROW_NUMBER()
⦁ RANK()
⦁ DENSE_RANK()
⦁ NTILE(n)
2. Aggregate Functions as Window Functions
⦁ SUM()
⦁ AVG()
⦁ COUNT()
⦁ MIN()
⦁ MAX()
3. Value Functions
⦁ LAG()
⦁ LEAD()
⦁ FIRST_VALUE()
⦁ LAST_VALUE()
Syntax
SELECT column_name,
window_function() OVER (
PARTITION BY partition_column
ORDER BY order_column
) AS alias
FROM table_name;
# How IoC Container Works (Bean Life Cycle) :-
1. Initialization: When the application starts, the Spring IoC container initializes.
2. Bean Creation: The container creates the beans.
3. Dependency Injection: The container injects the dependencies of the beans.
4. Bean Management: The container manages the beans and controls their
lifecycle.
5. Bean Post-Processing (Optional): If the container has any
BeanPostProcessors defined, they are applied to the bean before and after
initialization. @PostConstruct or a custom init-method are executed here.
6. Bean Ready for Use: After all the previous steps, the bean is fully initialized
and ready for use by the application.
7. Destruction: When the Spring container is destroyed (for example, during
application shutdown), it calls the bean's destruction callback (e.g.,
@PreDestroy or destroy-method).
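Steps 5 and 7 can be sketched with lifecycle callbacks (a minimal sketch; the class name is illustrative, and the javax.annotation package assumes a pre-Jakarta Spring Boot version):

```java
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
public class CacheWarmer {

    @PostConstruct
    public void init() {
        // Runs in step 5: after dependency injection, before first use
        System.out.println("Bean initialized: warming cache");
    }

    @PreDestroy
    public void cleanup() {
        // Runs in step 7: during container shutdown
        System.out.println("Bean destroyed: releasing resources");
    }
}
```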
within the application.
These scopes give us the flexibility to configure beans according to the needs
of our application.
1. Singleton (Default)
Singleton is Spring's default scope. In this scope, the container creates a
single instance of the bean, and that instance lives with the application
context.
2. Prototype
@Scope("prototype")
In prototype scope, the container creates a new instance every time the
bean is requested.
3. Request
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode =
ScopedProxyMode.TARGET_CLASS)
Request scope is useful for Spring MVC applications. In this scope, a single
instance of the bean is created during the lifecycle of one HTTP request.
4. Session
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode =
ScopedProxyMode.TARGET_CLASS)
In session scope, one instance of the bean is created per HTTP session and
lives until that session ends.
5. GlobalSession
@Scope(value = WebApplicationContext.SCOPE_GLOBAL_SESSION,
proxyMode = ScopedProxyMode.TARGET_CLASS)
GlobalSession scope is used in portlet-based web applications. This scope
creates one instance of the bean for a global HTTP session.
6. Application
@Scope(value = WebApplicationContext.SCOPE_APPLICATION)
In application scope, one instance of the bean is created for the
ServletContext and lives for the lifecycle of the application.
Proxies:
Lifecycle Management: The proxy helps manage the lifecycle of the scoped bean
correctly. It ensures that each session gets its own instance, even when the bean is
injected into a singleton or prototype bean.
Boot Actuator is a powerful tool for monitoring and managing your Spring Boot
applications in production, providing crucial insights into application health and
performance.
137. Difference between URI and URL .
⦁ URI = Identifies a resource uniquely (like a name, ID, or ISBN).
⦁ URL = Specifies where and how to access the resource (like an address or
website link).
Example:
1. Website Domain vs. Web Page Link
⦁ URI (General Identification): https://example.com is a URI because it
identifies a website.
⦁ URL (Locator + Access Method): https://example.com/products/shoes?
color=red&size=9 is a URL because it provides the exact location and access
details for a specific product.
2. Email Address vs. Webmail Link
⦁ URI (Identifier Only): Your email address is an identifier but doesn’t specify
how to access it. Example: mailto:johndoe@example.com
⦁ URL (Locator + Method): A webmail link that directs you to an interface
where you can read/send emails. Example:
https://mail.google.com/mail/u/0/#inbox
138. @RequestBody
In Spring Boot, the @RequestBody annotation is used to bind the request body
(the data sent by the client) to a method parameter in a controller method. It is
typically used in RESTful APIs to receive data as JSON or XML payloads in POST,
PUT, PATCH, or DELETE requests.
Receiving JSON Data: When a client sends a JSON object in a request body, you
can use @RequestBody to map this JSON object to a Java object.
Automatic Conversion: Spring automatically converts the JSON data to a Java
object using a message converter (usually Jackson).
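A minimal controller sketch showing @RequestBody binding (the Customer class, endpoint path, and use of ResponseEntity are illustrative assumptions, not from the original):

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/customers")
public class CustomerController {

    // A JSON body such as {"name":"John","email":"john@example.com"} is
    // converted into a Customer object by Jackson before this method runs
    @PostMapping
    public ResponseEntity<Customer> create(@RequestBody Customer customer) {
        return ResponseEntity.status(HttpStatus.CREATED).body(customer);
    }
}
```

Here Customer is assumed to be a plain POJO with fields matching the JSON property names.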
139. Difference between @Service and @Component
⦁ @EnableAutoConfiguration(exclude =
{ DataSourceAutoConfiguration.class,
JpaRepositoriesAutoConfiguration.class })
⦁ spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jd
bc.DataSourceAutoConfiguration,org.springframework.boot.autoconfigure
.data.jpa.JpaRepositoriesAutoConfiguration
⦁ @SpringBootApplication(exclude = { AutoConfiguration.class })
For example, if we are auto-configuring a data source but want to back off when a
data source bean is manually defined, we annotate the auto-configuration method
with @ConditionalOnMissingBean(DataSource.class). This ensures our custom
configuration takes precedence, and Spring Boot's auto-configuration will not
interfere if the bean is already defined.
143. How to create Custom Annotation in SpringBoot ?
// Custom annotation for class level
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface ClassLevelAnnotation {
    String value() default "";
}

@Before("@within(ClassLevelAnnotation) || @annotation(ClassLevelAnnotation)")
public void handleClassLevelAnnotation() {
    System.out.println("Class level annotation is present");
}

@Before("@annotation(MethodLevelAnnotation)")
public void handleMethodLevelAnnotation() {
    System.out.println("Method level annotation is present");
}
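The RUNTIME retention above is what makes the annotation visible at runtime to the AOP proxy. As a framework-free sketch (class names here are illustrative), a RUNTIME-retained annotation can be read back reflectively, which is roughly what the `@within` pointcut checks:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    // Same shape as the custom annotation above, minus Spring
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface ClassLevelAnnotation {
        String value() default "";
    }

    @ClassLevelAnnotation("audited")
    static class OrderService {}

    // Returns the annotation's value if present, null otherwise
    public static String annotationValue(Class<?> type) {
        ClassLevelAnnotation ann = type.getAnnotation(ClassLevelAnnotation.class);
        return ann == null ? null : ann.value();
    }
}
```

With SOURCE or CLASS retention instead, `getAnnotation()` would return null and the aspect would never fire.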
i) application.properties file
my.property=someValue
my.anotherProperty=42
@Configuration
@ConfigurationProperties(prefix = "my")
@Data
public class MyProperties {
private String property;
private int anotherProperty;
}
2. @Value Annotation :-
This method is straightforward and often used for injecting single property values.
It is simpler for basic needs but less preferred for more complex configurations.
# Access the properties in your components:
@Component
public class MyComponent {
@Value("${my.property}")
private String myProperty;

@Value("${my.anotherProperty}")
private int anotherProperty;

public void printProperty() {
    System.out.println("Property value: " + myProperty);
    System.out.println("Another property value: " + anotherProperty);
}
}
# Step-by-Step Implementation
CreditCardPayment.java
@Service
public class CreditCardPayment implements PaymentService {
    @Override
    public void processPayment(double amount) {
        System.out.println("Processing credit card payment of " + amount);
    }
}
PayPalPayment.java
@Service
public class PayPalPayment implements PaymentService {
    @Override
    public void processPayment(double amount) {
        System.out.println("Processing PayPal payment of " + amount);
    }
}
149. What is Auto-wiring?
Autowiring in Spring is the process by which Spring automatically injects the
dependencies of objects into one another. It eliminates the need for manual bean
wiring and makes the code cleaner and easier to maintain.
Spring Boot automatically configures the new server as the embedded server for
our application. This flexibility allows us to choose the server that best fits our
needs without significant changes to our application, making Spring Boot
adaptable to various deployment environments and requirements.
them as beans in the ApplicationContext.
@Size, etc., and using a Validator implementation, Spring can automatically ensure
that model attributes adhere to defined rules before processing them.
166. How to bind the form data to Model Object in Spring MVC.
Form data is bound to model objects in Spring MVC using @ModelAttribute
annotation. This automatically populates a model object with request parameters
matching the object's field names.
For example, if there are two beans of type DataSource, we can give each a name
and use @Qualifier("beanName") to tell Spring which one to use.
Another way is to use @Primary on one of the beans, marking it as the default
choice when injecting that type.
microservices architecture using Spring Boot?
For simple, direct communication, I would use RestTemplate, which allows
services to send requests and receive responses like a two-way conversation.
For more complex interactions, especially when dealing with multiple services, I
would choose Feign Client. Feign Client simplifies declaring and making web
service clients, making the code cleaner and the process more efficient.
170. Discuss how you would add a GraphQL API to an existing Spring
Boot RESTful service.
First, I'd add GraphQL Java and GraphQL Spring Boot starter dependencies to my
pom.xml or build.gradle file. Secondly, I'd create a GraphQL schema file
(schema.graphqls) in the src/main/resources folder.
Then I'd implement data fetchers to retrieve data from the existing services or
directly from the database. Moving ahead, I'd configure a GraphQL service
using the schema and data fetchers.
Then I would expose the graphql endpoint and make sure it is correctly
configured. Finally, I'd test the GraphQL API using tools like GraphiQL or Postman
to make sure it's working as expected.
171. Imagine Your application requires data from an external REST API
to function. Describe how you would use RestTemplate or WebClient
to consume the REST API in your Spring Boot application.
Talking about RestTemplate: First, I would define a RestTemplate bean in a
configuration class using @Bean annotation so it can be auto-injected anywhere I
need it. Then, I'd use RestTemplate to make HTTP calls by creating an instance and
using methods like getForObject() for a GET request, providing the URL of the
external API and the class type for the response.
Talking about WebClient : I would define a WebClient bean similarly using @Bean
annotation. Then I would use this WebClient to make asynchronous requests,
calling methods like get(), specifying the URL, and then using retrieve() to fetch the
response. I would also handle the data using methods like bodyToMono() or
bodyToFlux() depending on if I am expecting a single object or a list.
172. How you would use Spring WebFlux to consume data from an
external service in a non-blocking manner and process this data
reactively within your Spring Boot application.
In a Spring Boot app using Spring WebFlux, I'd use WebClient to fetch data from
an external service without slowing things down. WebClient makes it easy to get
data in a way that doesn't stop other parts of the app from working.
When the data comes in, it's handled reactively, meaning I can work with it on the
go like filtering or changing it without waiting for everything to finish loading. This
keeps the app fast and responsive, even when dealing with a lot of data or making
many requests.
• Next, I would mark methods I want to run asynchronously with the @Async
annotation. These methods can return void or a Future type if I want to track the
result.
• Finally, I would call these methods like any other method. Spring takes care of
running them in separate threads, allowing the calling thread to proceed without
waiting for the task to finish.
Remember, for the @Async annotation to be effective, the method calls must be
made from outside the class. If I call an asynchronous method from within the
same class, it won't execute asynchronously due to the way Spring proxying works.
175. How would you implement efficient handling of large file uploads
in a Spring Boot REST API, ensuring that the system remains responsive
and scalable?
To handle big file uploads in a Spring Boot REST API without slowing down the
system, I'd use a method that processes files in the background and streams them
directly where they need to go, like a hard drive or the cloud.
This way, the main part of the app stays fast and can handle more users or tasks at
the same time.
Also, by saving files outside the main server, like on Amazon S3, it helps the app
run smoothly even as it grows or when lots of users are uploading files.
Then, we create a method that returns our error page or message for 404 errors,
and we map this method to the /error URL using @RequestMapping.
In this method, we can check the error type and customize what users see when
they hit a page that doesn't exist. This way, we can make the error message or
page nicer and more helpful.
177. How to get the list of all the beans in your spring boot application?
Step 1: First I would Autowire the ApplicationContext into the class where I want
to list the beans.
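Step 2 would then ask the context for every registered bean name. A minimal sketch (the runner class name is illustrative):

```java
// Hypothetical runner that prints every bean name registered in the context
@Component
public class BeanLister implements CommandLineRunner {

    @Autowired
    private ApplicationContext applicationContext;

    @Override
    public void run(String... args) {
        // ApplicationContext extends ListableBeanFactory, which exposes
        // getBeanDefinitionNames() for exactly this purpose
        String[] beanNames = applicationContext.getBeanDefinitionNames();
        for (String name : beanNames) {
            System.out.println(name);
        }
    }
}
```

The list includes both our own beans and the ones Spring Boot auto-configured.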
Cache expiration is when data is removed because it's too old, based on a
predetermined time-to-live (TTL).
So, eviction manages cache size, while expiration ensures data freshness.
179. If you had to scale a Spring Boot application to handle high traffic,
what strategies would you use?
To scale a Spring Boot application for high traffic, we can:
⦁ Add more app instances (horizontal scaling) and use a load balancer to
spread out the traffic.
⦁ Break our app into microservices so each part can be scaled independently.
⦁ Use cloud services that can automatically adjust resources based on our app's
needs.
⦁ Use caching to store frequently accessed data, reducing the need to fetch it
from the database every time.
⦁ Implement an API Gateway to handle requests and take care of things like
authentication
180. What strategies would you use to optimize the performance of a
Spring Boot application?
Let’s say my Spring Boot application is taking too long to respond to user requests.
I could:
⦁ Implement caching for frequently accessed data.
⦁ Optimize database queries to reduce the load on the database.
⦁ Use asynchronous methods for operations like sending emails.
⦁ Use a load balancer if traffic is high.
⦁ Optimize the time complexity of the code.
⦁ Use WebFlux to handle a large number of concurrent connections.
I would also analyze application logs and metrics to spot any patterns or errors,
especially under high load.
Then, I would run performance tests to replicate the issue and use a profiler for
code-level analysis.
After getting findings, I might optimize the database, implement caching, or use
scaling options. It's also crucial to continuously monitor the application to prevent
future issues.
This setup ensures that my Spring Boot application is secure, managing both
authentication and authorization effectively.
Authorization decides what I'm allowed to do after I'm identified, like if I can
access certain parts of an app. It's about permissions.
It goes through each way to find one that can confirm the user’s details are valid.
This setup lets Spring Security handle different login methods, like checking against
a database or an online service, making sure the user is who they say they are.
185. What is the best practice for storing passwords in a Spring Security
application?
The best practice for storing passwords in a Spring Security application is to never
store plain text passwords. Instead, passwords should be hashed using a strong,
one-way hashing algorithm like bcrypt, which Spring Security supports.
Hashing converts the password into a unique, fixed-size string that cannot be
easily reversed.
Additionally, using a salt (a random value added to the password before hashing)
makes the hash even more secure by preventing attacks like rainbow table
lookups. This way, even if the password data is compromised, the actual
passwords remain protected.
This makes every user's password hash unique, even if the actual passwords are
the same. It helps stop attackers from guessing passwords using known hash lists.
When a password needs to be checked, it's combined with its salt again, hashed,
and then compared to the stored hash to see if the password is correct. This way,
the security of user passwords is greatly increased.
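To illustrate just the salting idea, here is a plain-Java sketch showing that the same password hashed with two different salts produces two different hashes. Note this uses SHA-256 purely for illustration — real password storage should use a slow algorithm like bcrypt (e.g., Spring Security's BCryptPasswordEncoder, which handles salting internally):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class SaltDemo {
    // Hash salt + password with SHA-256; illustration of salting only
    public static String hash(String password, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // A fresh random salt per user defeats precomputed rainbow tables
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}
```

Verifying a login repeats the same computation: hash the submitted password with the stored salt and compare against the stored hash.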
187. In your application, there are two types of users: ADMIN and
USER. Each type should have access to different sets of API endpoints.
Explain how you would configure Spring Security to enforce these
access controls based on the user's role.
In the application, to control who can access which API endpoints, I can use Spring
Security to set rules based on user roles. I can configure it so that only ADMIN
users can reach admin-related endpoints and USER users can access user-related
endpoints.
This is done by defining patterns in the security settings, where I link certain URL
paths with specific roles, like making all paths starting with "/admin" accessible
only to users with the ADMIN role, and paths starting with "/user" accessible to
those with the USER role. This way, each type of user gets access to the right parts
of the application.
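A hedged sketch of such a configuration in the Spring Security 6 style (the paths and roles are examples, not from the original notes):

```java
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                .requestMatchers("/admin/**").hasRole("ADMIN") // ADMIN-only paths
                .requestMatchers("/user/**").hasRole("USER")   // USER-only paths
                .anyRequest().authenticated())                 // everything else needs login
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```

hasRole("ADMIN") matches the ROLE_ADMIN authority; rules are evaluated top-down, so more specific patterns go first.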
When the server gets this scrambled password, it compares it with its own
scrambled version. If they match, it means the user's identity is verified, and access
is granted. This method is more secure because the real password is never exposed
during the check.
189. How does Spring Security handle session management, and what
are the options for handling concurrent sessions?
Spring Security handles session management by creating a session for the user
upon successful authentication.
For managing concurrent sessions, it provides options to control how many
sessions a user can have at once and what happens when the limit is exceeded.
For example, I can configure it to prevent new logins if the user already has an
active session or to end the oldest session. This is managed through the session
management settings in the Spring Security configuration, where I can set policies
like maximumSessions to limit the number of concurrent sessions per user.
190. Imagine you are designing a Spring Boot application that interfaces
with multiple external APIs. How would you handle API rate limits and
failures?
To handle API rate limits and failures in a Spring Boot application, I would
191. To protect your application from abuse and ensure fair usage, you
decide to implement rate limiting on your API endpoints. Describe a
simple approach to achieve this in Spring Boot.
To implement rate limiting in a Spring Boot application, a simple approach is to use
a library like Bucket4j or Spring Cloud Gateway with built-in rate-limiting
capabilities. By integrating one of these libraries, I can define policies directly on
my API endpoints to limit the number of requests a user can make in a given time
frame.
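The same idea can be shown without any library. Below is a tiny token-bucket sketch (class name and numbers are illustrative — Bucket4j implements this pattern far more robustly, with per-user buckets and distributed backends):

```java
public class TokenBucket {
    private final int capacity;          // max burst size
    private final int refillPerSecond;   // sustained request rate
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(int capacity, int refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // Returns true if the request is allowed, false if it should be rejected
    // (typically with HTTP 429 Too Many Requests)
    public synchronized boolean tryConsume() {
        refill();
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSeconds * refillPerSecond);
        lastRefillNanos = now;
    }
}
```

In a Spring Boot app, this check would typically live in a filter or interceptor keyed by client IP or API key.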
192. Explain Cross-Origin Resource Sharing (CORS) and how you would
configure it in a Spring Boot application.
Cross-Origin Resource Sharing allows a website to safely access resources from
another website. In Spring Boot, we can set up CORS by adding @CrossOrigin to
controllers or by configuring it globally.
This tells our application which other websites can use its resources, what type of
requests they can make, and what headers they can use.
This way, We control who can interact with our application, keeping it secure while
letting it communicate across different web domains.
Same-Origin Policy:-
The Same-Origin Policy ensures that a web page can only access resources that
match its own domain, protocol, and port. For example, if your web page is
hosted at https://example.com, it can only access resources from
https://example.com, not from https://another-domain.com.
CSRF (Cross-Site Request Forgery) tricks the user's browser into performing an
action on a website where the user is already authenticated. For example, if a
user is logged in to their bank account, a CSRF attack can trick the user's
browser into transferring money to the attacker's account without the user's
consent.
CSRF protection often uses a CSRF token that must be sent along with each
request. This token is unique and secret, and the server verifies it so that only
legitimate requests are accepted.
193. How can you use Spring Expression Language (SpEL) for fine
grained access control?
I can use Spring Expression Language (SpEL) for fine-grained access control by
applying it in annotations like @PreAuthorize in Spring Security.
With SpEL, I can create complex expressions to evaluate the user's context, such as
roles, permissions, and even specific method parameters, to decide access rights.
This allows for detailed control over who can access what in the application,
making the security checks more dynamic and tailored to the specific scenario,
ensuring that users only access resources and actions they are authorized for.
194. Explain the process of creating a Docker image for a Spring Boot
application.
To make a Docker image for a Spring Boot app, we start by writing a Dockerfile.
This file tells Docker how to build our app's image.
We mention which Java version to use, add our app's .jar file, and specify how to
run our app.
Then we run the docker build command, which tells Docker to create the image
with everything our app needs to run. By doing this, we can easily run our
Spring Boot app anywhere Docker is available, making our app portable and easy
to deploy.
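A typical (hedged) example of such a Dockerfile, assuming a Maven build that produces target/myapplication.jar:

```dockerfile
# Base image providing the Java runtime
FROM eclipse-temurin:17-jre

# Copy the built jar into the image
COPY target/myapplication.jar app.jar

# How to start the application
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Building with `docker build -t myapplication .` and running with `docker run -p 8080:8080 myapplication` would then serve the app on port 8080.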
195. How to Deploy Spring Boot Web Applications as Jar and War Files?
To deploy Spring Boot web applications, we can package them as either JAR or
WAR files. For a JAR, we use Spring Boot's embedded server, like Tomcat, by
running the command mvn package and then java -jar target/myapplication.jar.
When we make changes to our code and push them, the pipeline automatically
builds the app, runs tests, and if everything looks good, deploys it. This uses tools
like Jenkins or GitHub Actions to automate tasks, such as compiling the code and
checking for errors.
If all tests pass, the app can be automatically sent to a test environment or directly
to users. This setup helps us quickly find and fix errors, improve the quality of our
app, and make updates faster without manual steps.
Additionally, with the @Profile annotation, I would selectively load certain beans
or configurations according to the current environment, ensuring that my
application adapts seamlessly to both development and production settings.
Dis-Advantages
1) Difficult to maintain
2) Dependencies among the functionalities
3) Single Point Of Failure
4) Entire Project Re-Deployment
3) Faster Development
4) Quick Deployment
5) Faster Releases
6) Less Downtime
7) Technology Independence (We can develop backend apis with multiple
technologies)
Dis-Advantages
1) Bounded Context (deciding the no. of services to be created)
2) Lot of configurations (every microservice repeats some common
configuration, e.g., Datasource, SMTP, Kafka, Redis)
3) Visibility
200.
Spring Cloud is one of the components of the Spring framework, it helps manage
microservices.
Imagine we are running an online store application, like a virtual mall, where
different sections handle different tasks. In this app, each store or section is a
microservice. One section handles customer logins, another manages the shopping
cart, one takes care of processing payments, and the other lists all the products.
Building and managing such an app can be complex because we need all these
sections to work together seamlessly. Customers should be able to log in, add
items to their cart, pay for them, and browse products without any problems.
That’s where Spring Cloud comes into the picture. It helps the microservices
with things like connecting the sections (service discovery), balancing the
crowd (load balancing), and keeping secrets safe (configuration management).
Also, setting up timeouts helps avoid waiting too long for something that might
not work. Plus, keeping an eye on the system with good logging and monitoring
lets us spot and fix issues fast. This approach keeps the app running smoothly, even
when some parts have trouble.
track of all service endpoints.
207. What is a Circuit Breaker in microservices?
A Circuit Breaker is a design pattern used in microservices to prevent a network or
service failure from cascading to other services. It monitors for failures and, once a
threshold is reached, it trips the circuit breaker, which prevents further failures.
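A stripped-down sketch of the state machine behind this pattern (real implementations such as Resilience4j add half-open probes, timeouts, and metrics; the class name and threshold here are illustrative):

```java
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Callers check this before making the remote call; when the circuit
    // is open they fail fast (or return a fallback) instead of calling
    public boolean allowRequest() {
        return !open;
    }

    public void recordSuccess() {
        consecutiveFailures = 0;
    }

    public void recordFailure() {
        consecutiveFailures++;
        if (consecutiveFailures >= failureThreshold) {
            open = true; // trip: stop sending requests to the failing service
        }
    }
}
```

Failing fast like this is what stops one slow or dead service from tying up threads across the whole system.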
208. How do you handle data consistency in a microservices
architecture?
Data consistency in microservices can be managed through approaches like event
driven architecture, using eventual consistency, and implementing transactional
outbox patterns where database transactions and event publishing are done
atomically.
209. What is containerization and how does it benefit microservices?
Containerization involves encapsulating an application and its environment into a
container that can be run on any platform. It benefits microservices by ensuring
consistency across environments, facilitating scalability, and simplifying
deployment and operations.
210. Explain the concept of Blue/Green deployment in microservices.
Blue/Green deployment is a technique to reduce downtime and risk by running
two identical production environments called Blue and Green. Only one of the
environments is live at a time, where the Green environment is used to mirror the
Blue before it becomes live.
211. How do microservices communicate with each other?
Microservices communicate with each other using lightweight protocols such as
HTTP/REST, AMQP for messaging systems, or even gRPC for high-performance RPC
communication.
212. What is Domain-Driven Design (DDD) in microservices?
Domain-Driven Design is an approach to developing software for complex needs by
deeply connecting the implementation to an evolving model of the core business
concepts. It is used in microservices to divide systems into bounded contexts and
ensure each service models a specific domain.
213. How does microservices architecture handle security?
Security in microservices is handled through patterns like authentication gateways,
securing service-to-service communication through protocols like HTTPS and
OAuth2, and using JSON Web Tokens (JWT) for maintaining secure and scalable
user access control.
214. Explain the use of Observability in microservices.
Observability in microservices involves monitoring and tracking the internal states
of systems by using logs, metrics, and traces. This helps in understanding system
performance and troubleshooting issues in a distributed system.
215. What is the role of a configuration server in microservices?
A configuration server manages external configuration properties for applications
in a microservice architecture. This allows for easier maintenance of service
configurations without the need to redeploy or restart services when
configurations change.
216. How do you ensure fault tolerance in microservices?
Fault tolerance in microservices can be ensured by implementing patterns such as
Circuit Breaker, Failover, Retry mechanisms, and using Rate Limiters to prevent
system overload.
217. What is a Saga pattern in microservices?
The Saga pattern is a way to manage data consistency across microservices by
using a sequence of local transactions. Each local transaction updates data within
a single service and publishes an event or message to trigger the next local
transaction in the saga.
218. What is an anti-corruption layer in microservices?
An anti-corruption layer is a component that translates between different
subsystems in a microservices architecture, protecting each service from changes
in other services. This layer helps maintain independent and decoupled service
development.
219. Explain how microservices can be scaled.
Microservices can be scaled horizontally by adding more instances of the services
to handle increased load, or vertically by adding more resources like CPU or
memory to existing instances. This can be dynamically managed using
orchestration tools like Kubernetes.
220. What is Event Sourcing in microservices?
Event Sourcing is a pattern where the state of a business entity is stored as a
sequence of state-changing events. Whenever the state of a business entity needs
to be determined, these events are replayed to achieve the current state. This is
useful in microservices for ensuring all changes are captured and can be
reconstructed in case of failures.
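A tiny sketch of the replay idea for, say, an account balance (the event type and names are illustrative):

```java
import java.util.List;

public class EventSourcingDemo {
    // Each event records a signed change to the balance
    record BalanceChanged(long delta) {}

    // Current state is never stored directly — it is rebuilt by
    // replaying every event in order
    public static long replay(List<BalanceChanged> events) {
        long balance = 0;
        for (BalanceChanged e : events) {
            balance += e.delta();
        }
        return balance;
    }
}
```

Because the events are the source of truth, the full history is auditable and the state can always be rebuilt after a failure.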
In my application there are four ways to log in: Continue with Google, Continue
with Facebook, Continue with Apple, and Continue with Email (the traditional way).
When a user signs up with Continue with Email, they have to pass the First Name,
Last Name, DOB, Email Id, and Password. After filling in these details, the user
hits the Sign Up button. We internally validate the user input, hash the
password, and save the user details in the database. We then generate a JWT token
using the user details and send the JWT token to the client in the response. On
the client side, this token is stored in an HTTP-only cookie, so it is
automatically used for subsequent authenticated requests.
We use HTTP-only cookies for better security (this prevents token theft via XSS).
Spring Security
When we add the Spring Security dependency in pom.xml, it automatically secures
all the endpoints. To customize this, we create a SecurityConfig class and
annotate it with @Configuration, @EnableWebSecurity, and @EnableMethodSecurity.
For authentication we use JWT tokens; for authorization we use role-based access
control.
JWT Implementation
JSON Web Tokens (JWT) are compact, URL-safe tokens used for securely
transmitting information between parties as a JSON object. They are commonly
used for authentication and information exchange.
The token is mainly composed of header, payload, signature. These three parts are
separated by dots(.).
1. Header: The header typically consists of two parts: the type of the token,
which is JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA.
For example:
{
"alg": "HS256",
"typ": "JWT"
}
Then, this JSON is Base64Url encoded to form the first part of the JWT.
2. Payload : The second part of the token is the payload, which contains the
claims. Claims are statements about an entity (typically, the user) and additional
data. There are three types of claims: registered, public, and private claims.
1. Registered claims: These are a set of predefined claims which are not
mandatory but recommended, to provide a set of useful, interoperable
claims. Some of them are: iss (issuer), exp (expiration time), sub (subject),
aud (audience), and others.
2. Public claims: These can be defined at will by those using JWTs. But to avoid
collisions they should be defined in the IANA JSON Web Token Registry or be
defined as a URI that contains a collision resistant namespace.
3. Private claims: These are custom claims created to share information
between parties that agree on using them, and are neither registered nor
public claims.
The payload is then Base64Url encoded to form the second part of the JSON Web
Token.
3. Signature : To create the signature part you have to take the encoded header,
the encoded payload, a secret, the algorithm specified in the header, and sign that.
The signature is used to verify the message wasn't changed along the way, and, in
the case of tokens signed with a private key, it can also verify that the sender of
the JWT is who it says it is.
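The three-part structure can be seen directly by splitting a token on "." and Base64Url-decoding the first two segments. A small sketch (the helper class is illustrative; verification of the third segment requires the signing key and is omitted):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtStructureDemo {
    // Decodes one Base64Url-encoded JWT segment back to its JSON text
    public static String decodeSegment(String segment) {
        byte[] bytes = Base64.getUrlDecoder().decode(segment);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // Splits a token into its header, payload, and signature segments
    public static String[] split(String token) {
        return token.split("\\.");
    }
}
```

This is also why JWTs should never carry secrets in the payload: anyone holding the token can decode the claims — the signature only proves integrity, not confidentiality.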
⦁ AuthenticationManager Interface
⦁ Used to authenticate the user's credentials via the authenticate() method.
⦁ This internally uses an AuthenticationProvider (e.g.,
DaoAuthenticationProvider, JwtAuthenticationProvider) to validate the
user's credentials.
⦁ If authentication is successful (authenticated = true), the JWT token is
generated.
⦁ JWTUtility Class
A utility class containing methods for creating and validating JWT tokens:
⦁ For Creating Tokens:
⦁ generateToken(): Generates a new token with user-specific claims.
⦁ createToken(): Builds the JWT with signing keys and claims.
⦁ getSignKey(): Returns the secret key used for signing the token.
2. Validating JWT Token
⦁ JwtAuthFilter Class
⦁ Extends OncePerRequestFilter to ensure token validation logic is executed
exactly once for each incoming HTTP request.
⦁ Responsible for extracting and validating the JWT token from the request.
⦁ doFilterInternal() Method
⦁ Step 1: Extracts the token from the Authorization header.
⦁ Step 2: Validates the token using JWTUtility.
⦁ Checks the signature, expiration, and claims.
⦁ If valid, retrieves the username and loads the user details using
UserDetailsService.
⦁ Step 3: Sets the Authentication object in the SecurityContextHolder to
authenticate the user for the current request.
⦁ If the token is invalid, rejects the request and sends an error response.
NOTE:-
Server-Side: The Spring Boot server handles generating, issuing, and validating
JWT tokens.
Client-Side: The mobile app securely stores and uses these tokens to authenticate
requests to the server. This involves using secure storage mechanisms provided by
the operating system, such as EncryptedSharedPreferences , KeyStore on Android.
################### DEPLOYMENT PROCESS #################
⦁ ...........................................................
⦁ .....................................................................
7. Deployment to Kubernetes
⦁ Kubernetes Manifests (YAML files) define the deployment and service for the
application
8. ELK Stack for Logging and Monitoring
⦁ Use Elasticsearch, Logstash, and Kibana (ELK) for centralized logging and
monitoring.
⦁ Configure the application and Kubernetes pods to send logs to ELK.
⦁ Used in integration tests.
⦁ Part of Spring Boot testing framework.
⦁ Automatically managed by Spring context.
⦁ Scope is within the Spring application context, allowing for injection into
other Spring-managed beans.
Choose @Mock when writing unit tests and you want to mock dependencies
within the test class.
Choose @MockBean when writing integration tests and you want to mock beans
within the Spring application context.
Mockito.when(userService.createUser(Mockito.any())).thenReturn(dto);

//actual request for url
this.mockMvc.perform(MockMvcRequestBuilders.post("/users")
        .contentType(MediaType.APPLICATION_JSON)
        .content(convertObjectToJsonString(user))
        .accept(MediaType.APPLICATION_JSON))
        .andDo(print())
        .andExpect(status().isCreated())
        .andExpect(jsonPath("$.name").exists());
}
@Test
public void updateUserTest() throws Exception {
    Mockito.when(userService.updateUser(Mockito.any(), Mockito.anyString())).thenReturn(dto);

    this.mockMvc.perform(MockMvcRequestBuilders.put("/users/" + userId)
            .header(HttpHeaders.AUTHORIZATION, "Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJkdXJnZXNoQGRldi5pbiIsImlhdCI6MTY3NTI0OTA0MywiZXhwIjoxNjc1MjY3MDQzfQ.HQbZ4BrQlAgd5X40RZJhSMZ0zgZAfDcQtxJaSy97YZHgdNBV0g2r7-ZXRmw1EkKhkFtdkytG_E6I7MnsxVEZqg")
            .contentType(MediaType.APPLICATION_JSON)
            .content(convertObjectToJsonString(user))
            .accept(MediaType.APPLICATION_JSON))
            .andDo(print())
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.name").exists());
}
MockMvc Components
⦁ MockMvcRequestBuilders:
This is a factory class for creating RequestBuilder instances for different HTTP
methods (GET, POST, PUT, DELETE, etc.).
⦁ MockMvcResultMatchers:
This is a factory class for creating ResultMatcher instances to verify response
status, headers, content, and more.
⦁ MockMvcResultHandlers:
This is a factory class for creating ResultHandler instances to perform actions
on the result, such as printing the response
Mockito.when(roleRepository.findById(Mockito.anyString())).thenReturn(Optional.of(role));

UserDto user1 = userService.createUser(mapper.map(user, UserDto.class));
// System.out.println(user1.getName());
Assertions.assertNotNull(user1);
Assertions.assertEquals("Durgesh", user1.getName());
}
@Test
public void deleteUserTest() {
    String userid = "userIdabc";
    Mockito.when(userRepository.findById("userIdabc")).thenReturn(Optional.of(user));
    userService.deleteUser(userid);
    Mockito.verify(userRepository, Mockito.times(1)).delete(user);
}
}
}
PowerMock: Used when more advanced features are needed, such as mocking
static methods, constructors, and private methods. It is more powerful but also
more complex and should be used cautiously.
@RunWith(PowerMockRunner.class)
@PrepareForTest(Utils.class)
public class MyServiceTest {
@InjectMocks
private MyService myService;
    @Test
    public void testProcess() {
        // Mock the static method
        PowerMockito.mockStatic(Utils.class);
        when(Utils.staticMethod("test")).thenReturn("Mocked value");
        // ... call myService and assert on the result (elided in the original notes)
    }
}
2. Swagger Editor:
An online tool or local installation that lets developers create and edit
OpenAPI specifications.
It provides real-time feedback and error checking so the API specification is
formatted correctly.
3. Swagger UI:
A web-based interface that automatically generates documentation from an
OpenAPI Specification.
It lets users interact with the API directly from the browser, with the chance
to test the different endpoints and methods.
4. Swagger Codegen:
A tool that automatically generates client libraries, server stubs, API
documentation, and configuration from an OpenAPI Specification.
5. SwaggerHub:
A collaborative platform used to design, document, and manage APIs.
It gives teams a centralized place to work on APIs, ensuring consistency and
collaboration throughout the development lifecycle.
⦁ Get your Twilio Account SID, Auth Token, and WhatsApp-enabled phone
number.
Step 2: Add Twilio SDK to Your Spring Boot Project
Step 3: Configure Twilio Credentials
⦁ Add your Twilio credentials to the application.properties file:
⦁ twilio.account-sid=your-account-sid
⦁ twilio.auth-token=your-auth-token
⦁ twilio.whatsapp-number=whatsapp:+14155238886 # Twilio's sandbox
WhatsApp number
Step 4: Send a WhatsApp Message
⦁ Create a service to send WhatsApp messages:
⦁ Initialize Twilio with your credentials, then create and send a WhatsApp message
Step 5: Test the Integration
⦁ Create a REST controller to trigger the service:
⦁ Expose an endpoint to send WhatsApp messages:
private String stripeApiKey;
@PostConstruct
public void init() {
Stripe.apiKey = stripeApiKey;
}
}
Step 5: Create a service class to handle payment processing (the parameter-building
code here is reconstructed to match the controller in Step 6):
@Service
public class PaymentService {
    public PaymentIntent createPaymentIntent(int amount, String currency,
            String paymentMethodType) throws StripeException {
        // amount is expected in the smallest currency unit (e.g. cents)
        PaymentIntentCreateParams params = PaymentIntentCreateParams.builder()
                .setAmount((long) amount)
                .setCurrency(currency)
                .addPaymentMethodType(paymentMethodType)
                .build();
        PaymentIntent paymentIntent = PaymentIntent.create(params);
        return paymentIntent;
    }
}
Step 6: Create a controller to handle payment requests:
@RestController
@RequestMapping("/api/payment")
public class PaymentController {
    @Autowired
    private PaymentService paymentService;

    @PostMapping("/create-payment-intent")
    public PaymentIntent createPaymentIntent(@RequestParam int amount,
            @RequestParam String currency,
            @RequestParam String paymentMethodType) throws StripeException {
        return paymentService.createPaymentIntent(amount, currency, paymentMethodType);
    }
}
Step 7: Handle Payment on Frontend
################### Project Questions ######################
1. Explain where you use List, Set, and Map in your Project.
A. List :- List is an ordered collection that allows duplicate elements. It is typically
used when the order of elements is important or when duplicates are allowed.
# Usages in HostelWorld.com:-
⦁ Search results: When customers search for hotels, the results can be stored
in a List to maintain the order based on relevance or any applied sorting
criteria.
⦁ Customer reviews: A List of review objects can store multiple reviews for a
single hotel, preserving the order in which reviews were added.
⦁ Maintaining User Preferences: Store a list of user preferences or recently
viewed items in the order they were accessed.
B. Set:- Set is an unordered collection that does not allow duplicate elements. It is
used when uniqueness is a key requirement.
# Usages in HostelWorld.com:-
⦁ Unique customer IDs: A Set can store customer IDs to ensure there are no
duplicates.
⦁ Cities with available hotels: To keep track of all cities where hotels are
available without duplicates.
⦁ Storing Unique Search Keywords: Collect unique search keywords entered by
users to analyze popular search terms.
C. Map :- Map is a collection that maps keys to values, with no duplicate keys
allowed. It is used for fast lookups based on unique keys.
# Usages in HostelWorld.com:-
⦁ Hotel details: A Map where the key is the hotel ID and the value is the hotel
object can quickly retrieve hotel information.
⦁ Booking history: A Map where the key is the customer ID and the value is a
list of bookings to quickly find all bookings made by a customer.
⦁ Customer information: A Map where the key is the customer ID and the
value is the customer object for fast access to customer details.
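As a hedged, self-contained sketch (hotel names and IDs are invented, not from any real codebase), the three collection types map to the usages above like this:

```java
import java.util.*;

public class CollectionsDemo {
    // List: search results keep their relevance order and may contain duplicates
    static List<String> searchResults() {
        return List.of("Sea View Inn", "City Lodge", "Sea View Inn");
    }

    // Set: unique cities with available hotels (duplicates collapse automatically)
    static Set<String> citiesWithHotels() {
        return new HashSet<>(List.of("Pune", "Goa", "Pune"));
    }

    // Map: hotel ID -> hotel name for fast keyed lookup
    static Map<Integer, String> hotelsById() {
        return Map.of(101, "Sea View Inn", 102, "City Lodge");
    }

    public static void main(String[] args) {
        System.out.println(searchResults());        // order and duplicates preserved
        System.out.println(citiesWithHotels());     // only unique cities remain
        System.out.println(hotelsById().get(101));  // direct lookup by key
    }
}
```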
2. Explain where you use HashMap and ConcurrentHashMap in your Project.
A. HashMap :- HashMap is best suited for scenarios where you need a fast, non-
thread-safe collection for storing key-value pairs. It is ideal for read-heavy
operations where concurrency is not a concern.
# Usages in HostelWorld.com:-
We can use HashMap to store static data like room types, hotel categories, or
payment statuses, which are read frequently but rarely modified.
APIs, hotel databases) concurrently to improve the performance of search
operations.
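A minimal sketch under invented names: a plain HashMap for read-mostly static data such as room types, and a ConcurrentHashMap populated safely by several threads standing in for concurrent fetches from different sources:

```java
import java.util.*;
import java.util.concurrent.*;

public class MapDemo {
    // Read-mostly static data: a plain HashMap is fine when writes are rare and single-threaded
    static final Map<String, String> ROOM_TYPES =
            new HashMap<>(Map.of("STD", "Standard", "DLX", "Deluxe"));

    // Aggregating results from several sources in parallel needs a thread-safe map
    static Map<String, Integer> aggregatePrices(List<String> sources) {
        ConcurrentHashMap<String, Integer> prices = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String source : sources) {
            // stand-in for a remote price lookup; each task writes concurrently
            pool.submit(() -> prices.put(source, source.length() * 100));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return prices;
    }

    public static void main(String[] args) {
        System.out.println(ROOM_TYPES.get("DLX"));
        System.out.println(aggregatePrices(List.of("partner-api", "hotel-db")));
    }
}
```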
1. Singleton Design Pattern
It ensures a class only has one instance, and provides a global point of access to it.
Static Factory Method: A crucial aspect of the Singleton pattern is the presence of
a static factory method. This method acts as a gateway, providing a global point of
access to the Singleton object. When someone requests an instance, this method
either creates a new instance (if none exists) or returns the existing instance to the
caller.
2. Declare all constructors as private, so that an object cannot be created from
outside the class using the new keyword.
3. Develop a static final factory method, which will return a new object only the
first time and the same object thereafter. Since we have only a private
constructor, we cannot use the new keyword from outside the class; we must
declare this method static so that it can be accessed directly using the class name.
Declare it final so that a child class has no option to override it and change the
default behavior.
4. Make your Singleton reflection-proof. Using the Reflection API, the private
constructor can be made accessible and invoked to create the object of that class.
To prevent this, declare an instance boolean variable initially holding true, and
change its value to false as soon as the constructor is called for the first time.
Then, whenever the constructor is called a second time, it should throw an
exception saying the object cannot be created multiple times. This approach also
removes the double-checking problem when multiple threads try to create the
object at the same time, which we will discuss later.
5. Make your factory method thread-safe, so that only one object is created even
if more than one thread calls this method simultaneously. Declare the whole
method as synchronized, or use a synchronized block; instead of synchronizing the
whole factory method, it is better to place only the condition-check part in a
synchronized block.
We still have a problem with the above approach: after the first call to
getInstance(), every subsequent call to getInstance() performs the
instance == null check, and it acquires the lock just to verify the condition, which
is not required. Acquiring and releasing locks is quite costly, and we should avoid
it as much as we can. To solve this problem we can use double-checked locking
(two null checks) for the condition.
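The double-checked locking described above can be sketched as follows (the class name AppConfig is illustrative):

```java
// Minimal double-checked-locking Singleton sketch.
public class AppConfig {
    // volatile guarantees a fully constructed instance is visible to all threads
    private static volatile AppConfig instance;

    // private constructor: no instantiation from outside with new
    private AppConfig() { }

    public static AppConfig getInstance() {
        if (instance == null) {                  // 1st check: fast path, no lock taken
            synchronized (AppConfig.class) {
                if (instance == null) {          // 2nd check: only one thread creates
                    instance = new AppConfig();
                }
            }
        }
        return instance;
    }
}
```

After the instance exists, callers skip the synchronized block entirely, avoiding the lock cost on every call.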
Note: If you have used the reflection-proof logic, then there is no need to worry
about the second null check, because when the constructor is called for the
second time it will throw InstantiationError.
6. Prevent your Singleton object from deserialization. If you need to send your
singleton object across the network, your Singleton class must implement the
Serializable interface. The problem with this approach is that the object can be
deserialized any number of times, and each deserialization creates a brand-new
object, which violates the Singleton Design Pattern. To prevent multiple objects
being created during deserialization, override readResolve() and return the same
object. readResolve() is called internally during the deserialization process and is
used to replace the deserialized object with one of your choice.
Note: Ignore this step if your class does not implement the Serializable interface,
directly or indirectly. Indirectly means that no superclass or superinterface has
implemented/extended Serializable.
7. Prevent your Singleton object from being cloned. If your class is a direct child
of Object, then it is best not to implement the Cloneable interface, as there is no
point in cloning a singleton to produce duplicate objects; the two ideas are
opposites. However, if your class is the child of some other class or interface that
has implemented/extended Cloneable, then it is possible that somebody may
clone your singleton class, thereby creating many objects. We must prevent this
as well: override clone() in your singleton class and return the same old object, or
throw CloneNotSupportedException.
}
// Rule 6: Prevent deserialization
protected Object readResolve() {
    return getInstance();
}

// Rule 7: Prevent cloning
@Override
protected Object clone() throws CloneNotSupportedException {
    throw new CloneNotSupportedException("Cloning is not allowed for Singleton");
}
}
Advantages:
⦁ Single Instance ensures that only one instance of the class exists throughout
the application's lifetime.
⦁ Global Access provides a centralized point for accessing the instance,
facilitating easy communication and resource sharing.
Disadvantages:
⦁ Global State: Can introduce global state, affecting testability.
⦁ Limited Extensibility: Hard to subclass or mock for testing.
⦁ Violates Single Responsibility Principle: Combines instance management
with class logic.
Examples:
⦁ Logging: Centralized logging across the application.
⦁ Database Connection Pool: Managing shared database connections.
⦁ Caching: Maintaining a single cache instance.
⦁ Configuration Management: Global application settings.
⦁ Thread Pools: Managing a limited set of worker threads.
⦁ No Discount
Instead of using multiple if-else conditions, we can use the Strategy Pattern.
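A minimal Strategy sketch for the discount rules (strategy names and rates are illustrative): each rule becomes its own strategy object, and the checkout code stays free of if-else chains:

```java
// Strategy interface: one method per pricing rule.
interface DiscountStrategy {
    double apply(double amount);
}

// Concrete strategy: no discount applied.
class NoDiscount implements DiscountStrategy {
    public double apply(double amount) { return amount; }
}

// Concrete strategy: illustrative 10% festival discount.
class FestivalDiscount implements DiscountStrategy {
    public double apply(double amount) { return amount * 0.90; }
}

public class Checkout {
    private final DiscountStrategy strategy;

    // The strategy is injected; Checkout never branches on discount type.
    public Checkout(DiscountStrategy strategy) { this.strategy = strategy; }

    public double total(double amount) { return strategy.apply(amount); }

    public static void main(String[] args) {
        System.out.println(new Checkout(new NoDiscount()).total(1000.0));
        System.out.println(new Checkout(new FestivalDiscount()).total(1000.0));
    }
}
```

Adding a new discount rule means adding one class, not editing an existing if-else ladder.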
The Saga Pattern is a distributed transaction management pattern that ensures
data consistency across multiple microservices. Instead of a single database
transaction, a saga represents a sequence of steps, where each step is either
committed or compensated (rolled back) if something fails.
3. Inventory Service listens to booking-created, checks room availability, and
publishes either:
✅ room-available → If rooms are available, the process continues.
❌ room-not-available → Booking is canceled.
4. Payment Service listens to booking-created, processes the payment, and
publishes either:
✅ payment-processed → If successful, the process continues.
❌ payment-failed → The refund process is triggered.
5. If payment is successful, Inventory Service reserves a room (room-reserved
event).
6. Loyalty Service listens for room-reserved and adds customer reward points
(loyalty-points-added).
7. Notification Service sends a confirmation message (booking-confirmed).
8. Audit Service logs all events.
9. If payment fails, the Refund Service handles refunds (refund-processed).
Advantages of Choreography-Based Saga
⦁ Decentralized Coordination : No single point of failure
⦁ Scalability : Each microservice operates independently
⦁ Fault Tolerance : If one service fails, others continue processing
⦁ Loose Coupling : Microservices only communicate via events
⦁ Resilience : Compensating transactions can reverse failed processes
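The happy-path flow above can be sketched with a tiny in-memory event bus (a real system would use a message broker such as Kafka; the service wiring here is a simplified stand-in using the event names from the text):

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal in-memory event bus for sketching choreography.
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String payload) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }
}

public class SagaDemo {
    // Wires three illustrative services and runs the happy path for one booking.
    static List<String> happyPath(String bookingId) {
        EventBus bus = new EventBus();
        List<String> log = new ArrayList<>();
        // Payment Service: reacts to booking-created
        bus.subscribe("booking-created", b -> {
            log.add("payment-processed");
            bus.publish("payment-processed", b);
        });
        // Inventory Service: reserves the room once payment succeeds
        bus.subscribe("payment-processed", b -> {
            log.add("room-reserved");
            bus.publish("room-reserved", b);
        });
        // Loyalty Service: adds reward points after the room is reserved
        bus.subscribe("room-reserved", b -> log.add("loyalty-points-added"));
        bus.publish("booking-created", bookingId);
        return log;
    }

    public static void main(String[] args) {
        System.out.println(happyPath("booking-42"));
    }
}
```

No central coordinator exists: each "service" only subscribes to the events it cares about, which is exactly the decentralized coordination the advantages list describes.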
KT: Knowledge Transfer
EOD: End Of Day
DND: Do Not Disturb
SME: Subject Matter Expert
POC: Proof Of Concept (or) Point Of Contact (context-specific)
QQ: Quick Question
BRB: Be Right Back
IMO: In My Opinion
IDK: I Don’t Know
OOTB: Out Of The Box
KPI: Key Performance Indicator
FYR: For Your Reference
WIP: Work In Progress
TBD: To Be Determined
TBA: To Be Announced
TL;DR: Too Long; Didn’t Read
ETA: Estimated Time of Arrival
AFK: Away From Keyboard