Business Intelligence
Business intelligence can be described as a set of models and methodologies that exploit the available data to generate information and knowledge, which are essential for supporting complex decision-making processes. A business intelligence analysis unfolds in four phases:
1. Analysis Phase
Problem Identification: The first step is recognizing and clearly defining the problem at hand. Decision-makers
form a mental representation of the issue by identifying critical factors.
Investigative Flexibility: Business intelligence methodologies, like multidimensional data cubes (discussed in
Chapter 3), provide the tools to explore different investigative paths. Decision-makers can flexibly adjust their
hypotheses as new insights emerge.
Interactive Exploration: By using interactive tools, decision-makers can ask questions and get quick responses,
refining their understanding in a dynamic and iterative way.
2. Insight Phase
Deep Understanding: In this phase, decision-makers go beyond surface-level observations to gain a deeper
understanding of the problem, often at a causal level. For example, after identifying a trend (e.g., customers
discontinuing an insurance policy), decision-makers look for common characteristics or profiles of the affected
group.
Knowledge Extraction:
o Insight may stem from intuition and experience, using unstructured data and personal knowledge.
o Alternatively, structured data can be analyzed using inductive learning models to derive patterns and
trends.
3. Decision Phase
Actionable Knowledge: The insights gained from the previous phase are translated into decisions. These
decisions, enabled by the faster analysis process provided by BI tools, lead to timely and effective actions.
Reduction of Cycle Time: The availability of business intelligence tools accelerates the entire cycle, allowing
organizations to reduce the time between analysis, decision, and action. This enhances the quality of the
decision-making process and aligns better with organizational strategy.
4. Evaluation Phase
Performance Measurement: After actions are implemented, the final phase involves assessing the effectiveness
of the decisions. Performance metrics should not focus solely on financial outcomes but also include other key
performance indicators (KPIs) relevant to different departments.
Comprehensive Evaluation: Advanced methodologies enable comprehensive performance evaluations, providing a holistic view of the organization's success across various dimensions.
Enabling Factors in Business Intelligence Projects
1. Technologies
Advanced Hardware and Software:
o The growth in computing capabilities, with microprocessors improving by an average of 100% every 18
months, has made advanced BI systems feasible.
o Reduced costs of technology enable the implementation of inductive learning methods and
optimization models, ensuring reasonable processing times.
Data Visualization:
o Cutting-edge graphical visualization techniques, including real-time animations, enhance data
representation, making it easier for decision-makers to understand complex information.
Mass Storage Capacity:
o The exponential increase in mass storage capabilities at decreasing costs allows organizations to store
terabytes of data, which is essential for effective BI systems.
Network Connectivity:
o The establishment of Extranets and Intranets facilitates the flow of information and knowledge
extracted from BI systems, improving communication and data accessibility across departments.
Integration of Technologies:
o The ability to integrate hardware and software from different suppliers or developed internally is crucial
for the effective deployment of data analysis tools.
2. Analytics
Role of Mathematical Models:
o Mathematical models and analytical methodologies are vital for enhancing information and extracting
knowledge from data within organizations.
o While data visualization supports decision-making, it is considered a passive form of support.
Active Analytical Models:
o To provide more substantial support, it is essential to implement advanced models of inductive learning
and optimization techniques. These allow organizations to actively analyze data, derive insights, and
improve the decision-making process.
3. Human Resources
Organizational Competencies:
o The success of a BI system depends significantly on the skills and competencies of the individuals within
the organization. This collective knowledge forms the organizational culture.
Knowledge Workers' Impact:
o Knowledge workers’ abilities to acquire information and translate it into practical actions greatly
influence the quality of decision-making.
o Even with an advanced BI system, the effectiveness of analyses and interpretation of results depends on
the skills and creativity of the human resources involved.
Mental Agility and Adaptability:
o Companies that foster an environment where knowledge workers possess mental agility and are open to
changing their decision-making styles will have a competitive advantage.
o The willingness to embrace change and utilize analytical tools effectively can lead to innovative solutions
and successful action plans.
Development of a Business Intelligence System
2. Design
1. Architecture Planning:
o Create a flexible plan for future growth and changes.
2. Infrastructure Assessment:
o Review existing systems to see what needs upgrading or building.
3. Decision-Making Processes:
o Analyze the processes BI will support to understand data needs.
4. Project Planning:
o Set up a clear plan with:
Phases
Priorities
Timeline & Costs
Roles & Resources
3. Planning
Defining Functions:
o This stage includes a detailed definition of the functions of the BI system, ensuring all requirements are
captured.
Data Assessment:
o Existing data is evaluated alongside potential external data sources to identify what can be integrated
into the BI architecture.
Designing Information Structures:
o The information structures, including a central data warehouse and possibly satellite data marts, are
designed based on the assessed data.
Mathematical Models:
o The mathematical models necessary for data analysis are defined, ensuring all required data is available
and that algorithms are efficient.
Prototyping:
o A low-cost, limited-capability system prototype is created to identify discrepancies between actual
needs and project specifications early in the development process.
Data Warehouse
Purpose and Architecture: A data warehouse serves as a semantically consistent data store for strategic
decision-making. It integrates data from multiple sources to support structured and ad hoc queries, analytical
reporting, and overall decision-making processes.
Data Warehousing Process: The construction and utilization of data warehouses involve data cleaning,
integration, and consolidation. Decision support technologies enable knowledge workers (e.g., managers,
analysts) to efficiently access data and make informed decisions.
Applications of Data Warehousing
1. Customer Focus: Analyze customer preferences and behavior.
2. Product Management: Track product sales and performance over time.
3. Operational Analysis: Identify profit sources in operations.
4. Customer Relationship Management: Adapt strategies based on customer data.
Three-Tier Data Warehouse Architecture:
o Bottom Tier:
Warehouse Database Server: Typically a relational database system that serves as the core data
repository.
Back-End Tools and Utilities: These components extract data from operational databases and
external sources (e.g., customer profiles). They perform essential functions such as data
extraction, cleaning, and transformation to unify data from different sources.
Loading and Refresh Functions: These processes update the data warehouse to ensure that it
contains current information.
Gateways: Application program interfaces (APIs) that facilitate data extraction, allowing client programs to execute SQL code on the server (a client-side sketch follows this list). Examples include:
ODBC (Open Database Connectivity)
OLEDB (Object Linking and Embedding Database) by Microsoft
JDBC (Java Database Connectivity)
Metadata Repository: This component stores critical information about the data warehouse and
its contents, providing context and facilitating data management.
o Middle Tier:
OLAP Server: This tier is responsible for data analysis and is typically implemented using one of
two models:
Relational OLAP (ROLAP): An extended relational DBMS that maps operations on
multidimensional data to standard relational operations.
Multidimensional OLAP (MOLAP): A special-purpose server designed to directly
implement multidimensional data and operations, optimizing analytical performance.
OLAP Servers Discussion: Further details about OLAP servers are covered in Section 4.4.4,
highlighting their role in data analysis.
o Top Tier:
Front-End Client Layer: This layer includes various tools that facilitate user interaction with the
data warehouse. It typically contains:
Query and Reporting Tools: Allow users to retrieve and present data in a meaningful
way.
Analysis Tools: Tools for conducting in-depth analysis of the data.
Data Mining Tools: Techniques for exploring data patterns, trends, and predictions (e.g.,
trend analysis, forecasting).
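To ground the gateway idea above, here is a minimal client-side sketch in Python. It assumes the third-party pyodbc package and a pre-configured data source name (warehouse_dsn); the credentials and the sales_fact table are hypothetical, so treat this as an illustration rather than a prescribed setup.

```python
# Sketch of a client program executing SQL on the warehouse server
# through an ODBC gateway. Assumes pyodbc is installed and a DSN
# named "warehouse_dsn" exists; table and credentials are hypothetical.
import pyodbc

# The gateway (ODBC driver + DSN) hides the server-specific protocol;
# the client only submits standard SQL.
conn = pyodbc.connect("DSN=warehouse_dsn;UID=analyst;PWD=secret")
cursor = conn.cursor()

cursor.execute(
    "SELECT region, SUM(units_sold) AS total_units "
    "FROM sales_fact GROUP BY region"
)
for region, total_units in cursor.fetchall():
    print(region, total_units)

conn.close()
```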
Data Warehouse Models: Enterprise Warehouse, Data Mart, and Virtual Warehouse
Overview: There are three primary data warehouse models from an architectural perspective: the enterprise
warehouse, the data mart, and the virtual warehouse. Each model serves different organizational needs and
provides varying levels of data integration and accessibility.
Models:
o Enterprise Warehouse:
Definition: An enterprise warehouse encompasses all information across the organization,
providing comprehensive corporate-wide data integration from multiple operational systems and
external information sources.
Scope: Cross-functional, containing both detailed and summarized data.
Size: Ranges from a few gigabytes to terabytes or more.
Implementation: Can be built on traditional mainframes, computer superservers, or parallel
architecture platforms.
Design Process: Requires extensive business modeling and can take years to design and
construct.
o Data Mart:
Definition: A data mart is a subset of corporate-wide data tailored for a specific group of users,
focusing on selected subjects relevant to that group.
Examples: A marketing data mart may concentrate on data related to customers, items, and
sales.
Data Characteristics: Typically contains summarized data, making it easier to analyze.
Implementation: Often deployed on low-cost departmental servers using Unix/Linux or
Windows, with implementation cycles measured in weeks rather than months or years.
Types:
Independent Data Marts: Sourced from data captured in operational systems, external
providers, or locally generated data within specific departments.
Dependent Data Marts: Directly sourced from enterprise data warehouses.
o Virtual Warehouse:
Definition: A virtual warehouse consists of a set of views over operational databases that
provide an efficient means of querying data.
Materialization: Only select summary views may be materialized for effective query processing.
Construction: Easy to build but demands excess capacity on operational database servers.
Development Approaches:
o Top-Down Approach:
Advantages: Provides a systematic solution, minimizing integration issues.
Disadvantages: High cost, lengthy development time, and limited flexibility due to the challenge
of establishing a consistent data model across the organization.
o Bottom-Up Approach:
Advantages: Offers flexibility, low cost, and rapid return on investment.
Disadvantages: May result in challenges when integrating various independent data marts into a
cohesive enterprise data warehouse.
Recommended Development Method:
o Incremental and Evolutionary Implementation:
Initial Step: Define a high-level corporate data model within a short timeframe (one or two
months) to ensure a consistent, integrated view of data across various subjects.
Second Step: Implement independent data marts in parallel with the enterprise warehouse
based on the established corporate data model.
Third Step: Construct distributed data marts to integrate various data marts using hub servers.
Final Step: Build a multitier data warehouse where the enterprise warehouse acts as the sole
custodian of all warehouse data, distributing it to various dependent data marts.
ETL Tools
Overview: ETL (Extract, Transform, Load) tools are software applications designed to automate three primary
functions: extraction, transformation, and loading of data into a data warehouse. These tools play a crucial role
in data warehousing by ensuring that the data is efficiently collected, cleaned, and made ready for analysis.
Functions:
o Extraction:
Definition: The first phase involves extracting data from various internal and external sources.
Initial vs. Incremental Extraction:
Initial Extraction: Involves populating the empty data warehouse with all historical data.
Incremental Extraction: Involves updating the data warehouse with new data as it
becomes available over time.
Data Selection: The selection of data for import is guided by the data warehouse design, which
is influenced by the information requirements of business intelligence analyses and decision
support systems relevant to specific application domains.
o Transformation:
Goal: The purpose of the transformation phase is to enhance the quality of the extracted data by
addressing inconsistencies, inaccuracies, and missing values.
Common Issues Addressed:
Inconsistencies: Corrections made for discrepancies between values recorded in
different attributes that share the same meaning.
Data Duplication: Removal of duplicate records.
Missing Data: Identification and handling of missing data points.
Inadmissible Values: Addressing the presence of unacceptable or invalid values.
Cleaning Process:
Automatic Rules: Predefined rules are applied to correct recurring errors.
Dictionaries: Valid term dictionaries are used to replace incorrect terms based on
similarity levels.
Additional Data Conversions:
Ensures homogeneity and integration among various data sources.
Involves data aggregation and consolidation to generate summaries, improving response
times for subsequent queries and analyses.
o Loading:
Definition: The final phase where extracted and transformed data is loaded into the tables of the
data warehouse.
Purpose: Makes the data readily accessible to analysts and decision support applications,
facilitating data-driven decision-making.
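As a rough end-to-end illustration of the three phases, the sketch below uses pandas and the standard-library sqlite3 module; the file name, column names, threshold date, and cleaning rules are all illustrative assumptions rather than a prescribed pipeline.

```python
# Minimal ETL sketch: extract from a CSV source, transform (clean),
# and load into a SQLite table standing in for a warehouse table.
# File, table, and column names are illustrative assumptions.
import sqlite3
import pandas as pd

# --- Extract: read raw records from an operational export.
raw = pd.read_csv("orders_export.csv", parse_dates=["order_date"])

# Incremental extraction: keep only rows newer than the last load.
last_load = pd.Timestamp("2024-01-01")
raw = raw[raw["order_date"] > last_load]

# --- Transform: address duplication, missing data, inadmissible values.
clean = raw.drop_duplicates(subset=["order_id"])   # data duplication
clean = clean.dropna(subset=["customer_id"])       # missing data
clean = clean[clean["amount"] >= 0]                # inadmissible values
# Dictionary-based correction of recurring inconsistencies.
clean["country"] = clean["country"].replace({"U.S.": "USA", "U.S.A.": "USA"})

# --- Load: append the cleaned rows into the warehouse table.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("orders_fact", conn, if_exists="append", index=False)
```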
Metadata Repository
Definition: Metadata refers to data about data, serving as a crucial component in a data warehouse. It defines
and describes warehouse objects and their characteristics, providing context and meaning to the data stored
within the warehouse.
Role in Data Warehousing:
o Storage Location: Metadata is maintained within the metadata repository, typically located in the
bottom tier of the data warehousing architecture.
o Creation: Metadata is generated for various aspects of the data warehouse, including:
Data names and definitions.
Timestamps for extracted data.
Source information for extracted data.
Missing fields added during data cleaning or integration processes.
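To make these aspects tangible, here is a small sketch that models one metadata entry as a Python dataclass; the field names are assumptions chosen to mirror the items just listed, not a standard repository schema.

```python
# Illustrative sketch of a single metadata-repository entry; field
# names are assumptions mirroring the aspects listed above.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WarehouseMetadata:
    name: str                # data name
    definition: str          # data definition
    extracted_at: datetime   # timestamp of extraction
    source: str              # source system of the extracted data
    fields_added: list[str] = field(default_factory=list)  # added during cleaning/integration

entry = WarehouseMetadata(
    name="sales_fact.units_sold",
    definition="Number of units sold per transaction",
    extracted_at=datetime(2024, 1, 15, 3, 0),
    source="orders OLTP database",
    fields_added=["province_or_state"],
)
print(entry)
```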
Contents of a Metadata Repository:
1. Data Warehouse Structure:
Description: Includes warehouse schema, views, dimensions, hierarchies, derived data
definitions, and details about data mart locations and their contents.
2. Operational Metadata:
Data Lineage: Tracks the history of migrated data and the sequence of transformations applied.
Currency of Data: Indicates whether data is active, archived, or purged.
Monitoring Information: Contains warehouse usage statistics, error reports, and audit trails.
3. Summarization Algorithms:
Definition Algorithms: Includes measures and dimension definitions, data granularity, partitions,
subject areas, aggregation, summarization, and predefined queries and reports.
4. Mapping Information:
Operational to Warehouse Mapping: Covers source databases and their contents, gateway
descriptions, data partitions, data extraction rules, cleaning and transformation rules, data
refresh and purging rules, and security protocols (user authorization and access control).
5. System Performance Data:
Indices and Profiles: Enhance data access and retrieval performance, along with rules for the
timing and scheduling of refresh, update, and replication cycles.
6. Business Metadata:
Terms and Definitions: Includes business-specific terms, data ownership information, and
charging policies.
Levels of Data Summarization: A data warehouse contains various levels of data summarization, of which
metadata is one aspect. Other levels include:
o Current detailed data (usually stored on disk).
o Older detailed data (often archived on tertiary storage).
o Lightly summarized data.
o Highly summarized data (may or may not be physically stored).
Importance of Metadata:
o Directory Function: Metadata serves as a directory for decision support system analysts, helping them
locate the contents of the data warehouse.
o Data Mapping Guide: Provides guidance for mapping data when transforming it from the operational
environment to the data warehouse environment.
o Summarization Guidance: Acts as a reference for the algorithms used for summarizing data at different
levels (e.g., from current detailed data to lightly summarized data).
o Persistent Storage: Metadata should be stored and managed persistently, typically on disk, to ensure its
availability and reliability.
OLAP stands for Online Analytical Processing. It is a software technology that allows users to analyze information from multiple database systems at the same time. It is based on a multidimensional data model and allows users to query multidimensional data (e.g., Delhi -> 2018 -> Sales data). OLAP databases are divided into one or more cubes, known as hypercubes.
OLAP operations:
There are five basic analytical operations that can be performed on an OLAP cube:
1. Drill down: In the drill-down operation, less detailed data is converted into more detailed data. It can be done by stepping down a concept hierarchy for a dimension (e.g., Country -> City) or by introducing a new dimension.
2. Roll up: It is just the opposite of the drill-down operation. It performs aggregation on the OLAP cube. It can be done by climbing up a concept hierarchy for a dimension or by dimension reduction. In the cube given in the overview section, the roll-up operation is performed by climbing up the concept hierarchy of the Location dimension (City -> Country).
3. Dice: It selects a sub-cube from the OLAP cube by choosing two or more dimensions and restricting each to selected values (e.g., particular locations and time periods).
4. Slice: It selects a single dimension from the OLAP cube, which results in a new sub-cube; for example, fixing the Time dimension to a single quarter.
5. Pivot: It is also known as the rotation operation, as it rotates the current view to get a new view of the representation. In the sub-cube obtained after the slice operation, performing a pivot operation gives a new view of it.
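The five operations can be imitated on a toy cube held in a pandas DataFrame, as in the hedged sketch below; pandas stands in for a real OLAP server, and the sales figures are invented.

```python
# Imitating OLAP operations on a toy cube with pandas; the figures
# are made-up illustrative values, not real sales data.
import pandas as pd

cube = pd.DataFrame({
    "city":    ["Delhi", "Delhi", "Mumbai", "Mumbai"],
    "country": ["India", "India", "India",  "India"],
    "quarter": ["Q1",    "Q2",    "Q1",     "Q2"],
    "item":    ["car",   "bus",   "car",    "bus"],
    "sales":   [120,     80,      150,      90],
})

# Roll up: climb the Location hierarchy (city -> country).
rollup = cube.groupby(["country", "quarter"])["sales"].sum()

# Drill down: move to the more detailed city-and-item level.
drilldown = cube.groupby(["city", "quarter", "item"])["sales"].sum()

# Slice: fix a single dimension (quarter = Q1).
slice_q1 = cube[cube["quarter"] == "Q1"]

# Dice: restrict two or more dimensions at once.
dice = cube[cube["city"].isin(["Delhi"]) & cube["item"].isin(["car", "bus"])]

# Pivot: rotate the view, e.g. cities as rows and quarters as columns.
pivot = cube.pivot_table(values="sales", index="city", columns="quarter", aggfunc="sum")
print(pivot)
```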
A schema is a logical description of the entire database. It includes the name and description of records of all record types, including all associated data items and aggregates. Much like a database, a data warehouse also requires a schema. A database uses the relational model, while a data warehouse uses the Star, Snowflake, or Fact Constellation schema. In this chapter, we will discuss the schemas used in a data warehouse.
Star Schema
Each dimension in a star schema is represented with only one dimension table.
For example, consider the sales data of a company with respect to four dimensions, namely time, item, branch, and location.
There is a fact table at the center. It contains the keys to each of the four dimensions.
The fact table also contains the measures, namely dollars sold and units sold.
Note − Each dimension has only one dimension table, and each table holds a set of attributes. For example, the location dimension table contains the attribute set {location_key, street, city, province_or_state, country}. This constraint may cause data redundancy: the cities "Vancouver" and "Victoria" are both in the Canadian province of British Columbia, so the entries for these cities duplicate values along the attributes province_or_state and country.
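A minimal runnable sketch of a star-schema query follows, using Python's sqlite3 module so it is self-contained; the column lists are abbreviated assumptions based on the description above. Note how the two British Columbia rows repeat the province and country values, illustrating the redundancy mentioned in the note.

```python
# Star-schema sketch: a central fact table keyed to one table per
# dimension. Run via sqlite3 so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE location_dim (location_key INTEGER PRIMARY KEY,
                           city TEXT, province_or_state TEXT, country TEXT);
CREATE TABLE time_dim     (time_key INTEGER PRIMARY KEY, year INTEGER, quarter TEXT);
CREATE TABLE sales_fact   (time_key INTEGER, location_key INTEGER,
                           dollars_sold REAL, units_sold INTEGER);

INSERT INTO location_dim VALUES (1, 'Vancouver', 'British Columbia', 'Canada'),
                                (2, 'Victoria',  'British Columbia', 'Canada');
INSERT INTO time_dim VALUES (1, 2024, 'Q1');
INSERT INTO sales_fact VALUES (1, 1, 1000.0, 10), (1, 2, 500.0, 5);
""")

# A typical star join: the fact table joined to its dimension tables.
for row in conn.execute("""
    SELECT l.city, t.quarter, SUM(f.dollars_sold)
    FROM sales_fact f
    JOIN location_dim l ON f.location_key = l.location_key
    JOIN time_dim t     ON f.time_key = t.time_key
    GROUP BY l.city, t.quarter"""):
    print(row)
```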
Snowflake Schema
Unlike the star schema, the dimension tables in a snowflake schema are normalized. For example, the item dimension table of the star schema is normalized and split into two dimension tables, namely the item and supplier tables.
Now the item dimension table contains the attributes item_key, item_name, type, brand, and supplier_key.
The supplier key is linked to the supplier dimension table. The supplier dimension table contains the attributes
supplier_key and supplier_type.
Note − Due to normalization in the snowflake schema, redundancy is reduced; the schema therefore becomes easier to maintain and saves storage space.
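Continuing the sketch above, snowflaking the item dimension moves the supplier attributes into their own table at the cost of one extra join; the rows are again invented.

```python
# Snowflake sketch: the item dimension is normalized, so supplier
# attributes live in a separate table reached through supplier_key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE supplier_dim (supplier_key INTEGER PRIMARY KEY, supplier_type TEXT);
CREATE TABLE item_dim     (item_key INTEGER PRIMARY KEY, item_name TEXT,
                           type TEXT, brand TEXT, supplier_key INTEGER);
INSERT INTO supplier_dim VALUES (1, 'wholesale');
INSERT INTO item_dim VALUES (10, 'sedan', 'car', 'Acme', 1);
""")

# Reaching supplier attributes now requires an extra join.
for row in conn.execute("""
    SELECT i.item_name, s.supplier_type
    FROM item_dim i JOIN supplier_dim s ON i.supplier_key = s.supplier_key"""):
    print(row)
```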
Fact Constellation Schema
A fact constellation has multiple fact tables. It is also known as a galaxy schema.
For example, consider two fact tables, namely sales and shipping.
The shipping fact table has five dimensions, namely item_key, time_key, shipper_key, from_location, and to_location.
The shipping fact table also contains two measures, namely dollars_cost and units_shipped.
It is also possible to share dimension tables between fact tables. For example, the time, item, and location dimension tables are shared between the sales and shipping fact tables.
Dimensional Analysis
1. Overview of Multidimensional Analysis:
o Multidimensional analysis enhances the pivot table functionality found in desktop spreadsheet tools.
o OLAP tools provide the capability to "slice and dice" relationships between different variables across
various levels of their hierarchies.
2. Examples of Analysis:
o Analysts can review data in various dimensions:
Item sales by time period by region: Analyzing sales data for different items over specified time
periods and across various regions.
Product availability by product classification by supplier by location: Examining product
availability categorized by classification, supplier, and location.
3. Data Viewing and Grouping:
o The use of the term "by" indicates pivot points for viewing data:
Data can be grouped by various hierarchies, allowing for flexible analysis.
For example, data can be viewed by item classification and then by time period and region, or
conversely, by region and then by time period.
4. Drill Up and Down Functionality:
o OLAP enables analysts to drill up and down along hierarchical dimensions to reveal hidden relationships.
o This functionality allows users to explore different levels within a dimension, enhancing the depth of
analysis.
5. OLAP Cube Structure:
o OLAP queries are organized around partial aggregations along different dimensions, typically structured
in an OLAP cube.
o This cube structure allows for rapid response to queries that involve slicing or dicing the data.
6. Slicing and Dicing:
o Slicing: Fixes one dimension's value while providing data for all other dimensions. For example, fixing the
region (Northeast) while reviewing item sales grouped by classification and time period.
o Dicing: Involves subselecting components of one or more dimensions, such as choosing specific item
classifications and presenting them by time period and location.
7. Drill-Through Capability:
o Users can drill through (or drill down) along different levels of a dimension's hierarchy (a sketch of this follows the list).
o For instance, after selecting the Northeast region, analysts can review sales data at a more granular level, such as by each state in the Northeast.
8. Presentation Layer in OLAP:
o OLAP environments align data along chosen dimensions and provide a palette for visualization.
o Users can pivot dimensions around each other and choose how to present data:
Data can be displayed in a grid format similar to standard reports.
Alternatively, data can be visualized through graphical components to enhance understanding.
9. Flexibility for Users:
o The slicing, dicing, and drill-through features of OLAP provide significant flexibility for:
Power users: Engaging in detailed data discovery.
Business users: Analyzing data to identify anomalies or search for patterns.
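As a loose illustration of slicing followed by a drill-through along the Location hierarchy (region -> state), consider the following pandas sketch; the regions, states, and figures are invented.

```python
# Slicing on one dimension, then drilling through to a finer level
# of the Location hierarchy (region -> state). Values are invented.
import pandas as pd

sales = pd.DataFrame({
    "region": ["Northeast", "Northeast", "South"],
    "state":  ["NY",        "MA",        "TX"],
    "period": ["2024-Q1",   "2024-Q1",   "2024-Q1"],
    "units":  [100,         60,          80],
})

# Slice: fix the region dimension at 'Northeast'.
northeast = sales[sales["region"] == "Northeast"]

# Drill through: review the slice at the more granular state level.
print(northeast.groupby(["state", "period"])["units"].sum())
```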
Alerts/Notifications
1. Purpose of Alerts:
o In many cases, users reviewing standard reports are only interested in one or two key variables.
o The focus is often on verifying if a specific value is within an expected range or determining if it is outside
that range and requires action.
2. Example Scenario:
o A national call center manager may regularly check average hold times by region:
If hold times are within the acceptable range (e.g., 30–60 seconds), no action is needed.
If hold times exceed a threshold (e.g., over 60 seconds), the manager must take action by contacting the regional manager to investigate the issue (a threshold check of this kind is sketched after this list).
3. Triggered Action Based on Specific Variables:
o In many business cases, action is only required when certain variables reach specific values.
o Instead of reviewing entire reports, users only need to be notified when critical thresholds are breached,
making full report reviews unnecessary.
4. Alerts as an Alternative to Full Reports:
o Alerts or notifications serve as an alternative to full reports by delivering only the actionable
knowledge.
o This approach focuses solely on critical variables and when they require action, allowing other
information to be ignored unless needed.
5. Suitability for Operational Environments:
o Alerts are particularly useful in operational environments, where timely information delivery is crucial.
o Notifications can be delivered through various methods, including:
Email
Instant messages
Direct messages through internal systems or social media platforms
Smartphones or other mobile devices
Radio transmissions
Visual cues, such as:
Scrolling message boards
Light banks
Visual consoles
6. Context-Driven Notifications:
o The method of notification can provide context; for example:
A flashing amber light not only delivers the message but also acts as the medium for the alert.
This approach enhances the delivery of critical information and minimizes the need for manually
inspecting reports.
7. Enabling Rapid Actions:
o By simplifying the delivery of critical information, alerts reduce the effort required to inspect key
variables.
o This method enables quicker responses to potential issues, allowing businesses to take rapid action
when necessary.
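Here is a minimal sketch of such a threshold alert for the call-center scenario above; the hold-time figures, the 60-second threshold, and the notify stand-in are illustrative assumptions.

```python
# Threshold alert sketch: notify only when a region's average hold
# time breaches the threshold. Data and channel are illustrative.
AVERAGE_HOLD_SECONDS = {"Northeast": 45, "South": 72, "West": 58}
THRESHOLD_SECONDS = 60

def notify(message: str) -> None:
    # Stand-in for email, instant message, or a visual console.
    print("ALERT:", message)

for region, hold in AVERAGE_HOLD_SECONDS.items():
    if hold > THRESHOLD_SECONDS:
        notify(f"{region}: average hold time {hold}s exceeds "
               f"{THRESHOLD_SECONDS}s; contact the regional manager.")
```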
Visualization: Charts, Graphs, Widgets
1. Importance of Presentation:
o While previous sections focused on delivering analytical results, effective presentation methods are
crucial for conveying messages and prompting appropriate actions.
o Different visualization methods can enhance the comparison of analytical results.
2. Types of Visualizations:
o Line Chart:
Displays points connected by line segments on a grid.
Useful for showing trends over time (e.g., gas price changes over 36 months).
o Bar Chart:
Represents values with rectangles whose lengths correspond to the values.
Effective for comparing different values across contexts (e.g., average life expectancy in different
countries).
o Pie Chart:
A circular chart divided into sectors representing percentages of a whole.
Good for illustrating distributions within a single domain (e.g., owner-occupied homes by
ethnicity).
o Scatter Plot:
Graphs points to show relationships between two variables (independent and dependent).
Helps identify correlations (e.g., age vs. weight).
o Bubble Chart:
A variation of scatter plots where the size of the bubble represents a third variable.
Useful for displaying multi-variable relationships (e.g., sales volume by items sold with market
share represented by bubble size).
o Gauge:
Indicates magnitude within critical value ranges.
Ideal for conveying the status of critical variables (e.g., fuel gauge in a car).
o Directional Indicators (Arrows):
Used for comparing current values to previous ones, indicating improvement, stability, or
decline.
Often utilized in stock price presentations.
o Heat Map:
Tiles a two-dimensional space with varying sizes and colors to display multiple values.
Highlights specific data points effectively (e.g., clicks on webpage links).
o Spider or Radar Chart:
Displays multiple variable values across dimensions, with each axis representing a variable.
Facilitates quick comparisons between different observations (e.g., product characteristics).
o Sparkline:
Small line graphs that lack axes and coordinates.
Useful for relative trend comparisons (e.g., stock price trends across companies).
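To ground two of these chart types, the sketch below draws a line chart and a bar chart with matplotlib (one plotting library among many); the sample values are invented purely for illustration.

```python
# Drawing two of the chart types above with matplotlib; the sample
# values are invented purely for illustration.
import matplotlib.pyplot as plt

fig, (line_ax, bar_ax) = plt.subplots(1, 2, figsize=(8, 3))

# Line chart: a trend over time (e.g., a price across months).
months = ["Jan", "Feb", "Mar", "Apr"]
prices = [3.10, 3.25, 3.40, 3.30]
line_ax.plot(months, prices, marker="o")
line_ax.set_title("Trend over time")

# Bar chart: comparing one variable across contexts.
countries = ["A", "B", "C"]
life_expectancy = [78, 82, 75]
bar_ax.bar(countries, life_expectancy)
bar_ax.set_title("Comparison across contexts")

plt.tight_layout()
plt.show()
```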
Bar Chart
Structure: A bar chart uses rectangles (bars) whose lengths correspond to the values being represented.
Purpose: Bar charts are effective for comparing different values of the same variable across various contexts.
Example: A chart illustrating the average life expectancy in years across different countries.
Visualization: The focus is on comparing the height or length of the bars to understand differences in values.
Pie Chart
Structure: A pie chart is represented as a circle divided into sectors, with each sector representing a percentage
of the whole.
Purpose: Pie charts are good for showing distributions of values across a single domain, highlighting the relative
size of parts to a whole.
Example: Displaying the relative percentages of owner-occupied homes by ethnicity within a Zip code area.
Visualization: The emphasis is on how each slice represents a percentage of the total, with all components
adding up to 100%.
Choose the right component
Don’t let the shiny graphics fool you into using a visual component that does not properly convey the intended result.
For example, line charts are good for depicting historical trends of the same variable over time, but bar charts may not be as good a choice for that purpose.
Understand the available real estate
The available screen space limits what can be displayed at one time, and this is referred to as screen "real estate."
Different delivery channels allow different amounts of real estate. A regular desktop screen affords more presentation area than a laptop screen, which in turn is larger than a portable tablet or smartphone.
Consider the channel and the consumer when employing visualization components, ensuring they fit within the available space yet still deliver actionable knowledge.
Maintain context
Recognize that the presentation of a value is subject to variant interpretations when there is no external
context defining its meaning.
For example, a dial-gauge can convey the variable’s magnitude but not whether the value is good, bad,
or indifferent.
Adding a red zone (for bad values) and a green zone (for good values) provides context to the displayed
magnitude.
Be consistent
When self-service dashboard development is in the hands of many data consumers, biases can lead to
varied ways of representing the same or similar ideas.
Consistent representations and selections of standard visualization graphics will help ensure consistent
interpretations across different users.
Keep it simple
Avoid inundating the presentation with fancy-looking graphics that don’t add value to the decision-
making process.
Often, the simpler the presentation, the more easily the content is conveyed.
Engage
Engage the user community to agree on standards and practices, and to create a guidebook.
Geographic Visualization
1. Layering Information
Combining Data Sets: Different data sets can be layered on a map, such as weather patterns combined with insurance risk areas, helping organizations assess potential risks effectively.
2. Interactive Exploration
Drill-Down Features: Users can interact with the map, clicking on specific areas to see more detailed
information. This helps analysts focus on specific regions and uncover insights that may not be apparent in
regular reports.
Dynamic Updates: When users select different data in related charts or tables, the map can automatically
update to reflect those changes, allowing for real-time analysis.
3. Risk Management
Assessing Hazards: Geographic visualization can show risk factors, like areas prone to natural disasters, helping
businesses adjust their strategies accordingly. For example, an insurance company can overlay hazard zones on a
map to evaluate potential risks.
Resource Allocation: By identifying high-risk areas, organizations can better allocate resources and manage risks
effectively.
4. Comparative Analysis
Heat Maps: Geographic visualizations can use heat maps to represent data intensity, such as customer activity or
sales volume in different areas, helping businesses identify where to focus their efforts.
Data Cross-Referencing: Combining maps with other data displays, like charts or tables, allows for easy
comparison and deeper insights into different aspects of the data.
5. Informed Decision-Making
Data-Driven Choices: Geographic visualization supports decision-making by providing clear insights based on
spatial data, leading to better strategies and operations. For example, a retail company might use geographic
data to decide where to open new stores based on customer density.
Effective Communication: Maps and geographic visuals make complex data easier to understand and share with
others, helping stakeholders grasp important insights quickly.
Integrated Analytics
Integrated analytics refers to the seamless incorporation of analytical results into operational activities, allowing users to benefit from Business Intelligence (BI) tools without needing extensive training. The key aspects and characteristics of integrated analytics are as follows:
Characteristics of Integrated Analytics
1. Distinct Performance Objectives:
o Each business process has specific goals that analytical results aim to achieve.
2. Decision Points:
o There are critical points within the process where decisions must be made by individuals or teams.
3. Impact of Information Absence:
o Lack of timely information can hinder the performance of the business process.
4. Ill-informed Decisions:
o Poor decisions, often due to inadequate information, can impair the effectiveness of the process.
5. Improvement through Informed Decision-Making:
o The process can be enhanced by making decisions based on well-informed analytics.
6. User Accessibility:
o Participants do not need to be tech-savvy or hold advanced technical skills to understand the information provided.
Implementation Considerations
For integrated analytics to be effective, certain conditions must be met:
Real-time Data Integration:
o Data from various sources (analytics and operational data) must be integrated in real-time to provide the
necessary insights.
Timely Delivery:
o Actionable knowledge must be delivered to the right person at the right time to facilitate effective
decision-making.
Seamless Presentation:
o The presentation of analytical results should align with everyday business operations and integrate
smoothly with commonly used productivity tools.
Event-Driven Notifications:
o Using alerts and notifications allows analytics to be embedded directly within operational processes,
enhancing responsiveness and actionability.
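A rough sketch of an event-driven analytic check embedded directly in an operational step follows; the order-handling function, the baseline value, and the alert rule are all invented for illustration.

```python
# Sketch of analytics embedded in an operational workflow: each
# incoming order is checked against an analytical baseline, and a
# notification fires only when action is needed. All names and
# figures here are invented for illustration.
ANALYTICAL_BASELINE = {"avg_order_value": 50.0}   # supplied by the BI layer

def handle_order(order: dict) -> None:
    # Ordinary operational work would happen here (fulfilment, etc.).
    value = order["value"]
    # Embedded analytic check: flag orders far above the baseline.
    if value > 3 * ANALYTICAL_BASELINE["avg_order_value"]:
        print(f"NOTIFY: order {order['id']} value {value} is unusually "
              f"high; review before fulfilment.")

handle_order({"id": 101, "value": 40.0})   # no notification
handle_order({"id": 102, "value": 200.0})  # triggers notification
```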
Benefits of Integrated Analytics
Reduced Training Needs:
o End-users can operate effectively without extensive training in BI tools.
Enhanced Decision-Making:
o Facilitates better-informed decisions, leading to improved business outcomes.
Increased Efficiency:
o Streamlines processes by embedding analytics directly into workflows.
Wider Adoption of BI Services:
o Lower barriers to deployment and integration can lead to broader acceptance and utilization of BI across
organizations.