XMLA
XMLA endpoints provide access to the Analysis Services engine in the Power BI
service. As a result, the same enterprise BI tools that connect to Analysis Services for
application lifecycle management, governance, complex semantic modeling,
debugging, and monitoring can also be used with Power BI.
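For example, tools such as SSMS or Tabular Editor connect to a Premium workspace through its workspace connection URL, which you can copy from the workspace settings in the Power BI service. The URL follows this format (the workspace name here is hypothetical):

```
powerbi://api.powerbi.com/v1.0/myorg/Sales Workspace
```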
The XMLA endpoint for datasets in a Premium capacity provides the following benefits:
- Refresh operations, including those for incremental refresh, are not limited to 48
refreshes per day.
- Power BI Premium datasets have greater parity with Azure Analysis Services and
SQL Server Analysis Services, so enterprise-grade tabular modeling tools and
processes can be used with them.
For datasets with an incremental refresh policy applied, avoid publishing and
replacing them with a new version of the PBIX file from Power BI Desktop, because
doing so forces a full refresh of all the historical data, which can take hours and
result in downtime for users.
Instead, it's better to perform a metadata-only deployment using ALM Toolkit,
an open-source schema compare tool for Power BI datasets.
This allows deployment of new objects without losing the historical data. For example,
if you have added a few measures, you can deploy only the new measures without
needing to refresh the data, saving a lot of time.
1. Select the running Power BI Desktop instance as the source, and the existing
dataset in the service as the target:
For both Source and Target, you have several options for connecting to your datasets:
2. To get only the objects with differences, go to Home > Select Actions > Hide Skip
Objects with Same Definition:
In this case, the difference is that the “Test Measure” is missing in the target:
You can choose Create to add it to the target, or Skip to leave it out.
3. Finally, click “Validate Selection” to ensure the integrity of the target model, and
then click “Update”:
Refresh Power BI Premium datasets with SQL Server Management Studio
(SSMS)
With XMLA endpoint read-write enabled, you can use SSMS to:
- Refresh a specific historical partition without having to refresh all historical data.
- Load historical data for very large datasets by incrementally adding and refreshing
historical partitions in parts.
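As a sketch, a single historical partition can be refreshed from an XMLA query window in SSMS with a TMSL refresh command like the following (the database, table, and partition names are hypothetical):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "SalesDataset",
        "table": "FactSales",
        "partition": "FactSales-2019"
      }
    ]
  }
}
```

Running one such command per historical partition lets you load a very large dataset in parts, instead of relying on a single long refresh operation over all historical data.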