“The demo was perfect. But after nine months, the system started crashing—every morning at 8:47.”
Every other customer of such a system could tell the same story.
There’s a pattern in corporate analytics: the dashboard shown during the initial presentation and the dashboard running in production are almost two different products. The first, as is to be expected, is a polished showcase built around test data.
The second has to cope with real-world data volumes, hundreds of concurrent requests, and frequent changes in business requirements. Only the second one shows whether the developer has actually succeeded.
Let’s talk about the difference in mindset when building a system and how to find a Power BI developer for hire with proven enterprise dashboard delivery experience so you won’t regret it later.
Two Different Developers, Two Different Approaches
If you asked ten random companies to describe their ideal Power BI developer, nine of them would list nearly identical qualifications: knowledge of DAX, the ability to create compelling visualizations, and experience connecting to SQL databases.
There’s no doubt they’re right. However, this is a description of a specialist capable of preparing and conducting a demo.
But there’s a huge gap between the report creator (let’s call them that) and the corporate data architect. In fact, this architectural mindset is exactly what separates a strong Power BI developer from one who just builds reports. The former thinks in terms of system layers. The latter thinks in terms of sleek dashboards.
A report creator can import data from Excel, apply transformations in Power Query, write DAX formulas, and publish the file. You can be sure that during the presentation, if they don’t exactly bring the house down, they’ll at least earn a friendly pat on the back. Meanwhile, the result remains a monolith: data connections, business logic, and visualizations all merged into a single .pbix file. And when the data volume triples in a year and the company wants to add a new department to the report, everything falls apart.
How does a data architect think? They break the project down into its components. The semantic model stands on its own. The presentation layer is a separate artifact. Data sources come through certified datasets, not raw files. Dozens of different dashboards can draw from a single shared model, which is refreshed centrally and always provides consistent figures.
The difference in thinking is obvious, isn’t it?
The problem is that both candidates’ resumes look the same.
Technical Debt, Invisible During the Demo
Technical debt in Power BI is a constant companion. It hides behind the attractive charts and surfaces only when the shortcomings reach a critical point. Analysis of Microsoft telemetry shows that 68% of performance issues in enterprise environments stem from the same limited set of errors: often just common Power BI developer shortcuts that cause performance problems months after delivery.
The “Flat” Table
Someone who has worked with Excel for a long time thinks in terms of rows: one row equals one record. This is convenient and easy to visualize. Trouble starts when a BI model developer carries that logic over and dumps everything into one big table: customer names, transaction data, SKUs, dates, sales regions, and much more.
The problem is that Power BI’s analytics engine—VertiPaq—works in a fundamentally different way. It compresses data by column, and the fewer unique values in a column, the better. When a customer’s name appears multiple times across a million transaction rows, the compression breaks down. The model bloats in memory, and reports start to slow down.
That is why architects use the classic star schema: narrow fact tables with numeric values and keys, and wide dimension tables with text descriptions. No duplication, maximum compression, and instant filtering even on datasets containing billions of records.
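To make the contrast concrete, here is a minimal sketch of a measure over such a split, using hypothetical table and column names (Sales, Customer, NetPrice, and so on), not a definitive design:

```dax
-- Fact table 'Sales' (narrow: keys and numbers only):
--   Sales[DateKey], Sales[CustomerKey], Sales[ProductKey],
--   Sales[Quantity], Sales[NetPrice]
-- Dimension table 'Customer' (wide: one row per customer, text attributes):
--   Customer[CustomerKey], Customer[Name], Customer[Segment], Customer[Region]

-- The customer's name is stored once in the dimension; the fact table
-- carries only an integer key, which VertiPaq compresses extremely well.
Enterprise Revenue :=
CALCULATE (
    SUMX ( Sales, Sales[Quantity] * Sales[NetPrice] ),
    Customer[Segment] = "Enterprise"
)
```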
Calculated Columns Instead of Measures
Let’s look at a typical example. The developer’s goal is to calculate revenue. They take the “Price” column, multiply it by the “Quantity” column, and store the result in a new column on a fact table containing 50 million rows. At first glance, everything works.
But such a column lives permanently in RAM: it takes up gigabytes of space and is recomputed with every refresh. And it offers no advantage over a simple measure that would calculate the same result “on the fly” as the user interacts with the report.
Measures are the right choice for any aggregation. They take up no space in the model at rest and consume CPU only at query time. Calculated columns make sense only for static categorical attributes: values computed once at refresh that never respond to user filters.
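As an illustration, here is roughly how the two options look side by side, assuming a hypothetical Sales table:

```dax
-- Anti-pattern: a calculated column, materialized for all 50 million
-- rows, held in RAM, and recomputed at every refresh.
Revenue = Sales[Price] * Sales[Quantity]

-- Better: a measure that computes the same result at query time
-- and takes up no space in the model at rest.
Total Revenue := SUMX ( Sales, Sales[Price] * Sales[Quantity] )
```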
Bidirectional Filtering
There’s a classic scenario in a junior developer’s career: they encounter a complex query, the business logic isn’t filtering correctly, and they find a “solution”—they simply switch the relationship between the tables to “both directions.” The report immediately starts showing the correct numbers. The problem seems to be solved.
But this works only until the model grows to include a couple of dozen tables.
Two-way filtering forces the engine to calculate, for every query, which paths to use to propagate filters through the table network. Multiple possible paths = ambiguity = enormous CPU overhead. Queries that used to take milliseconds start taking seconds. And then—even longer.
The architect’s rule is quite different: keep all relationships unidirectional, and enable bidirectional filtering only in narrow scenarios, exclusively via the CROSSFILTER function inside an individual measure.
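A sketch of that rule in practice, again with hypothetical names: the relationship between Sales and Customer stays single-direction in the model, and only this one measure turns it around.

```dax
-- Bidirectional filtering scoped to a single calculation instead of
-- being baked into the model's relationship.
Customers With Sales :=
CALCULATE (
    DISTINCTCOUNT ( Customer[CustomerKey] ),
    CROSSFILTER ( Sales[CustomerKey], Customer[CustomerKey], BOTH )
)
```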
When There’s a Lot of Data
The company’s product is scaling, and the transaction database already contains 600 million rows. The report creator has two options: load everything into memory (Import) or query the database directly (DirectQuery). Both approaches have critical limitations:
- An import of this size could take up to 8 hours and use up all the allocated memory.
- DirectQuery shifts the burden to the database—and if the database lacks the right indexes, every user interaction generates a complex SQL query that takes minutes to execute.
The architect chooses a third approach: Composite Models with Aggregations. The idea is elegant: pre-aggregated totals (such as sales by region and day) are loaded into memory and answer 90% of queries instantly.
At the same time, individual transactions remain in the database and are retrieved via DirectQuery only when the user actually drills into a specific receipt.
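What makes this pattern attractive is that report measures stay unchanged; the mapping between the detail table and the aggregation table is configured in the model. A sketch under assumed names (a DirectQuery Sales table, an imported Sales Agg table at region-by-day grain):

```dax
-- The measure is written against the detail table as usual.
Total Sales := SUM ( Sales[SalesAmount] )

-- With the aggregation mapping in place ("Manage aggregations"),
-- a visual asking for sales by region and day is answered from the
-- in-memory 'Sales Agg' table; a drill-down to a single receipt
-- misses the aggregation and falls through to DirectQuery.
```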
How to Vet a Developer Before They Cause Harm
A technical interview with a candidate is important. You can and should ask about the difference between SUM and SUMX, ask them to explain context transitions in CALCULATE, or ask what Type 2 Slowly Changing Dimensions (SCD) are and where they are best implemented.
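If it helps to anchor those questions, here is the kind of answer you are listening for, sketched with hypothetical Sales and Customer tables:

```dax
-- SUM aggregates a single column; SUMX iterates a table row by row,
-- evaluating an expression in row context.
Total Quantity := SUM ( Sales[Quantity] )
Total Revenue  := SUMX ( Sales, Sales[Quantity] * Sales[Price] )

-- Context transition: referencing a measure inside an iterator wraps
-- it in an implicit CALCULATE, turning the current Customer row into
-- a filter before [Total Revenue] is evaluated.
Avg Revenue per Customer := AVERAGEX ( Customer, [Total Revenue] )
```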
But there are also simpler, more informal cues. Describe a situation and voice the problem: “The report is slow.”
Green flag: the developer immediately suggests opening DAX Studio to look at the model’s memory profile.
Red flag: the developer asks whether it’s possible to add more RAM to the server.
Another clue is the first question asked at the start of the project.
A novice developer asks, “What kinds of visuals do you need?”
A strong one asks, “What management decisions do you need to make based on this data?”
What Remains After the Project Is Completed
A sign of a developer’s maturity isn’t just what they build, but what they leave behind.
If, upon contract completion, you receive a set of files full of objects named Table1, Measure_Final_V3, and Shape 1, and nothing else, you’re dealing with someone who solved their own problem and moved on. Reverse-engineering such a legacy takes months of work.
A seasoned professional delivers a project as a documented infrastructure:
- The semantic model has been validated using the Best Practice Analyzer in Tabular Editor.
- All deviations from the standards are documented along with an explanation of the decision.
- Every table and every measure has a description in the data dictionary.
- There are SQL queries confirming that the figures in Power BI match those in the source accounting system (see the reconciliation sketch after this list).
- The code is integrated into a Git repository with rollback capabilities.
- There are training materials for end users.
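As one illustration of the reconciliation point above, a validation query of this kind might be run in DAX Studio and compared against the same aggregate in the source system (all names here are hypothetical):

```dax
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Model Total", [Total Revenue]
)
-- ...checked against the equivalent query in the source, e.g.:
-- SELECT YEAR(OrderDate), SUM(Quantity * Price)
-- FROM dbo.Sales
-- GROUP BY YEAR(OrderDate);
```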
Where to Find Such Architects
The shortage of skilled data architects is a global problem. In this situation, companies are increasingly opting to partner with teams that can handle the entire architecture and implementation on a turnkey basis. It is a collaboration model similar to an R&D center: an external team takes on the construction of the entire analytics system, from architectural design to implementation and support.
This means that the business gets a team capable of:
- Designing the data model
- Building the infrastructure
- Integrating all sources
- Bringing the system to the point where it can drive decision-making
For example, Cobit Solutions operates precisely according to this approach, acting as a Power BI developer for hire trusted by large organizations for long-term engagements. The company specializes in building BI systems, data warehouses, and data integration, delivering full-cycle projects.
In Conclusion
A pretty dashboard in a demo doesn’t mean anything. The real test begins six to twelve months after release: when there is more data and more users, and business requirements have already changed three times. That is when every line of DAX code, every step in Power Query, and every architectural decision made in the first weeks of development is put to the test.
Companies that understand this difference and prioritize hiring a Power BI developer who understands data modeling, not just visualization, gain a reliable tool for making decisions worth millions.
Everyone else gets another dashboard that freezes every morning at 8:47.

