Most organizations today operate multiple data stores to support their business operations, a natural consequence of how organizations and businesses evolve over time. As a result, data ends up spread across a multitude of often heterogeneous systems with limited interaction or interoperability between them. How, then, can an organization exploit this wealth of information when critical data lives in silos across different departments? A key aspect of effective data analytics is being able to access all of this data easily and efficiently. One option is to off-load data from these disparate systems into dedicated (Hadoop) data lakes. But the source systems cannot necessarily be decommissioned: they are still needed to support day-to-day operations. The end result is often a vast ecosystem of data stores holding data of different "temperatures", with some level of duplication and no effective way of bringing it all together for business analytics. A Logical Data Warehouse is a virtualization layer that provides access to all of this data from a single entry point: a user does not need to be concerned with where the data resides or what interface to use to reach it. This presentation will introduce you to the capabilities behind IBM Db2 Big SQL that support implementing a logical EDW running on Hadoop. We will cover MVP features, customer experiences, and best practices.