
SQL Server 2008 R2 SP2 Cumulative update #12

Dear Customers, The 12th cumulative update release for SQL Server 2008 R2 SP2 is now available for download at the Microsoft Support site. Cumulative Update 12 contains all the SQL Server 2008 R2 SP2 hotfixes which have been available since the initial...(read more)

Cumulative Update #1 for SQL Server 2014 RTM

Dear Customers, The 1st cumulative update release for SQL Server 2014 RTM is now available for download at the Microsoft Support site. Cumulative Update 1 includes all hotfixes which were released in SQL Server 2012 SP1 CU 6, 7, 8, and 9. To learn...(read more)

SQL Server at Our Customers – BI & Agility


Adapting to change has become a decisive competitive factor in an economic environment that is constantly evolving. Indeed, in the world of software development, the popularity of “Agile” methods keeps growing. These approaches share the same objectives: improve visibility within the project team and interactivity among team members, while increasing the flow of value delivered to the customer.

...(read more)

Progressive Insurance data performance grows by a factor of four, fueling business growth and online experience


At the Accelerate your Insights event last week, Quentin Clark described how SQL Server 2014 is now part of a platform with built-in in-memory technology across all data workloads. In particular, this release adds in-memory Online Transaction Processing (OLTP), delivering breakthrough performance for applications in both throughput and latency.

One of the early adopters of this technology is Progressive Insurance, a company that has long made customer service a competitive strength. Central to the customer service experience is the company’s policy-serving web app. As it updated the app, Progressive planned to add its Special Lines business, which insures motorcycles, recreational vehicles, boats, and even Segway electric scooters. However, Progressive needed to know that the additional workloads wouldn’t put a damper on the customer experience.

Progressive was interested in the In-Memory OLTP capability, which can host online transaction processing (OLTP) tables and databases in a server’s working memory. The company tested In-Memory OLTP even before SQL Server 2014 became commercially available. Modifying the policy-serving app for the test was relatively straightforward, according to Craig Lanford, IT Manager at Progressive.

The company modified eight natively compiled stored procedures, using already-documented code. In those tests, In-Memory OLTP boosted the processing rate from 5,000 transactions per second to 21,000—a 320 percent increase.
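To give a flavor of what such code looks like, here is a minimal, hypothetical T-SQL sketch of a memory-optimized session-state table and a natively compiled stored procedure in SQL Server 2014. The object names are illustrative only, not Progressive’s actual code, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup.

    -- Hypothetical memory-optimized table (illustrative names only).
    CREATE TABLE dbo.SessionState
    (
        SessionId   UNIQUEIDENTIFIER NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        UserData    VARBINARY(4000)  NOT NULL,
        LastTouched DATETIME2        NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    -- A natively compiled stored procedure that updates a session row.
    CREATE PROCEDURE dbo.UpdateSession
        @SessionId UNIQUEIDENTIFIER,
        @UserData  VARBINARY(4000),
        @Touched   DATETIME2
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        UPDATE dbo.SessionState
        SET UserData = @UserData, LastTouched = @Touched
        WHERE SessionId = @SessionId;
    END;
    GO

A call is then just EXEC dbo.UpdateSession with the three parameters; the procedure body runs as compiled machine code rather than interpreted T-SQL, which is where much of the latency gain comes from.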

Lanford and his colleagues were delighted that session-state database performance proved four times as fast with SQL Server 2014. “Our IT leadership team gave us the numbers we had to meet to support the increased database workload, and we far exceeded those numbers using Microsoft In-Memory OLTP,” Lanford added. The company will use the throughput gain to support the addition of its Special Lines business to its policy-serving app and session-state database. With SQL Server 2014, Progressive can run a single, larger database reliably and avoid the cost of multiple databases.

You can read more about how Progressive is using SQL Server 2014 here.

Whether you’ve already built a data culture in your organization or are new to exploring how you can turn insights into action, try the latest enhancements to these technologies: SQL Server 2014, Power BI for Office 365, Microsoft Azure HDInsight, and the Microsoft Analytics Platform System.

Introducing the Microsoft Analytics Platform System – the turnkey appliance for big data analytics


At the Accelerate your Insights event last week, Satya Nadella introduced the new Microsoft Analytics Platform System (APS) as Microsoft’s solution for delivering “Big Data in a box.” APS is an evolution of our SQL Server Parallel Data Warehouse (PDW) appliance: it builds on the high performance and scale capabilities of that MPP version of SQL Server and now adds a dedicated Hadoop region to the appliance alongside the SQL Server PDW capabilities. The Hadoop region within the appliance is based on the Hortonworks Data Platform for Windows, but adds the key capabilities enterprises expect from a Tier 1 appliance: high availability through the appliance design and Windows Server failover clustering, security through Active Directory, and a unified appliance management experience through System Center. Completing the APS package, and seamlessly unifying the data in SQL Server PDW with data in Hadoop, is PolyBase, a groundbreaking query technology developed by Dr. David DeWitt and his team in Microsoft’s Jim Gray Systems Lab.

Microsoft continues to work with industry-leading hardware partners Dell, HP, and Quanta to deliver APS as a turnkey appliance that also offers the best value in the industry for a data warehouse appliance.

Go to the APS product site to learn more, or watch the short product introduction video.

SQL Server 2014 and the DBA: Building Bridges


Guest blog post by SQL Server MVP Denise McInerney, Vice President of Marketing for PASS and a Data Engineer at Intuit. Denise began her career as a SQL Server DBA in 1998 and now applies her deep understanding of data to analytic solutions for business problems. She is the founder of the PASS Women in Technology virtual chapter, a speaker at user group meetings and conferences, and blogs at select*from denisemc.views. You can follow her on Twitter at @denisemc06.

*     *     *     *     *

The iconic Golden Gate Bridge was a great image for promoting last week’s live webcast celebrating the launch of SQL Server 2014. Of course, it represents San Francisco, where Microsoft CEO Satya Nadella, COO Kevin Turner, and Data Platform Group CVP Quentin Clark took the stage to highlight the new release’s features. But I think it’s also a metaphor for what the new capabilities foretell for SQL Server DBAs.

From the much-anticipated In-Memory OLTP engine (formerly code-named Hekaton) – with its promise of dramatically reducing I/O traffic jams and speeding application performance – to the new high-performance updateable columnstore index and enhanced scalability through improved Windows Server 2012 integration, much of the SQL Server 2014 message is about helping us build better, stronger, faster data-processing bridges.

But with the release’s integration of AlwaysOn Availability Groups with Windows Azure, smart SQL Server backup to a Windows Azure URL, and integration with the new Power BI for Office 365 cloud solution, we’re also talking about bridging our database capabilities – and technical skills – to the cloud and further into the data analytics world.

What does all this mean for us as SQL Server professionals? More than ever, our organizations need us to be an essential part of the team: bridging IT and business, better connecting data and the people who use it to make decisions, and adding value by building strong and flexible solutions custom-fit for our company’s needs.

As we prepare for this changing world of data, here are three areas to focus on:

  • Cloud and hybrid environments: On-premises SQL Server vs. the cloud isn’t the question anymore.  More and more, we’ll have data residing in both worlds. In Hybrid IT environments, we’ll play an important role in application architecture and design. We’ll also need to support on-premises and cloud-based performance tuning and monitoring, implement high availability and disaster recovery solutions, and more. The mission is still effective data management – wherever the data lives.
  • Relational and big data: As we gather and store increasingly more data and different types of data – including structured, unstructured, and streaming – we’ll be looking at another hybrid environment. This one will include SQL Server relational stores integrated with big data solutions such as Hadoop for storing and processing large data sets.
  • Data and business value: The purpose of collecting all this data is to use it to improve our products and services and better understand and serve customers. As data professionals, we need to be the champions of thinking end-to-end about how data can transform business – what Satya Nadella calls creating “a data culture.” It involves bringing business intelligence and analytics to everyone in our organizations, helping them understand the data they have, ask questions of it, and gain insights.

Change brings challenges but also the opportunity to learn and grow our careers. I encourage you to take advantage of the free resources available through PASS Virtual Chapters, your local user group, and PASS SQLSaturday events to learn how SQL Server 2014 and Microsoft’s data platform can help us get the most from our organizations’ data.

As SQL Server professionals, harnessing the power of data to solve business problems is really the heart of our job. We’re still guardians of data – but now we also need to be advocates for what data can do in our businesses.

Let’s go build some bridges.
– Denise

Pie in the Sky (April 25th, 2014)

$
0
0

It's supposed to be a nice weekend, so I may have to skip reading and spend some time outside. For those of you stuck inside (or just looking for something to read), here are some interesting links from this week.

Cloud

Client/mobile

Node.js

Misc.

Enjoy

- Larry

ICYMI: Data platform momentum

$
0
0

The last couple of months have seen the addition of several new products that extend Microsoft’s data platform offerings.

At the end of January, Quentin Clark outlined his vision for the complete data platform, exploring the various inputs that are driving new application patterns, new considerations for handling data of all shapes and sizes, and ultimately changing the way we can reveal business insights from data.

In February, we announced the general availability of Power BI for Office 365. You heard from Kamal Hathi about how this exciting release simplifies business intelligence and how, with features like Power BI sites and Power BI Q&A, Power BI helps anyone, not just experts, gain value from their data. You also heard from Quentin Clark about how Power BI helps make big data work for everyone by bringing together easy access to data, robust tools that everyone can use, and a complete data platform.

In March, we announced that SQL Server 2014 would be generally available beginning April 1, and shared how companies are already taking advantage of the in-memory capabilities and hybrid cloud scenarios that SQL Server enables. Shawn Bice explored the platform continuum, and how with this latest release developers can continue to use SQL Server on-premises while also dipping their toes into the possibilities of the cloud with Microsoft Azure. Additionally, Microsoft Azure HDInsight was made generally available with support for Hadoop 2.2, making it easy to deploy Hadoop in the cloud.

And earlier this month at the Accelerate your Insights event in San Francisco, CEO Satya Nadella discussed Microsoft’s drive towards a data culture. In addition, we announced two other key capabilities that extend the robustness of our data platform: the Analytics Platform System, an evolution of the Parallel Data Warehouse with the addition of a Hadoop region for your unstructured data, and a preview of the Microsoft Azure Intelligent Systems Service to help tap into the Internet of Your Things. In case you missed it, watch the keynotes on-demand, and don’t miss out on experiencing the Infinity Room, built to inspire you with the extraordinary things that can be found in your data.

On top of our own announcements, we’ve been recently honored to be recognized by Gartner as a Leader in the 2014 Magic Quadrants for Data Warehouse Database Management Systems and Business Intelligence and Analytics Platforms. And SQL Server 2014, in partnership with Hewlett Packard, set two world records for data warehousing performance and price/performance.

With these enhancements across the entire Microsoft data platform, there is no better time than now to dig in. Learn more about our data platform offerings. Brush up on your technical skills for free on the Microsoft Virtual Academy. Connect with other SQL Server experts through the PASS community. Hear from Microsoft’s engineering leaders about Microsoft’s approach to developing the latest offerings. Read about the architecture of data-intensive applications in the cloud computing world from Mark Souza, which one commenter noted was a “great example for the future of application design/architecture in the Cloud and proof that the toolbox of the future for Application and Database Developers/DBAs is going to be bigger than the On-Prem one of the past.” And finally, come chat in-person – we’ll be hanging out at the upcoming PASS Business Analytics and TechEd events and are eager to hear more about your data opportunities, challenges, and of course, successes.

What can your data do for you?


The Microsoft Infinity Room Photo Contest Has a Winner!


Congratulations to Edgar Rivera, whose Microsoft Infinity Room photo won the #InsightsAwait Photo Sweepstakes. “I never thought that stepping into some data visualization could be this cool,” Edgar tweeted. You can see his photo here.

Visitors to the Microsoft Infinity Room were invited to capture their experiences and tag their photos on Twitter or Instagram with the #InsightsAwait hashtag. You can view all of the contest entries here.

If you didn’t have a chance to visit the Infinity Room in San Francisco from April 15-17, take the 360-degree virtual tour and be inspired by the extraordinary found through data surrounding an ordinary object.

Also - want to learn more about Microsoft Big Data solutions? Hear CEO Satya Nadella discuss Microsoft’s drive towards a data culture during the Accelerate your insights event in San Francisco earlier this month. Watch the keynote on-demand now.

How to Use Open Type in OData


The OData protocol introduces the concept of an open type, which allows clients to add properties dynamically to instances of the type by specifying uniquely named values in the payload used to insert or update an instance. This makes the definition of an entity type or complex type more flexible: developers do not have to declare in the EDM model every property that might be used.

Server Side

1. Model definition

It is quite easy to define an open entity type or open complex type on the server side.

If a server uses a CSDL file to define the data model, you only need to add the attribute OpenType="true" to the entity type or complex type definition node. For example:

Open entity type:

Open complex type:

If a server uses an EdmModel to define the data model, the isOpen parameter should be set to true when constructing an entity type or complex type as open. For example:

Open entity type:

Open complex type:

According to the OData protocol, the default values of OpenType and isOpen are false.

2. URI Parsing

When querying a dynamic property (a property of an open type that is not declared in the EDM model is called a “dynamic property” because it is added by clients dynamically), the query URI can be something like "~/Categories(0)/DynamicPropertyName" or "~/AccountInfo/DynamicPropertyName". If the service calls ODataUriParser to parse the URI, the last segment of the URI will be an OpenPropertySegment.

Client Side

On the client side, developers can use the OData client code generator to generate OData client code. For more details, please refer to How to Use OData Client Code Generator. In the generated code, each class that represents an entity type or complex type is defined as a partial class, so developers can add properties to the class as dynamic properties. The added code looks much like the generated code for a declared property.

For example, declare a dynamic property named “Description” in Category, which is an open entity type. If interested, you can try this against the OData (read/write) sample service.

A dynamic property is declared in the same way for an open complex type.

The client then treats these properties like any others, so users can work with a dynamic property in the same way as a declared property.

Sample Code:

ODataLib 6.3.0 Release


We are happy to announce that ODataLib 6.3.0 has been released and is available on NuGet, along with the source code on CodePlex (please read the git history for the v6.3.0 code information and all previous versions). Detailed release notes are listed below.

Bug Fix

  • Fixed a bug when serializing the floating-point values NaN, INF, and -INF.

New Features

  • EdmLib & ODataLib now support model references

  ODataLib now supports referencing an external CSDL document using the edmx:Reference element, and specifying the schemas and annotations to include from the target document using the edmx:Include and edmx:IncludeAnnotations elements. The scope of a CSDL document is now the document itself plus all schemas included from directly referenced documents.

 

  • ODataLib now supports reading and writing delta responses

      ODataLib now lets a client request that the service track changes to a result by specifying the odata.track-changes preference on a request. If supported for the request, the service includes a Preference-Applied header in the response containing the odata.track-changes preference and includes a delta link on the last page of results. The client can then request changes by invoking the GET method on the delta link. For details, see section 11.3, Requesting Changes, of the OData Version 4.0 protocol.

  • EdmxReader supports ignoring unknown attributes/elements in the new TryParse API

The ODataLib parser can now emit a warning for unknown attributes/elements in the metadata document instead of throwing an exception.

 

  • ODataUriParser now supports parsing Enum and ComplexType in cast expressions

The ODataLib parser now supports implicitly casting a derived complex type to its base type, and underlying enum values to an enum type, in a filter clause.

 

  • ODataUriParser adds support for parsing a relative URI without a service root

ODataUriParser no longer requires a service root URI at construction. This is handy when you want to parse a query option with only the model and the relative URI.

 

Misc

  • ATOM is disabled by default in OData reader & writer

ODataLib now suppresses reading and writing of ATOM payloads by marking these methods obsolete, because the ATOM payload format is still only a committee-draft standard. We will support the ATOM format once it becomes an official OASIS standard. Using these methods will produce a build warning with this release.

 

Perf Improvement

 

  • Improved writer performance by caching the metadata document URI. In pure Action/Function scenarios the boost can be as high as 45%.

 

Change the Game with APS and PolyBase


 Guest blog post by: James Rowland-Jones (JRJ), SQL Server MVP, PASS Board Director, SQLBits organiser and owner of The Big Bang Data Company (@BigBangDataCo). James specializes in Microsoft Analytics Platform System and delivers scale-out solutions that are both simple and elegant in their design. He is passionate about the community, sitting on both the PASS Board of Directors and the SQLBits organising committee. He recently co-authored a book on Microsoft Big Data Solutions and also authored the APS training course for Microsoft. You can find him on LinkedIn (JRJ) and Twitter (@jrowlandjones).

*     *     *     *     *

On April 15, 2014, Microsoft announced the next evolution of its Modern Data Warehouse strategy, launching the Analytics Platform System (APS). APS is an important step for many reasons. However, to me, the most important of those reasons is that it helps businesses complete the jigsaw on business data. In this blog post I am going to define what I mean by business data and explain how PolyBase has evolved, providing the bridge between heterogeneous data sources. In short, we are going to put the “Poly” in PolyBase.

Business Data

Business data comes in a variety of forms and exists in a diverse set of data sources. Those forms are sometimes described using terms such as relational, non-relational, structured, semi-structured or even unstructured. However, whichever term you choose doesn’t really matter. What matters is that the business has generated the data and its employees (a.k.a. the users) need to be able to access it, integrate it and draw insights from it. This data is often disparate, spread liberally across the enterprise.

These users don’t see themselves as technical (although many are) and are often frustrated by the barriers created by having disparate data in a variety of forms. Having to write separate queries for different sources is difficult, time-consuming and raises many data quality challenges. I am sure you have seen this many times before. However, in the world of analytics the latency introduced by this kind of data integration is the real killer. By the time the data integration barrier has been solved, the value of the insight has diminished. Consequently, business users need frictionless access to all of the data, all of the time.

In the modern world, there is only data, questions and a desire for answers. To enhance adoption we also need *something* that delivers using simple, familiar tools leveraging commodity technology and offering both high performance and low latency.

That *something* is PolyBase – underpinned by APS.

PolyBase

What is PolyBase, how does it work, and why is it such an important, innovative technology?

Put simply - it’s the bridge to your business data.

Why is it important? It is unique, innovative technology and it is available today in APS.

PolyBase was created by the team at the Jim Gray Systems Lab, led by Dr David DeWitt. Dr DeWitt is a technical fellow at Microsoft (i.e. he is important) and he’s also been a PASS Summit keynote speaker for several years. If you’ve never seen any of his presentations then you should absolutely address that. They are all free to watch and are available now, including a great session on PolyBase.

As I mentioned a moment ago, PolyBase is a bridge but it’s not just any old bridge. It is a fully parallelised super-highway for data. It’s like having your own fibre-optic bridge when everyone else has a copper ADSL bridge. It offers fast, run-time integration across relational data stored in APS and non-relational data stored in both Hadoop and Microsoft Azure Storage Blobs. 

Notice I didn’t just say the new Hadoop Region in APS – I just said Hadoop. That’s because PolyBase is different. It is agnostic, not proprietary, in its approach and in its architecture. PolyBase integrates with Hadoop clusters that reside outside the appliance just as it does with the new Hadoop Region that exists inside the appliance. This agnostic approach is also evident in its Hadoop distribution support, covering Hortonworks (HDP) on both Windows and Linux, and Cloudera (CDH) on Linux.

To achieve this unparalleled level of agnosticism, PolyBase uses a well-established enterprise pattern, employing “external tables” to provide the metadata for the external data. However, PolyBase takes this concept further by decoupling the format and the data source from the definition of the external table.

This enables PolyBase to access data in a variety of sources and data formats, including RCFiles and Microsoft Azure Storage Blobs using wasb[s]. This is a key step. This process lays the foundation for other data sources to be plugged into the PolyBase architecture; putting the “Poly” in PolyBase.
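As a rough, hypothetical T-SQL sketch of that decoupling (the data source, file format, table, columns, and locations below are illustrative, not taken from a real appliance):

    -- External data source: a Hadoop cluster outside (or inside) the appliance.
    CREATE EXTERNAL DATA SOURCE HadoopCluster
    WITH (TYPE = HADOOP, LOCATION = 'hdfs://hadoop-head-node:8020');

    -- File format: defined once, reusable across many external tables.
    CREATE EXTERNAL FILE FORMAT PipeDelimitedText
    WITH (FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

    -- External table: metadata only, binding a location to a data source and a format.
    CREATE EXTERNAL TABLE dbo.ClickStream_External
    (
        EventTime DATETIME2     NOT NULL,
        UserId    INT           NOT NULL,
        Url       NVARCHAR(400) NOT NULL
    )
    WITH (LOCATION = '/data/clickstream/',
          DATA_SOURCE = HadoopCluster,
          FILE_FORMAT = PipeDelimitedText);

Because the data source and file format are separate objects, swapping the target (say, from an on-premises cluster to an Azure Storage Blob location) does not require redefining every external table.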

Building Bridges

PolyBase builds the bridges to where the data is. Once the bridge has been defined (a simple case of a few DDL commands), PolyBase enables users to simply write queries using T-SQL. These queries can be against data in APS, Hadoop and/or Azure all at the same time. How amazing is that? I call this dynamic hybrid query execution. You can do some really clever things using hybrid queries. For example, you can read data from Hadoop, transform and enrich it in APS and persist the data back in Hadoop or Azure. That’s called round-tripping the data and that is just a taster of what is possible with hybrid query support.
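Continuing the hypothetical sketch above, a hybrid query and a round trip back to Hadoop might look like this (all names are illustrative):

    -- One T-SQL statement spanning relational data in APS and external data in Hadoop.
    SELECT c.Region, COUNT(*) AS PageViews
    FROM dbo.Customers AS c                    -- table stored in APS (PDW)
    JOIN dbo.ClickStream_External AS e         -- external table over Hadoop data
        ON e.UserId = c.UserId
    GROUP BY c.Region;

    -- Round-trip: persist the enriched result back to Hadoop with CREATE EXTERNAL TABLE AS SELECT.
    CREATE EXTERNAL TABLE dbo.PageViewsByRegion
    WITH (LOCATION = '/data/pageviews_by_region/',
          DATA_SOURCE = HadoopCluster,
          FILE_FORMAT = PipeDelimitedText)
    AS
    SELECT c.Region, COUNT(*) AS PageViews
    FROM dbo.Customers AS c
    JOIN dbo.ClickStream_External AS e
        ON e.UserId = c.UserId
    GROUP BY c.Region;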

There is more.

PolyBase can also leverage the computational resources available at the data source. In other words, it can selectively issue MapReduce jobs against a Hadoop cluster. This is called split query execution. Like a true data surgeon, PolyBase is able to dissect a query into pushable and non-pushable expressions. The pushable ones are considered for submission as MapReduce jobs, and the non-pushable parts are processed by APS.

It gets better.

The decision to push an expression is made on cost by the APS distributed query engine: cost-based split query execution against APS, Hadoop and Azure. Fantastic.

To achieve this feat, PolyBase is able to hold detailed statistical information in the form of table- and column-level statistics. This level of knowledge about the data is lacking in Hadoop today. By having a mechanism for generating statistics, APS and PolyBase can selectively assess when it is appropriate to use MapReduce and when it would be more cost-effective to simply import the data.
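For example, a hedged sketch of creating column-level statistics on the hypothetical external table from earlier, so the engine has something to cost against:

    -- Column-level statistics on an external table help the distributed query
    -- optimizer decide between pushing work to MapReduce and importing the data.
    CREATE STATISTICS Stats_ClickStream_UserId
    ON dbo.ClickStream_External (UserId) WITH FULLSCAN;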

The results can be dramatic. Even with “small” data you can see huge data volume reduction through the MapReduce split query process and significant delegation of computation to low-cost Hadoop clusters, providing maximum efficiency and business value. Plus, if you are using the APS Hadoop Region, you can also draw comfort from the ultra-low-latency InfiniBand connection between the two regions, leading to unparalleled data transfer speeds. This offers a completely new paradigm to the world of Hadoop.

Simple, Familiar Tools

Did I mention that all this is possible with just T-SQL? Literally there is nothing to really “learn” in order to be able to write PolyBase queries. If you can write T-SQL then you can query any PolyBase-enabled data source.

That is really important.

Think about how many users know T-SQL. Having a technology that is SQL-based is massive for adoption. Many projects have failed in the adoption phase only to wither on the vine. Imagine how many of your users would be able to simply access all of their data, gaining new insights, using nothing but their existing T-SQL skills thanks to PolyBase.

PolyBase changes the game and is available now in APS.

Azure SQL Database: New Service Tiers Q&A


Earlier this month, we celebrated the launch of Microsoft SQL Server 2014, announced that the Analytics Platform System is generally available, and shared a preview of the Intelligent Systems Service. Quentin Clark summarized his keynote speech at the Accelerate Your Insights event in a blog post entitled, “The data platform for a new era.” If you haven’t read that post, I encourage you to take a few minutes to read it.

In a previous post, I described the modern data platform as having a “continuum of capabilities [that] enables developers to continue to use SQL Server on-premises, to easily virtualize and move database workloads into Azure, and to attach Azure services and build new cloud applications all from one data platform.” So, along with the news mentioned above, we are also continuing to evolve the Microsoft Azure SQL Database service. Just a few days ago, Eron Kelly shared the news that we are introducing new service tiers to Azure SQL Database. And, in a recent Channel 9 video, Scott Klein was joined by Tony Petrossian and Tobias Ternstrom (both work as program managers for SQL Database) to discuss the new service tiers.

With all this going on, we created a document with anticipated questions & answers to help people on the team address common questions about the new Azure SQL Database service tiers. The document was written as an internal brief, but frankly, I think everything here is just as useful for you. 

Enjoy.

Shawn Bice
Director of Program Management, Data Platform Group

 

What are the new service tiers?

In the Microsoft Azure business, we refer to customer options within a particular service as ‘service tiers.’ In the on-premises software business, we traditionally called these editions. Based on this, Microsoft Azure SQL Database will have 3 service tiers in preview: Basic, Standard and Premium. The new service tiers are:

  • Basic: Designed for applications with a light transactional workload and continuity needs. Performance objectives for Basic provide a predictable hourly transaction rate. The max database size in Basic is 2 GB.
  • Standard: Standard is the go-to option for getting started with cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per-minute transaction rates. The max database size in Standard is 250 GB.
  • Premium: Designed for mission-critical databases, Premium offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable per-second transaction rates. The max database size in Premium is 500 GB.

What can customers expect from each service tier?

  • Basic: Uptime SLA 99.95%*; database size limit 2 GB; restore to the latest restore point within 24 hours; disaster recovery (DR): restore to an alternate Azure region**; performance objectives: transaction rate per hour; preview cost $0.08/day (~$2.50/month); GA cost $0.16/day (~$4.99/month).
  • Standard: Uptime SLA 99.95%*; database size limit 250 GB; restore to any point within 7 days; disaster recovery (DR): Geo-Replication with a passive replica**; performance objectives: transaction rate per minute; preview cost S1 $0.65/day (~$20/month), S2 $3.23/day (~$100/month); GA cost S1 $1.29/day (~$40/month), S2 $6.45/day (~$200/month).
  • Premium: Uptime SLA 99.95%*; database size limit 500 GB; restore to any point within 35 days; disaster recovery (DR): Active Geo-Replication with up to 4 readable replicas; performance objectives: transaction rate per second; preview cost P1 $15.00/day (~$465/month), P2 $30.00/day (~$930/month), P3 $120.00/day (~$3,720/month); GA cost P1 $30.00/day (~$930/month), P2 $60.00/day (~$1,860/month), P3 $240.00/day (~$7,440/month).

*SLAs will take effect at the time of GA; Azure previews are subject to different service terms, as set forth in the preview supplemental terms.

**Not all disaster recovery features are available today; visit the disaster recovery documentation page to learn more.

What are performance levels?

The new service tiers introduce the concept of performance levels. There are six performance levels across Basic, Standard and Premium. The performance levels are Basic, S1, S2, P1, P2, and P3. Each performance level will deliver a set of resources required to run light-weight to heavy-weight database workloads. We’ll provide more details on performance levels in a follow-on blog post.

How does a customer provision a Basic, Standard, or Premium database?

Premium databases can be created on any server. Web and Business databases can also be upgraded to a Premium database on the database Scale tab. Premium databases are limited by a quota of 2 per server. If you need additional quota, contact customer support.

Initially, servers that have Web and Business databases will not support Basic and Standard databases.  To create a Basic or Standard database, you first create a new server that supports Basic, Standard and Premium tiers; then, you create the database with the tier and performance level needed. Once the Basic or Standard database has been created, you can freely upgrade or downgrade on the database Scale tab.

Initially, customers cannot upgrade a Web or Business database to Basic or Standard. However, customers can export a Web or Business database, and then import the resulting BACPAC file into a newly created Basic or Standard database using the database import PowerShell cmdlet. This limitation will be removed during the course of the previews, enabling customers to freely mix Web, Business, Basic, Standard and Premium databases on the same server, and enabling upgrade and downgrade between any editions.

How does a customer change the performance level of a Standard or Premium database?

You set the performance level using the database Scale tab in the Azure Management Portal or via APIs.
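For those who prefer scripting, a hedged T-SQL sketch of the same change follows. It assumes the ALTER DATABASE ... MODIFY (EDITION, SERVICE_OBJECTIVE) syntax that Azure SQL Database exposes for service tier changes; whether that syntax was available during the preview described here is not confirmed, and the database name is a placeholder.

    -- Hypothetical example: move a database to the Standard tier at performance level S2.
    -- Assumes Azure SQL Database ALTER DATABASE support for service objectives.
    ALTER DATABASE SalesDb
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');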

  

How long does it take to change the service tier or performance level of a database?

Changing the performance level of a database may require data movement in order to provide sufficient resources. This may happen when changing to or from Standard or Premium, or when changing the performance level of a Standard or Premium database. If this happens, it may take from a few minutes up to several hours, depending on the size of the database. The database will remain available to the customer, and operations will be transparent during the change. Of course, changing the service tier or performance level of a database immediately after creating it will be faster than upgrading a database after it is populated with data. For example, in some tests, an empty database took about 15 minutes to change, a 1 GB database took roughly 35 minutes, and a 10 GB database took between 3 and 4 hours. In general, downgrading the service tier or performance level within Standard or Premium will always be very quick. For more information on the latency when changing performance levels, see this topic.

Which service tier is used when a customer copies or restores a database?

Copying and restoring a database creates a new database in the same service tier as the original database. If copying a database via the portal (new) or using the T-SQL CREATE DATABASE … AS COPY OF statement, the new database will have the same performance level as the original. When restoring a database, it will have the service tier applied at the point in time from which the database was restored and the default performance level, which is S1 for the Standard tier and P1 for the Premium tier. Customers can choose to downgrade a database after copying or restoring, if its size permits, but will be charged for at least one day at the initial rate. Note that this is a change in behavior for Premium databases. Previously, because Premium database quota was limited, T-SQL copy and restore created a Suspended Premium database without reserved resources, which was charged at the same rate as a Business database. Suspended Premium databases are no longer supported. Existing Suspended Premium databases will be converted to Business edition as part of the April 24 release.
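For reference, the copy statement referred to above looks like this (server and database names are placeholders):

    -- Copy a database on the same logical server; the copy keeps the source's service tier.
    CREATE DATABASE SalesDb_Copy AS COPY OF SalesDb;

    -- Copy from another logical server by prefixing the source server name.
    CREATE DATABASE SalesDb_Copy2 AS COPY OF otherserver.SalesDb;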

How often can a customer change the edition or performance level of a database?

Changing the edition or performance level of a database should be done as a considered and deliberate action. Customers are allowed up to 4 changes in a 24-hour period that alter the service tier or performance level of a database. Be mindful that you are still billed based on the highest database day rate for that day, regardless of downgrades. Changes between Web and Business are excluded from this limit.

How does the billing approach within the new service tiers improve a customer’s bill?

With Basic, Standard and Premium, you are billed based on a predictable daily rate which you choose. Additionally, performance levels (e.g., Basic, S1, and P2) are broken out in the bill to make it easier to see the number of database days you incurred in a single month for each performance level.

What pricing (or cost) benefits are realized using the new service tiers?

Based on early conversations with customers, we have found these common scenarios where the new service tiers remove costly workarounds and streamline the overall experience:

Backups workaround via import/export

  • Scenario: Customer uses DB Copy & export to create database copies as backups which incurs additional database cost.
  • Solution: Restore removes the need for the customer to carry the extra DB cost, which can cut their database count by up to 50%, leaving headroom to dial up performance.

Disaster Recovery via Data Sync

  • Scenario: Customer uses Azure DataSync (in preview) to create geo-replicated databases which incurs additional database cost and doesn’t assure transactional consistency after failover.
  • Solution: Geo-Replication in Standard is built-in and will discount the passive, secondary database by 25% which can save money on the total bill and assures transactional consistency.

Larger databases for less money

  • Scenario: Today, customers pay $45 and $225 for 10 GB and 150 GB databases, respectively.
  • Solution: With Standard S1 costing $40 a month and Standard S2 costing $200 a month, customers gain access to 250 GB databases at flat rates of $40 and $200, with greater performance assurance and business continuity.

When does the billing rate change as a customer changes the service tier or performance level of a database?

All databases are charged on a daily basis based on the highest service tier and performance level that applied during the day. When changing service tier or performance levels, the new rate applies once the change has completed. For example, if you upgrade a database to Premium at 10:00 pm, and the upgrade completes at 1:00 am on the following day, you will only be charged the Premium rate on the day it completes. If you downgrade a database from Premium at 11:00 am, and it completes at 5:00 pm the same day, the database will be charged at the Premium rate throughout that day and will be charged at the downgraded rate beginning the following day.

What if a customer’s database is active for less than a day?

The minimum granularity of billing is one day. Customers are billed the flat rate for each day the database exists, regardless of usage or if the database is active for less than a day. For example, if you create a database and delete it five minutes later, the bill will reflect a charge for one database day for that database. If a database is deleted and then another one is created with the same name, the bill will reflect a charge for two separate databases on that day.

If the new service tiers are not priced based on the database size, why is Max Size still supported as a property?

While the new service tier prices are based on their performance level, the size of the database is still significant. Some customer scenarios are size-sensitive and require set size limits. For example, some CSVs may place size limits on their customers’ databases.

In addition, while each service tier has a maximum possible size (eg. Standard supports up to 250 GB), customers should be aware that for certain workloads, there will be a correlation between the size of the database and the throughput achieved at any given performance level. This will be noticed particularly with operations that act on the entire database, such as import, export, or copy. Customers should not assume that because a service tier allows a specific max size that their workloads will necessarily perform satisfactorily at that size. Customers should evaluate the effect of database size on the performance of a database and may need to upgrade to a higher performance level as the database grows before reaching size limits of a service tier.

What is the Service Level Agreement (SLA) for the Basic, Standard, and Premium databases?

Microsoft does not provide any SLA for SQL Database Basic, Standard, or Premium during preview. At the time of general availability (GA), Basic, Standard, and Premium will have a 99.95% SLA.

When will Basic, Standard and Premium become Generally Available (GA)?

Microsoft has not disclosed the General Availability date for Basic, Standard, and Premium service tiers. Customers in the previews will receive notice via email at least 30 days prior to GA pricing taking effect.

How will customers engage support for these new offers during the preview?

All customers participating in the preview will have access to an MSDN public forum. Furthermore, we are introducing a policy that Azure SQL Database public previews will receive GA-level CSS support. Customers with Microsoft Azure paid support and/or Premium Support hours can access Customer Support for questions and incidents relating to SQL Database Basic, Standard, or Premium databases.

Where can I learn more?

Official Azure blog

SQL Database pricing page

Choosing an Azure SQL Database Edition

Manage Azure SQL Database Editions

SQL Server at Our Customers – An EIM Solution for Dynamics CRM


Having usable, high-quality data is critical to a company’s business processes.

We will see how, thanks to the Enterprise Information Management (EIM) solution provided by the SQL Server components, it is possible to give the business access to high-quality customer data within Dynamics CRM.

...(read more)

Forrester Consulting study finds cost, business continuity benefits from cloud backup and disaster recovery


Maybe you have read about cloud database backup and disaster recovery (DR) but wanted to know more about the results achieved by real companies. Would you be surprised to find out that enterprises using the cloud for DR reported better success at meeting their service level agreements (SLAs)?  They do. And that businesses using the cloud for database backup achieved reduced storage costs and the ability to back up more frequently? They have. In December 2013, Microsoft commissioned Forrester Consulting to identify database backup and DR challenges for mid-to-large enterprises, and to find out how these companies are taking advantage of public cloud to tackle the challenges. You can read the full study here.

With SQL Server 2014, Microsoft introduced and enhanced a number of ways to provide better business continuity in the cloud, including easy backup to Microsoft Azure directly from SQL Server Management Studio (SSMS) and a free tool that enables backup to Azure for older versions of SQL Server. SQL Server 2014 also introduced cloud disaster recovery: the ability to deploy an asynchronous replica to Azure for fast failover. You can learn more about how customers are benefiting from these capabilities in these stories from Lufthansa Systems and Amway.
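As a hedged illustration of what backup to Azure looks like in T-SQL (the storage account, container, credential, and database names below are placeholders):

    -- One-time setup: a credential holding the storage account name and access key.
    CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',
         SECRET = '<storage account access key>';

    -- Back up the database directly to a blob in Azure Storage.
    BACKUP DATABASE SalesDb
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
    WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION;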

As a preview, here are a few highlights from Forrester’s in-depth survey with 209 database backup and operations professionals in North America, Europe, and Asia:[1]

  • Enterprises struggle with backup and DR for critical databases.  Storage management, security, and administration are among the top challenges.
  • Tier-2 backup requirements are growing. Fifty-six percent of respondents are backing up their Tier-2 applications on a daily basis – double what the ratio was just three years ago.
  • The top benefit of using cloud backup is saving money on storage costs.  Next most cited were the ability to back up more frequently, and saving money on administrative costs.
  • A majority of enterprises want to improve DR capabilities.  Some 79% answered that they agree or strongly agree with the need to improve disaster recovery capabilities in their database environment.  
  • Many plan to extend DR to the public cloud.  Forty-four percent of enterprises either are extending DR to the public cloud or plan to do so.  And ninety-four percent of enterprises that are doing DR to the cloud say it helps to lower costs and improve SLAs.

If you are interested in finding out about Microsoft SQL Server backup to the cloud and cloud DR capabilities, you can read more here.  And if you’re ready to dive in, you can get started with backing up SQL Server 2014 to the cloud using these easy steps.  Greater business continuity awaits in the cloud.



[1] Cloud Backup And Disaster Recovery Meets Next-Generation Database Demands, a commissioned study conducted by Forrester Consulting on behalf of Microsoft, March 2014


Pie in the Sky (May 2nd, 2014)


Working on some largish projects at work, so not a lot of time to accumulate links this week. Here's what I have, or am planning to read this weekend.

Cloud

Client/Mobile

Node.js

Misc.

Enjoy!

- Larry

Azure SQL Database: Service Tiers & Performance Q&A


A few days ago, I published a post with some anticipated questions & answers to provide details on the new service tiers for Microsoft Azure SQL Database, announced on April 24. In this follow-up post, I want to provide more information about how SQL Database performance is factored into the service tiers.

Like the previous post, this document was originally written to help people on the Microsoft team address common questions about the new service tiers, and the information is certainly relevant to you, as well.

Shawn Bice
Director of Program Management, Data Platform Group

 

How is SQL Database performance improving with the new service tiers?

Our customers have provided consistent feedback that they highly value predictable performance. To address this feedback, we previously introduced a Premium service tier to support database workloads with higher-end throughput needs. We’re continuing our commitment to predictable performance by introducing new service tiers at lower price points (Basic & Standard), which are primarily differentiated on performance. As you move up the performance levels, the available throughput increases. This service design offers customers the opportunity to dial up the right set of resources to get the throughput their database requires.

What changes are being made to Premium?

Starting April 24, Azure SQL Database Premium preview introduces a new 500 GB max size, another performance level (P3), new business continuity features (active geo-replication and self-service restore), and a streamlined provisioning and billing experience.

What new features are available in Premium?

Active Geo-Replication: Gain control over your disaster recovery process by creating up to four active, readable secondaries in any Azure region and choosing when to fail over. For more information on using Active Geo-Replication, see the Disaster Recovery documentation.

Self-service Restore: SQL Database Premium allows you to restore your database to any point in time within the last 35 days, in the case of a human or programmatic data deletion scenario. Replace import/export workarounds with self-service control over database restore. For more on using Self-service Restore, see Restore Service documentation.

Larger database size: Support for up to 500 GB databases is baked into the daily rate (no separate charge for DB size).

Additional Premium performance level: Meet high-end throughput needs with a new P3 performance level which delivers the greatest performance for your most demanding database workloads. Learn more about SQL Database Premium and pricing by visiting the SQL Database pricing page.

What are performance levels?

Azure SQL Database service tiers introduce the concept of performance levels. There are six performance levels across the Basic, Standard, and Premium service tiers. The performance levels are Basic, S1, S2, P1, P2, and P3. Each performance level delivers a set of resources required to run light-weight to heavy-weight database workloads.

How does a customer think about the performance power available across the different performance levels?

As part of providing a more predictable performance experience for customers, SQL Database is introducing the Database Throughput Unit (DTU). A DTU represents the power of the database engine as a blended measure of CPU, memory, and read and write rates. This measure helps a customer assess the relative power of the six SQL Database performance levels (Basic, S1, S2, P1, P2, and P3). 

Performance levels offer the following DTUs:

  • Basic: 1 DTU
  • Standard: S1: 5 DTU, S2: 25 DTU
  • Premium: P1: 100 DTU, P2: 200 DTU, P3: 800 DTU

 

How can a customer choose a performance level without hardware specs?

We understand the on-premises and VM world have made machine specs the go-to option for assessing potential power a system can provide to database workloads. However, this approach doesn’t translate in the platform-as-a-service world where abstracting the underlying hardware and OS is inherent to the value proposition and overall customer benefit.

Customers consistently tell us they assess performance needs for building cloud-designed applications by choosing a performance level, building the app, and then testing and tuning the app, as needed. The complicated task of assessing hardware specs is more critical in the on-premises world where the ability to scale up requires more careful consideration and calculation. With database-as-a-service, picking an option, then dialing up (or down) performance power is a simple task via an API or the Azure portal.

Review the performance guide on MSDN for more information.

How can a customer view the utilization of the resources in a performance level?

Customers can monitor the percentage of available CPU, memory, and read and write IO that is being consumed over time. Initially, customers will not see memory consumption, but this will be added to the views during the course of preview.

What do we mean by a transaction rate per hour, per minute, and per second?

Each of the performance levels is designed to deliver increasingly higher throughput. Summarizing the throughput of each service tier as a transaction rate per hour, per minute, or per second makes it easier for customers to quickly relate the capabilities of the service tier to the requirements of an application. Basic, for example, is designed for applications that measure activity in terms of transactions per hour. An example might be a single point-of-sale (POS) terminal in a bakery shop selling hundreds of items in an hour as the required throughput. Standard is designed for applications with throughput measured in terms of tens or hundreds of transactions per minute. Premium is designed for the most intense, mission-critical throughput, where support for many hundreds of concurrent transactions per second is required.

What if a customer needs to understand DTU power in more precise numbers, a language they understand?

For customers looking for a familiar reference point to assist in selecting an appropriate performance level, Microsoft is publishing OLTP benchmark numbers for each of the 6 performance levels (Basic, S1, S2, P1, P2, and P3). These published transaction rates are the output of an internal Microsoft cloud benchmark which mimics the database workload of a typical OLTP cloud application. As with all benchmarks, the published transaction rates are only a guide. Real-world databases are of different sizes and complexity, encounter different mixes of workloads, and will respond in different ways.  Nonetheless, the published transaction rates will help customers understand the relative throughput of each performance level. The published Microsoft benchmark transaction rates are as follows, and the methodology paper is here.

 

  • Basic: 3,467 transactions per hour
  • Standard: S1: 283 transactions per minute, S2: 1,470 transactions per minute
  • Premium: P1: 98 transactions per second, P2: 192 transactions per second, P3: 730 transactions per second

 

The car industry provides a great analogy when thinking about DTUs. While horsepower is used to measure the power of an engine, a sports car and a truck utilize this horsepower in very different ways to achieve different results. Likewise, databases will use DTU power to achieve different results, depending on the type of workload. The Microsoft benchmark numbers are based on a single defined OLTP workload (the sports car, for example).  Customers will need to assess this for their unique workload needs.

Defining database power via a published benchmark is a cloud analog of TPC-C. TPC-C is the traditional industry-standard approach for defining the highest power potential of a database workload. Customers familiar with traditional databases and database systems will immediately understand the value and caveats associated with benchmark numbers. We have found newer startups and developers to be less familiar with the benchmarking industry.  Instead, this group is more motivated to just build, test, and tune.

By offering customers both the benchmark-defined transaction rates and the ability to quickly build, try, and tune (scale up or down), we believe most customer performance assessment needs will be met.

Are the published transaction rates a throughput guarantee?

The Microsoft benchmark and associated transaction rates do not represent a transaction guarantee to customers. Notwithstanding the differences in customer workloads, customers cannot bank transactions for large bursts or roll transactions over seconds, minutes, hours, days, etc. The best way for customers to assess their actual performance needs is to view their actual resource usage in the Azure portal. Detailed views show usage over time as a percentage of the available CPU, memory, reads, and writes within their defined performance level.

Why doesn’t Microsoft just publish a benchmark on TPC, the industry-standard in database benchmarking?

Currently, TPC does not permit cloud providers to publish TPC benchmarks for database workloads. There is no other cloud vendor industry standard established at this time.

Will Microsoft publish the benchmark code for customers to run in their own environment?

Currently, there are no plans to publish the benchmark to customers. However, Microsoft will publish the methodology details (here) of how the defined OLTP workload was run to achieve the published benchmark numbers.

In-Memory OLTP Sample for SQL Server 2014 RTM


SQL Server 2014 introduces the new In-Memory OLTP feature to boost performance of OLTP workloads. In an earlier blog post, Quentin Clark described how In-Memory OLTP has helped customers achieve performance gains up to 30X.

To help you get started with the new In-Memory OLTP feature in SQL Server 2014, we created a sample around sales order processing based on the AdventureWorks sample database. This is an update of the sample we published back in November, following the CTP2 release.

Installation instructions and documentation can be found on MSDN:

http://msdn.microsoft.com/en-us/library/dn511655(v=sql.120).aspx

The sample script can be downloaded from Codeplex:

https://msftdbprodsamples.codeplex.com/releases/view/114491

We encourage you to download and install the sample to become familiar with the new memory-optimized tables and natively compiled stored procedures, introduced by the In-Memory OLTP feature in SQL Server 2014.

The documentation also contains instructions for running a demo workload that can be used to measure the performance of the sample on your system, and to contrast the performance of the new memory-optimized tables with traditional disk-based tables.
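As a hedged, generic illustration of the kind of contrast the demo workload draws (these are not the AdventureWorks sample objects, and the database is assumed to have a MEMORY_OPTIMIZED_DATA filegroup):

    -- A traditional disk-based table.
    CREATE TABLE dbo.SalesOrder_Disk
    (
        OrderId   INT IDENTITY NOT NULL PRIMARY KEY,
        Quantity  INT NOT NULL,
        OrderDate DATETIME2 NOT NULL
    );

    -- An equivalent memory-optimized table.
    CREATE TABLE dbo.SalesOrder_InMemory
    (
        OrderId   INT IDENTITY NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Quantity  INT NOT NULL,
        OrderDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);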

You can post feedback and questions about the sample on the SQL Server Samples Forum.

Introducing the AzureCAT PPI Theater at PASS BA


The AzureCAT (Customer Advisory Team) is returning to the world of PASS and joining all of you data-lovin’ folks at the PASS BA conference this week in sunny San Jose!  For those of you who aren’t familiar with AzureCAT, we are a Microsoft organization in the Cloud and Enterprise division that spends 100% of its time engaging with customers to make the most complex scenarios in the Azure and SQL space work like a charm.

This week at PASS BA, you’ll see us hanging out at the Microsoft booth and attending some of the great sessions, and you’ll also find us at our own CAT PPI theater on the tradeshow floor.

Below are some bios of the AzureCATs who will be there, along with our planned talks and schedules.  Those of you who know AzureCATs know that’s the least of what we’ll cover.  We’re hanging around for your questions and impromptu sessions as interest arises.

Come on by our PPI Theater and say hi.

AzureCATs at PASS BAC

 Olivier Matrat

Hi, I’m Olivier and I am a data professional with more than 18 years of experience in technical, customer-facing, and management capacities at organizations of all sizes; I’m talking start-ups to multinationals. I lead a team of AzureCAT experts helping customers, partners, and the broader community be successful in their Big Data analytics projects on the Azure platform. I’m a Founding Partner member of the PASS Board of Directors, so I have PASS in my blood.  I’m also French and incidentally own the best French bakery in Redmond.  If you aren’t interested in analytics, ask me how to make a great croissant! Looking forward to talking with all of you about social sentiment analytics in my “Tapping the tweets – Social sentiment analytics at Internet scale in Azure” talk.


 Murshed Zaman

Hello!  I’m Murshed, a Senior Program Manager in AzureCAT.  I spend my time helping customers working with SQL Server Parallel Data Warehouse, ColumnStore, Hadoop, Hive and IaaS. Over the last 12 years, I’ve specialized in telecommunications, retail, web analytics and supply chain management and for over 7 years I’ve worked with Massively Parallel Processing (MPP). Right now my main areas of focus are in design, architecture and Distributed-SQL plans. This year at PASS BA I’ll be sharing my thoughts on Big Data and Big Compute in “Connecting the Dots – Risk Simulation with Big Data, Big Compute and PDW”.  Looking forward to meeting you there! 


 

 Chuck Heinzelman

I’m Chuck.  I am a Senior Program Manager with the Microsoft Azure Customer Advisory Team, and I have been a member of the PASS community since 2000.  My primary focus is on cloud-based analytics, and I’ve also dabbled in matters related to hardware, OS configuration, and even application development.  Like a certain snowman from a recent hit animated movie, I’ve been known to like warm hugs, as well as non-fat white chocolate mochas.  Feel free to bring one or both to my “Cloud Applications Without Telemetry?  Surely You Can’t Be Serious!” or “BI in Windows Azure Virtual Machines: From Creation to User Access” talks.


 John Sirmon 

Hi, I’m John Sirmon.  I’m a Senior Program Manager on the AzureCAT team. I’ve been working with SQL Server for over 10 years and I’m loving the BI space.  In my 9-5 life I specialize in Analysis Services performance tuning, Reporting Services, SharePoint integration, troubleshooting Kerberos Authentication and PowerPivot for SharePoint.  In my spare time I am the lead singer/guitarist of a local Rock Band in Charlotte, NC. 


 Chantel Morin

I am a member of the Microsoft Azure Customer Advisory Team (AzureCAT). For the past 4 years I’ve been the assistant to Mark Souza, our General Manager. In the last year I’ve shifted my focus more towards my passion for community and events. I’m also ramping up to assist with customer onboarding into Azure TAP programs. I have the best team and manager in all the land and when I’m not enjoying work for pay I like to travel to music festivals, ride ATVs and spend time with my two pitbulls, Max and Tucker.   You can find me at the Microsoft Information Desk during the event.


Sessions at the CAT PPI Theater

Connecting the Dots – Risk Simulation with Big Data, Big Compute and PDW 
Thursday May 8th at 12:20pm, Friday May 9th at 9:20am

Microsoft Azure offers you a platform for migrating your big compute and big data needs to the cloud, while Parallel Data Warehouse (PDW) can be used on-premises as a query engine for data that you store both on-premises and in Microsoft Azure Storage.  Using Microsoft Azure HPC clusters, HDInsight clusters, and PDW, we’ll discuss risk simulations and data aggregations that include hybrid on-premises/cloud scenarios, and demonstrate these technologies over data generated during the session.

Tapping the tweets – Social sentiment analytics at Internet scale in Azure
Thursday May 8th at 12:50pm, Friday May 9th at 12:50pm

Twitter and other social media channels have become an integral part of many organizations’ marketing strategies. Microsoft Azure provides a ubiquitous platform to acquire, monitor, process, store, and analyze those all-important brand loyalty and CSAT signals. Using a mix of first- and third-party tools as well as open source solutions, we will illustrate how to infer actionable insights from the ambient social noise at scale.

Cloud Applications Without Telemetry?  Surely You Can’t Be Serious!
Thursday May 8th at 9:20am, Friday May 9th at 12:20pm

Analytics isn’t limited to line-of-business data. Your applications can (and should) generate high-quality data that can be used to answer questions like:

  • Am I meeting my application SLAs?
  • What is my general customer experience like?
  • Do I need to scale up or down based on demand?

In the traditional on-premises world, you probably didn’t spend a lot of time thinking about application monitoring and telemetry, because you were in full control of the entire environment.  If things weren’t right, whether from a connectivity or a performance perspective, you could easily look at the systems to see what was going on.

Fast-forward to the cloud-based world.  You are now running on servers you don’t control, using services that are shared with other consumers, and you don’t necessarily have access to all of the data you are used to having.  That is why you need to add telemetry to your applications and services.

The AzureCAT has published a framework for gathering telemetry that is based on many customer engagements.  We’ll spend time talking about what data to gather, how to gather it, and how to consume it once you have it.
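
As a purely illustrative sketch (not the AzureCAT framework itself, and with hypothetical table and column names), application telemetry might be landed in a table and queried for SLA-style metrics:

-- Hypothetical landing table for request-level telemetry events.
CREATE TABLE dbo.ApiTelemetry
(
    EventID       BIGINT IDENTITY(1,1) PRIMARY KEY,
    EventTimeUtc  DATETIME2     NOT NULL,
    OperationName NVARCHAR(128) NOT NULL,
    DurationMs    INT           NOT NULL,
    Succeeded     BIT           NOT NULL
);
GO

-- Consuming it: per-operation call volume, average latency, and success rate over the last hour.
SELECT OperationName,
       COUNT(*)                               AS Calls,
       AVG(CAST(DurationMs AS FLOAT))         AS AvgDurationMs,
       AVG(CAST(Succeeded  AS FLOAT)) * 100.0 AS SuccessPct
FROM dbo.ApiTelemetry
WHERE EventTimeUtc >= DATEADD(HOUR, -1, SYSUTCDATETIME())
GROUP BY OperationName;

Whatever the exact shape, the idea is the same: emit the events from the application, store them somewhere queryable, and build your SLA and customer-experience questions on top of them.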

BI in Windows Azure Virtual Machines: From Creation to User Access (room 230A)
Conference Breakout: Thursday May 8th at 4pm

Running BI workloads in Windows Azure Virtual Machines can present a whole new world of challenges. While the tools are largely the same between the IaaS and on-premises implementations, your solutions for authentication and authorization could be significantly different in the cloud. 

We’ll start out by talking about how to use the standard gallery images to run BI workloads in IaaS, and then discuss building custom scaled-out BI infrastructures in Azure Virtual Machines. From there, we will dive into the different authentication and authorization options you might want to take advantage of – options that will work both in the cloud and on-premises, but are especially useful in a cloud-based environment.

And potentially, as a special treat …

Details to come, but there’s a good chance you’ll see John Sirmon from the AzureCAT team at the theater.  This man LITERALLY wrestles alligators as well as analytics.  (No alligators will be harmed in the making of this PASS BA talk.)

Microsoft adds forecasting capabilities to Power BI for O365

The PASS Business Analytics Conference – the event where big data meets business analytics – kicked off today in San Jose. Microsoft Technical Fellow Amir Netz and Microsoft Partner Director Kamal Hathi delivered the opening keynote, where they highlighted our customer momentum, showcased business analytics capabilities (including a new feature update to Power BI for Office 365), and spoke more broadly about what it takes to build a data culture.

To realize the greatest value from their data, businesses need familiar tools that empower all their employees to make decisions informed by data. By delivering powerful analytics capabilities in Excel and deploying business intelligence solutions in the cloud through Office 365, we are reducing the barriers for companies to analyze, share, and gain insight from data. Our customers have been responding to this approach through rapid adoption of our business analytics solutions: millions of users are utilizing our BI capabilities in Excel, and thousands of companies have activated Power BI for Office 365 tenants.

One example of how our customers are using our business analytics tools is MediaCom, a global advertising agency that is using our technology to optimize performance and “spend” across their media campaigns, utilizing data from third-party vendors. With Power BI for Office 365, the company now has a unified dashboard for real-time data analysis, can share reports, and can ask natural-language questions that instantly return answers in the form of charts and graphs. MediaCom now anticipates completing analyses in days rather than weeks, with productivity gains that can add millions of dollars in value per campaign.

One of the reasons we’re experiencing strong customer adoption is our increased pace of delivery and regular service updates. Earlier this week we released updates for the Power Query add-in for Excel, and today we are announcing the availability of forecasting capabilities in Power BI for Office 365. With forecasting, users can project their data series forward in interactive charts and reports. With these new Power BI capabilities, users can explore the forecasted results, adjust for seasonality and outliers, view result ranges at different confidence levels, and hindcast to view how the model would have predicted recent results.

In the keynote we also discussed how we will continue to innovate to enable better user experiences through touch-optimized capabilities for data exploration. We are also working with customers to make their existing on-premises investments “cloud-ready,” including the ability to run SQL Server Reporting Services and SQL Server Analysis Services reports and cubes in the cloud against on-premises data. For cross-platform mobile access across all devices, we will add new features to make HTML5 the default experience for Power View.

To learn more about the new forecasting capabilities in Power BI for O365, go here. If you’re attending the PASS Business Analytics Conference this week, be sure to stop by the Microsoft booth to see our impressive Power BI demos and attend some of the exciting sessions we’re presenting at the event. 
