Data insights from publicly available, localized COVID-19 data mashed up with internal data are key to effectively managing business operations during the COVID-19 pandemic.
Organizations have always had data to assist them in managing their operations and in forecasting sales and revenues. In today’s environment, external COVID-19 data combined with internal data has become a necessary tool for organizational leaders in understanding and addressing three key areas:
Business impact – Scale of impact
Response – Operations continuity
Path forward – Agile operations
Organizations, especially smaller ones, have a better chance of surviving and thriving in this economy if they have access to real-time quantitative analytics.
Even with the availability of public data on COVID-19, I find organizations struggling to build simple dashboards for quick data insights. Recently, I started helping my clients develop simple assimilated-data dashboards that overlay COVID-19 data on their internal data. They were able to quickly view their impacted market segments using relevant what-if scenarios and trends. The turnaround time was short and the results impactful; customers who have taken this approach to quick analytics are seeing great results.
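To make this concrete, here is a minimal sketch of the kind of mashup described above, using pandas. The public source is the New York Times county-level COVID-19 feed; the internal sales file, its columns (county, state, date, revenue), and its name are hypothetical stand-ins for a client’s own extract.

```python
import pandas as pd

COVID_URL = "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv"

# Public, localized COVID-19 data: one row per county per day.
covid = pd.read_csv(COVID_URL, parse_dates=["date"])

# Internal data: hypothetical sales extract from the client's systems.
sales = pd.read_csv("internal_sales.csv", parse_dates=["date"])

# Overlay the external data on the internal data by county and date.
merged = sales.merge(covid, on=["county", "state", "date"], how="left")

# A simple what-if style view: weekly revenue next to new cases per county.
merged["new_cases"] = merged.groupby(["county", "state"])["cases"].diff()
weekly = (
    merged.set_index("date")
    .groupby(["county", "state"])[["revenue", "new_cases"]]
    .resample("W")
    .sum()
)
print(weekly.head())
```

A dashboard tool can then chart `weekly` directly, which is what makes the turnaround on these views so quick.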
In recent years, the term data science has become more popular due to the influx of data in all businesses. Data science is about getting valuable insights and answering questions by analyzing data using statistical methods, computing power, and automation. When a business is looking to answer a data-driven question, it must follow a set of predefined steps, known as the data science process, and know what those steps involve.
The data science process involves more than one role. The roles within the process include business analysts, data engineers, data scientists, and developers. Even though there can be some overlap, each of these roles is important and plays a vital part in the process. The business analyst provides the business understanding to guide the project. The data engineer prepares the data for use by the data scientist in model training. The data scientist must understand the data to train and test the model. The developer is responsible for model deployment and operationalization.
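The sketch below walks through those steps in miniature with scikit-learn, using its bundled Iris dataset as a stand-in for real business data; the role annotations map the code back to the process described above.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data preparation (the data engineer's role): load and split the data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Model training and testing (the data scientist's role).
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Deployment (the developer's role) would wrap `model` in an API or batch job.
```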
These days organizations are finding it hard to retain talent for their data science processes. Fueled by big data and AI, demand for data science skills is growing exponentially, according to job sites, while the supply of skilled applicants is growing at a slower pace. A KPMG CIO Survey of more than 3,600 technology leaders at companies across the U.S. showed that 46% of chief information officers see “big data and analytics” as the area most suffering from a shortage in the nation’s job market.
One way to address this shortage is by partnering with vendors who offer data science services. This approach provides in-house data science teams with resources, including industry knowledge, skills, and experience, to deliver great data products for data-driven decision making. Most vendors offer these services on a project basis. That is a great way to accelerate data work in large organizations, but it is hard to sustain over a long period of time due to cost, especially for small to midsize companies, and it can cause data initiatives to slow down or go undelivered.
The model I have found to be more effective over the long term, especially for small to midsize businesses, is the DSaaS (Data Science as a Service) model, where the client has access to the entire data science team on a monthly subscription basis. This model keeps costs down and takes away the headaches that go along with retaining a large data science team. Another reason I like this approach is that it aligns with the agile philosophy of delivery, which has a higher rate of success than the traditional waterfall approach. A few firms offer data strategy and engineering services in this format, such as datatelligent.ai, which delivers customized analytics and AI solutions.
Azure Data Lake allows you to perform analytics on your data and prepare reports. A data lake is a large repository that stores both structured and unstructured data. Data Lake Storage combines the scalability and cost benefits of object storage with the reliability and performance of a big data file system. Azure Data Lake stores all your business data and makes it available for analysis.
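As a minimal sketch, here is how a business file might be landed in Azure Data Lake Storage Gen2 using the azure-storage-file-datalake SDK. The account name, key, file system, and paths are hypothetical placeholders.

```python
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<your-account>.dfs.core.windows.net",
    credential="<your-account-key>",  # or an azure.identity credential
)

# File systems play the role of containers; directories organize the lake.
fs = service.get_file_system_client(file_system="business-data")
file_client = fs.get_file_client("sales/2020/10/sales_extract.csv")

# Structured and unstructured data alike can be stored this way,
# ready for analytics engines to read later.
with open("sales_extract.csv", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```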
The goal of cloud computing is to make running a business easier and more efficient, whether it’s a small start-up or a large enterprise. Every business is unique and has different needs, and to meet those needs, cloud computing providers offer a wide range of services. Cloud compute services, including virtual machines, containers, App Service, and serverless computing, offer application development and deployment approaches that, if applied correctly, can save time and money. Each service provides benefits as well as tradeoffs against the other options, and IT needs to have a good understanding of these compute services.
A virtual machine (VM) is an emulation of a physical computer; it offers more control, but that control comes with maintenance overhead.
Containers provide a consistent, isolated execution environment for applications. They are similar to VMs except they don’t require a guest operating system. Instead, the application and all its dependencies are packaged into a “container,” and a standard runtime environment is used to execute the app. This allows the container to start up in just a few seconds, because there’s no OS to boot and initialize; you only need the app to launch.
Serverless computing lets you run application code without creating, configuring, or maintaining a server. The core idea is that your application is broken into separate functions that run when triggered by some action, which makes it ideal for automated tasks. Each of these compute approaches is optimized for a specific use case.
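A minimal sketch of the serverless idea, assuming Azure Functions with the Python v1 programming model: a single function that runs only when its trigger fires, with no server for you to create, configure, or maintain. The HTTP trigger binding itself would be declared in the function’s function.json.

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The platform invokes this function on each HTTP request; scaling and
    # billing happen per execution rather than per server.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)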
The proliferation of SaaS (Software as a Service) has made the delivery of technology for the business easier, faster, and cheaper. SaaS is now a common system of record for organizations. This change has revolutionized the modern workplace and upended the traditional way of managing and securing IT services for the organization, bringing a completely new paradigm for IT teams on how to manage, secure, and support this new landscape.
Organizations must understand exactly how SaaS applications operate and interact with each other. That includes understanding what information needs to be centralized and discoverable, and building insights on the relevant data to increase operational efficiency. To reduce security risks and increase compliance, organizations must introduce automation where possible and apply analytics to operational data to avoid alert fatigue.
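As one possible sketch of that last point, the snippet below collapses a raw SaaS alert feed into deduplicated, ranked findings with pandas; the input file and its columns (timestamp, app, user, alert_type) are hypothetical.

```python
import pandas as pd

alerts = pd.read_csv("saas_alerts.csv", parse_dates=["timestamp"])

# Collapse bursts: keep one row per app/user/alert_type per hour.
alerts["hour"] = alerts["timestamp"].dt.floor("H")
deduped = alerts.drop_duplicates(subset=["app", "user", "alert_type", "hour"])

# Rank what remains so analysts review the noisiest sources first.
summary = (
    deduped.groupby(["app", "alert_type"])
    .size()
    .sort_values(ascending=False)
    .rename("distinct_hourly_alerts")
)
print(summary.head(10))
```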
A comprehensive data strategy, including centralization, discoverability, insights, action, automation, delegation, and auditability, is needed to fill the gaps introduced by today’s SaaS environments and to gain the level of control and clarity that is essential for properly securing the corporate environment.