NetApp Storage CVO (Cloud Volumes ONTAP) in Azure

NetApp volumes in the cloud are exciting, especially the NetApp ONTAP product. ONTAP is an abbreviated form of Open Network Technology for Appliance Products.

ONTAP is a separately licensed option among NetApp's storage offerings in the public cloud, and it uses separate storage appliances in the cloud.

ONTAP is great at providing low-latency, high-throughput I/O operations at scale and speed for block storage, typically used for high volumes of data transactions with millisecond responses. ONTAP has other exciting offerings such as data tiering, and it simplifies cloud data management behind a single pane of glass. Below are some key facts captured for ONTAP in the cloud:

  • Simplified storage management with easy setup via the Azure Portal, CLI, PowerShell, or REST API
  • Multi-protocol support (NFS/SMB/CIFS/iSCSI)
  • Three performance tiers – Standard, Premium, and Ultra – changeable on the fly with a simple click
  • Data encryption at rest, RBAC, AD authentication, and network-based ACLs available
  • HIPAA- and GDPR-compliant, with 99.99% availability by default
  • Easy management of storage
  • ONTAP features thin/dynamic provisioning, data compression, deduplication, data tiering, flexible cloning, snapshot copies, and ease of management via GUI or RESTful API, reducing the cost footprint
  • ONTAP manages NetApp block storage volumes with multi-instance read/write (typical for K8s StatefulSet workloads), cloning, data protection, data replication across AZs, and disaster recovery across Azure regions for HA
  • ONTAP supports the iSCSI protocol for block storage and NFS/SMB for file storage
  • Non-disruptive (zero performance penalty for running applications), instantaneous point-in-time snapshots, integrated with applications and databases
  • Cross-site SnapMirror data replication for backup, DR, restore, cloning, and archive
  • Data tiering to move hot block-storage data to object storage and back again to save cost
  • Hybrid-cloud data movement and easy management via NetApp Cloud Manager
  • Multi-AZ, dual-node HA solution, designed for zero data loss (RPO = 0) and under-60-second recovery

Let's see some demos of how the ONTAP options work, assuming you have already picked the ONTAP product option from the Azure Marketplace offerings page.

HA Cluster Deployment in 20 Minutes

API Driven
Cross region deployment
Multi Protocol Provisioning
Adding new volume
Choosing Data tiering / retention
IQN copy from Windows VM
Adding a volume for the iSCSI mapping to the host and making it available

Initialization of Disk
Replication Setup
Target Volume for Replication
Choose replication policy
Volume replication
Primary to DR site replication
Replication features

And that's it!

Delta Sharing, Open-Sourced by Databricks

A vendor-agnostic open protocol for sharing data in real time, it simplifies secure cross-organization data sharing and makes that data available for analysis by an array of tools, typically BI tools.
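Under the hood, a recipient points pandas, Spark, or a BI connector at a share through a small JSON profile file. A sketch of its shape, with placeholder endpoint and token values (check the Delta Sharing protocol docs for the authoritative format):

```json
{
  "shareCredentialsVersion": 1,
  "endpoint": "",
  "bearerToken": "<token-issued-by-the-data-provider>"
}
```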

Below is an excerpt from the Spark + AI Summit of 2021; hopefully it summarizes the current capabilities of Delta Sharing.


Sharing the data with a recipient

Data Sharing

For a data analyst to connect via a Pandas DataFrame

For an analyst to connect to Delta Sharing via BI tools like Tableau

Access Delta Sharing data from Power BI

Databricks & Perl in Azure Cloud

Really! Probing Perl, "the legacy", to see how compatible it is with a modern "5th generation" data platform! I guess this is a terrible subject to pick.
Yes, this thread is intended to criticize Perl programming and discourage using it. Perl is unique and exotic, but it has not attracted the developer community the way mainstream languages have for the past couple of years. The demise has not been sudden, but gradual and painful for the Perl community.

The objective of this thread is to prove that you can use Perl to connect to the modern Delta Lakehouse platform Databricks, not in a simple way, but with all kinds of highly discouraging bells and whistles. This is an assessment with facts, so let's dive through it:

Perl usability in Azure

Perl can be used in Azure in two ways:

  • using the Perl Azure SDK – not recommended, as it is not yet supported by Microsoft. It is alpha-quality code from the open-source community, not yet stable, and hence not available to the CPAN (Comprehensive Perl Archive Network) community. The last GitHub commit was two years ago.

       For reference : Azure Perl SDK GitHub

  • via the Azure REST API – the recommended approach, which provides service endpoints that support sets of HTTP operations (methods) allowing create, retrieve, update, or delete access to the service's resources: Azure REST API Index

       For reference: Azure REST API GitHub

Connection to Databricks

  • There is no native or direct out-of-the-box connectivity, unlike the Perl DBI modules that connect directly to data sources such as MySQL or Oracle
  • The connection must happen via an external JDBC or ODBC connector to the Databricks Spark framework
  • Unlike Python, R, Java, and Scala, which are directly supported by Databricks notebooks, Perl is not supported, and there is no plan to add that support
  • Perl can still be used to create Databricks resources in Azure and manage them via the Azure API, but interacting with resources (clusters, jobs, notebooks) in a Databricks workspace requires the Databricks REST API

Database access using Perl DBI and SQL


The Perl scripts use DBI, which in turn uses the appropriate database driver (e.g. DBD::Oracle for Oracle, DBD::Pg for PostgreSQL, and DBD::SQLite to access SQLite).

Those drivers are compiled together with the C client libraries of the respective database engines. In the case of SQLite, the whole database engine is embedded in the Perl application.

As an example, for Oracle connectivity:

Connecting Databricks from Perl (Using JDBC)

Thrift JDBC/ODBC Server (aka Spark Thrift Server or STS) is Spark SQL's port of Apache Hive's HiveServer2, allowing JDBC/ODBC clients to execute SQL queries over the JDBC and ODBC protocols on Apache Spark.

Spark Thrift Server converts ODBC/JDBC calls into a format that can be distributed to and processed efficiently by a highly parallel engine like Spark.

With Spark Thrift Server, business users can work with their Shiny Business Intelligence (BI) tools, e.g. Tableau or Microsoft Excel, and connect to Apache Spark using the JDBC/ODBC interface. That brings the in-memory distributed capabilities of Spark SQL’s query engine (with all the Catalyst query optimizations) to environments that were initially “disconnected”.



  • A virtual machine in Azure (either Windows or Linux) – in this case I am using an Ubuntu 18.04 Bionic image
  • Perl 5.8.6 or higher
  • DBI 1.48 or higher
  • Convert::BER 1.31
  • DBD::JDBC module
  • A Java Virtual Machine compatible with JDK 1.4 or above
  • A JDBC driver – the Simba JDBC driver, as recommended and supported by Databricks
  • log4j 1.2.13 extension

Install Perl and associated Dependencies in VM

There are many ways to install Perl, but it is best to use Perlbrew. Execute all of the following after SSHing to the VM in the cloud.

dxpadmin@dxptestvm:~$ \curl -L | bash


$wget --no-check-certificate -O - | bash

Note: install gcc, csh, and all other dependent packages required for the above if perlbrew fails.

Now Initiate Perl :

perlbrew init
perlbrew install perl-5.32.1
perlbrew switch perl-5.32.1

Install all prerequisites for the DBD::JDBC module

$sudo apt install default-jre
$sudo apt install default-jdk
$java -version
openjdk version "11.0.10" 2021-01-19
$javac -version
javac 11.0.10

Download DBD::JDBC module from CPAN:


Install Dependent Module:

$cpan -i Convert::BER

Unarchive it, then edit the file and property below:

$vi /home/dxpadmin/DBD-JDBC-0.71/ = ALL

Correction of DBD module and Makefile :

Edit the 03_hsqldb.t file inside the extracted DBD archive directory (/home/dxpadmin/DBD-JDBC-0.71/t), search for 'exit 1' at the end of the file, comment it out, save, then run:

$perl Makefile.PL
$make
$make test

Note: for a different version of Perl or a different OS package you might need gmake or dmake, whichever is compatible.

If required use this:

$cpan -f -i DBD::JDBC

SIMBA SPARK JDBC and Connectivity with Databricks

Download SimbaSpark JDBC Driver 4.2 


Note: Simba 4 / 4.1 / 4.2 are all supported for this connectivity. For the Simba JDBC driver's Spark support, look here.

Start the JDBC Proxy Server as below:

$export CLASSPATH=/home/dxpadmin/DBD-JDBC-0.71/dbd_jdbc.jar:/home/dxpadmin/DBD-JDBC-0.71/t/hsqldb/log4j-1.2.13.jar:.:/home/dxpadmin/simba/SparkJDBC42.jar:$CLASSPATH
$java -Djdbc.drivers=$DRIVERS -Ddbd.port=9001 com.vizdom.dbd.jdbc.Server

It will then accept inbound connections, as below:

Perl code to connect to Databricks:
use strict;
use DBI;
my $user = "token";
my $pass = "dapidbf97bbmyFAKEpassautha564de4d68055";
my $host = "";
my $port = 9001;
my $url = "jdbc:spark://"; # Get this URL from the JDBC string of the Databricks cluster. URL encoding is VERY important, else it leads to connection failures with weird errors
### my $url = "jdbc:spark://;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/8532171222886014/1005-143428-okra138;AuthMech=3";  # (Not use this as IT WILL GENERATE Authentication ERROR with SSL / Certificates)
my %properties = ('user' => $user,
'password' => $pass,
'' => $host,
'host.port' => $port);
my $dsn = "dbi:JDBC:hostname=localhost;port=$port;url=$url";
my $dbh = DBI->connect($dsn, undef, undef,
{ PrintError => 0, RaiseError => 1, jdbc_properties => \%properties })
or die "Failed to connect: ($DBI::err) $DBI::errstr\n";
my $sql = qq/select * from products/;
my $sth = $dbh->prepare($sql);
$sth->execute();
my @row;
while (@row = $sth->fetchrow_array) {
print join(", ", @row), "\n";
}
$dbh->disconnect;
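The URL-encoding concern flagged in the comments above can be illustrated on its own. This Python fragment (used here only to demonstrate the encoding; the option string is taken from the commented-out URL) percent-encodes the option separators so they survive inside the semicolon-delimited DBI dsn:

```python
from urllib.parse import quote

# The options Databricks appends after the host in its JDBC string
# (cluster path taken from the commented-out URL above).
options = ("transportMode=http;ssl=1;"
           "httpPath=sql/protocolv1/o/8532171222886014/1005-143428-okra138;"
           "AuthMech=3")

# Keep '=' readable but encode ';' and '/', so the proxy does not split
# the url= attribute at the wrong places. (A sketch of the idea; check the
# DBD::JDBC docs for the driver's exact escaping rules.)
encoded = quote(options, safe="=")
print(encoded)
```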

In the above example I am running a simple select from the Products table in the default schema of the Databricks workspace. Let's look at the table data:

Now let's execute the Perl script:

This is what the JDBC proxy server logged (ignore the LOG messages, as per the Simba documentation):

An Azure Windows VM requires setting up Strawberry Perl; download the DBD::JDBC module and copy the extracted DBD::JDBC content into the Strawberry directory: C:\Strawberry\perl\vendor\lib\DBD

There is no need to run Makefile.PL; it should be ready to start the JDBC proxy server and accept connections.

This is how it will look like in Azure Windows VM

Now pick the Spark driver and add the JDBC connection string (as provided by Databricks) to see a successful connection (add the string without URL encoding and it will work here):

NOTE: An attempt with a similar setup from a local Windows machine had no success because of certificate issues.


If the URL encoding is not done properly in the 'url' string inside the Perl code above, the JDBC proxy server will throw the error below:

And this in turn leads to very confusing Authentication / SSL / Certificate / SocketException errors like the ones below:

Though various other methods exist to connect via Simba, such as No Authentication, OAuth 2.0, LDAP user/password, and KeyStore with SSL, I had no success with any approach except simple Databricks auth with a token.

Challenges with Perl Language

Popularity :

The programming index below shows that Perl's standing diminishes year by year.

<1% in Programming community Index

Community Update

Slow community updates. Perl SDK updates on top of Azure or any other cloud lag by 2-3 years, and there is no Microsoft support for the Perl SDK yet.

Usability / Sustainability

Not developer friendly: a high learning curve and a more complex syntax and programming paradigm than a language like Python.

This puts immediate long-term sustainability risk on putting Perl in front of any modern language to talk to a database.

The Perl community cannot attract new developers and beginner users the way Python successfully has

Community Support

Lacking more every day, including a lack of support on popular developer channels like Stack Overflow. I had a couple of Stack Overflow threads on Perl with very few views and a reply from just one person, which was not helpful either. The community is no longer attractive to developers


Use a modern language with modern database support for better compatibility, better adoption, and the availability of easy and fast knowledge. This helps you scale and keeps the business sustainable for years.

Given the level of trouble I had with Perl, I never felt it was worth trying the Perl ODBC module for DB connectivity.

So if you love pain with your programming platform and zero to no help, sure, go ahead, but get ready to end up at a roadblock!

Gospel of Azure Storage comparisons

Storage is the backbone of any cloud IaaS, PaaS, or SaaS solution, and it is typically managed by the cloud provider. Identifying the capabilities of the different storage options, differentiating them, and picking the right one is key to the success of a technology implementation. The differences are listed here at a high level; at a low level there could be many more.

Note: the scope below is to lay out the differences among the options offered by Microsoft Azure only, not any third-party storage options like NetApp, Pure Storage, or MinIO.

Azure DBaaS Family offerings & comparison – All in one place

With technology evolution in the cloud, we see a variety of options for choosing the right database platform. Deciding what is right or wrong is where the architect's goal lies in helping the business adopt the technology. As we know, there are several Azure Database-as-a-Service options available, but you will hardly find the matrix below anywhere else, coupling all the services and offerings with details of the features around them.

So let's take a look. This information was personally collected and checked, and much of it was tested and verified against recognized sources. If it is helpful, feel free to leave feedback in the comments. A quick note: some of the areas and features below will change over time; this data was recorded around March 2021 and is subject to change as new and updated offerings roll out.

MySQL on Premise to Azure Cloud Migration

The objective is to analyze offline migration of a MySQL database instance from on-premise to the cloud without the help of Azure ExpressRoute, a dedicated VPN gateway, or private endpoints; to understand the offline migration process, how much data it can handle, and how to troubleshoot the different issues; and to weigh the pros and cons of one data export/import process versus another to pick the right choices in the cloud (managed vs. VM). This drives the cost assessment and surfaces the limitations of certain choices. The right choice for a big bulk migration is definitely a DMS solution, with minimal downtime over dedicated connections, but those steps are out of scope for this thread.


  1. MySQL DB Instance in Cloud
  2. Identify Sample Data to be migrated from On Premise
  3. Configure Cloud / Network / Database
  4. Export Data dump / Import Data


This PoC will try to achieve following :

  • Create MySQL Database instance in Cloud / Pick right choice  
    • Azure VM using Marketplace MySQL Image
    • Azure Single Server MySQL Managed Instance
  • Pick the right option for MySQL Migration
    • Using mysqldump or PHPMyAdmin (Offline Backup/ Restore)
    • Using MySQLWorkBench ( Offline)
    • Using DMS  (Continuous Sync with Minimal Downtime)
    • Using mydumper / myloader tool
      • mydumper to Dump the Database
      • myloader to load to target MySQL instance
  • Understand Network Config/ Firewall/Storage and other dependent setup required for Offline migration
    • Network Inbound / Outbound Rule for Azure VM
    • Network Inbound / Outbound Rule for MySQL DB Access
  • Understand Data Volume , Latency for generating data dump and Transfer over to Cloud Landing Storage
  • How to mount and use Cloud Landing shared location and get ready for Data Import
  • Import options for loading Data dump
  • Produce Benchmark options / Analysis to improve performance
  • Cost Assessment and Comparative cost analysis with On Premise MySQL instances

Platform : 

  • On Premise MySQL Standby Slave Instance (Production  ‘mvh’ database) and ‘mvh_test’ Test Database
  • Cloud Azure
  • Azure VM
  • Any other on-prem VMs for SCPing files to the cloud

Note: I was following a Microsoft document and failed to export and import through MySQL Workbench in many ways, with many challenges, which I have reserved for a separate thread/blog to explain.

1.0 Setup Cloud VM for MySQL

Creating Virtual machine with MySQL 5.7 Ubuntu Image

Enable the inbound port for access (allowing any IP on port 22 is not recommended for production workloads; target only specific inbound IPs)

Attach a separate disk from the OS disk for the MySQL data workload (follow the additional steps outlined later for the OS to recognize the disk)

Install mydumper 

Refer to this page about the mydumper tool and this GitHub thread

ssh dxpadmin@<publicIP>  and then give the password (configured during VM setup)

Find the OS version and Image :  cat /etc/os-release

If the prebuilt image picked is the Ubuntu MySQL 5.7 image, SSH to this box from either a local machine or any TC box as:

and install below packages:

$wget$(lsb_release -cs)_amd64.deb

$sudo dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb 

If you hit the error below, follow these commands in sequence:

root@tcmysql:~# dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb

dpkg: error: dpkg status database is locked by another process


root@tcmysql:~# lsof /var/lib/dpkg/lock


dpkg    14397 root    3uW  REG    8,1        0 56697 /var/lib/dpkg/lock

root@tcmysql:~# ps cax | grep 14397

14397 pts/0    Ss+    0:07 dpkg

root@tcmysql:~# kill 14397

root@tcmysql:~# ps cax | grep 14397

root@tcmysql:~# sudo rm /var/lib/dpkg/lock

root@tcmysql:~# sudo dpkg --configure -a

If the prebuilt image is RedHat/CentOS, use:

yum install
yum install

1.1 Extract the Sample dump from MySQL Database

Extract the rate fact table (current and historical) data from the MySQL production slave instance

Install the mydumper/myloader utility on any TC host, which can be used as a landing zone for the data:

RedHat / Centos

yum install
yum install

Ubuntu / Debian

wget$(lsb_release -cs)_amd64.deb
dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb

Log in to any host that can reach the MySQL database and execute the command below (see usage here):

sudo mydumper -B mvh -T hotel_rate_wes2 --host=bidbhotl01.gain.tcprod.local --user=dbuser --password=xxxx --rows=1000000 --outputdir=./hotel_rate_wes2/ --compress --no-locks --build-empty-files --threads=256 --compress-protocol --verbose=3 -L mydumper-logs_hotel_rate_wes2.txt &

This will create the incremental dump files for the current-rate (hotel_rate_wes2) table from the mvh schema.

Execute a similar command to dump files for the historical changed-rate (hotel_rate_ih_wes2) table from the mvh schema.

A benchmark from the mydumper tool is below:

Table Name (Production) | Total Records | Dump Time | Dump Size | Threads Used | Files Generated
hotel_rate_wes2         | 23,360,817    | 3 mins    | 19 GB     | 256          | 25
hotel_rate_ih_wes2      | 126,273,328   | 12 mins   | 320 GB    | 256          | 130
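As a sanity check on the benchmark above, the implied dump throughput can be computed (rough ballpark, using a 1 GB = 1024 MB conversion):

```python
# Approximate dump throughput from the mydumper benchmark above.
def dump_rate_mb_per_s(size_gb: float, minutes: float) -> float:
    """Convert a dump size (GB) and duration (minutes) to MB/s."""
    return size_gb * 1024 / (minutes * 60)

print(round(dump_rate_mb_per_s(19, 3)))    # hotel_rate_wes2
print(round(dump_rate_mb_per_s(320, 12)))  # hotel_rate_ih_wes2
```

So the historical table dumped roughly four times faster per second than the current table, which lines up with its much larger parallel file count.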

1.2 Create Azure Storage account Fileshare

This is one of the preconfigured storage accounts.

Inside "dxpfileshare", create a "data" directory.

Either create the subdirectory "mvh_hotel_rate_wes2" using the Azure Portal, or connect to Azure Cloud Shell and create the directory to hold the data dump files.

1.3 Mount Storage Account Fileshare inside Azure VM

This step will remote-copy the dump files generated by the mydumper tool over to cloud storage

dpaul@<onpremvmhost> ~ $ ssh dxpadmin@<vmhostip>
dxpadmin@tcmysql:~$ sudo su -

In the Storage Account in Azure, copy the code to mount the file share inside your virtual machine

Execute below commands inside Azure VM (SSH to the VM box)

sudo mkdir /mnt/dxpfileshare
if [ ! -d "/etc/smbcredentials" ]; then
    sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/dxppocstorageaccount.cred" ]; then
    sudo bash -c 'echo "username=dxppocstorageaccount" >> /etc/smbcredentials/dxppocstorageaccount.cred'
    sudo bash -c 'echo "password=<replaceme>" >> /etc/smbcredentials/dxppocstorageaccount.cred'
fi
sudo chmod 600 /etc/smbcredentials/dxppocstorageaccount.cred

sudo bash -c 'echo "// /mnt/dxpfileshare cifs nofail,vers=3.0,credentials=/etc/smbcredentials/dxppocstorageaccount.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs // /mnt/dxpfileshare -o vers=3.0,credentials=/etc/smbcredentials/dxppocstorageaccount.cred,dir_mode=0777,file_mode=0777,serverino

1.4 Transfer the file from OnPrem TC Host to Cloud Storage Fileshare

From the local TC host, transfer those files over to the storage account:

dpaul@<host> /u00/dump $ nohup sh -c 'date;scp -r hotel_rate_wes2/ dxpadmin@<azure VM Host>:/mnt/dxpfileshare/data/1;date' 2>&1 > remote_transfer.log &

dpaul@<host> /u00/dump $ nohup sh -c 'date;scp -r hotel_rate_ih_wes2/ dxpadmin@<azure VM Host>:/mnt/dxpfileshare/data/;date' 2>&1 > remote_transfer_hotel_rate_ih_wes2.log &

Note: for the above commands to work, generate an SSH key on the TC host and add it to the authorized keys inside the Azure VM host. This stops the password prompts.

1.5 Network and DB access

Network Inbound rule change

To give the VM host access to the MySQL database, inbound port 3306 needs to be allowed in the Network options of the VM.

Note: MySQL DB port 3306 is exposed to the Internet here. Exposing database ports to the Internet is not recommended; expose them only to your front-end tier inside your virtual network.

This practice is applicable for testing purposes only. For a DB migration, a dedicated private endpoint is advised.

DB Change

Grants are required for the Azure OS users 'root' and 'dxpadmin' (the user created during VM creation) to connect to the MySQL service on port 3306. Note this is also required to run myloader to load data.

To do this login to MySQL as root user : 

[root@tcmysql data]# mysql -u root -p

mysql> CREATE USER 'dxpadmin'@'%' IDENTIFIED BY '<password>';

mysql> GRANT ALL ON *.* to dxpadmin@'' IDENTIFIED BY '<password>';



mysql> exit

1.6 MySQL DB Config changes

Increase number of DB Connection’s

This is required to avoid the error "Failed to connect to MySQL server: Too many connections" when running myloader with more threads.

Edit the /etc/my.cnf file to add the parameter below under the [mysqld] section:

max_connections = 512

Disable Strict Mode

This is required to avoid the error "Invalid default value for <Date> columns". Refer to the image below related to this error while loading data:

For example:

** (myloader:13067): CRITICAL **: 16:29:20.016: Error restoring mvh_test.currency from file mvh_test.currency-schema.sql.gz: Invalid default value for ‘last_update_dt’

** (myloader:2728): CRITICAL **: 00:36:22.806: Error restoring mvh_test.hotel_rate from file mvh_test.hotel_rate-schema.sql.gz: Invalid default value for                  ‘arv_dt’

** (myloader:2728): CRITICAL **: 00:36:20.940: Error restoring mvh_test.hotel_prop_data_source_note from file mvh_test.hotel_prop_data_source_note-schema.                 sql.gz: Invalid default value for ‘create_dttm’

Edit the /etc/my.cnf file to add the parameter below under the [mysqld] section:
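The parameter itself did not survive formatting in the original post. A hedged sketch of a typical strict-mode override in /etc/my.cnf (the exact sql_mode the author used is an assumption; dropping STRICT_TRANS_TABLES and the zero-date restrictions is what lets legacy zero-date defaults load):

```ini
[mysqld]
# Drop STRICT_TRANS_TABLES / NO_ZERO_DATE so legacy date defaults load
sql_mode = "NO_ENGINE_SUBSTITUTION"
```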


Change default DB packets/ throughput and other parameters

These are performance-tuning parameters.

Edit the /etc/my.cnf file to add the parameters below under the [mysqld] section:

innodb_buffer_pool_size = 40G
max_allowed_packet = 900MB
innodb_read_io_threads = 16
innodb_write_io_threads = 32
innodb_purge_threads = 2
innodb_buffer_pool_instances = 16
innodb_log_buffer_size = 512M
innodb_log_file_size = 1G

Restart the MySQL database for the above changes to take effect.

For CentOS/Fedora used : 

[dxpadmin@tcmysql~]$ sudo systemctl start mysqld
[dxpadmin@tcmysql ~]$ sudo systemctl stop mysqld
[dxpadmin@tcmysql etc]$ systemctl status mysqld.service

1.7 Attach and Prepare Datadisk Ready for OS and VM

STEP 1 – Attach disk to existing VM

This can be done in two ways. One is via the Disk section of the Azure VM in the Azure Portal, adding a new LUN, which we already did at 1.4.

The other way is to execute the Azure CLI code below:

az vm disk attach \
--resource-group myResourceGroupDisk \
--vm-name myVM \
--name myDataDisk \
--size-gb 128 \
--sku Premium_LRS

STEP 2 – Prepare the Data Disk

As you can see, though we attached a 2 TB data disk in STEP 1, it is still not showing up here:

[root@tcmysql data]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 9.0M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda2 30G 7.0G 23G 24% /
/dev/sda1 494M 65M 430M 13% /boot
/dev/sda15 495M 12M 484M 3% /boot/efi
// 2.0T 349G 1.7T 18% /mnt/dxpfileshare
/dev/sdb1 126G 2.1G 118G 2% /mnt/resource
tmpfs 6.3G 0 6.3G 0% /run/user/1000

So let's execute the steps below to make it available:

  • Create an SSH connection with the virtual machine. Replace the example IP address with the public IP of the virtual machine.
ssh dxpadmin@<Public IP Address of Azure VM>

  • Partition the disk with parted.

sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%

  • Write a file system to the partition by using the mkfs command. Use partprobe to make the OS aware of the change.
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
  • Mount the new disk so that it is accessible in the operating system.
   sudo mkdir /datadrive && sudo mount /dev/sdc1 /datadrive
  • The disk can now be accessed through the /datadrive mountpoint, which can be verified by running the df -h command.
df -h | grep -i "sd"

The output shows the new drive mounted on /datadrive.

Filesystem Size Used Avail Use% Mounted on

/dev/sda1        30G   7.0G   23G   24% /
/dev/sda15       495M  12M  484M   4% /boot/efi
/dev/sdb1        126G   2G   118G   1% /mnt/resource
/dev/sdc1        2.0T     2.0T   1G   1% /datadrive
  • To ensure that the drive is remounted after a reboot, it must be added to the /etc/fstab file. To do so, get the UUID of the disk with the blkid utility.
sudo -i blkid
  • The output displays the UUID of the drive, /dev/sdc1 in this case.

/dev/sdc1: UUID="33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e" TYPE="xfs"

  • Open the /etc/fstab file in a text editor as follows:
sudo nano /etc/fstab
  • Add a line similar to the following to the /etc/fstab file, replacing the UUID value with your own.

UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,nofail 1 2

  • When you are done editing the file, use Ctrl+O to write the file and Ctrl+X to exit the editor.

Now that the disk has been configured, close the SSH session. Finally it will look like this:

Change VM Disk Setting for performance

While the VM is stopped, make changes to the data disk's Advanced settings in the Azure Portal

1.8 Move MySQL data directory to Azure Data Disk

STEP 1 – Check current MySQL data directory and Stop services

$ mysql -u root -p
mysql > select @@datadir;

| @@datadir       |
| /var/lib/mysql/ |
1 row in set (0.00 sec)

mysql > exit
$ sudo systemctl stop mysqld
$ sudo systemctl status mysqld

STEP 2 – Use rsync to copy data directory to Azure DD and backup

$sudo rsync -av /var/lib/mysql /datadrive
$sudo mv /var/lib/mysql /var/lib/mysql.bak

STEP 3 – Override MySQL Config file

$ sudo vi /etc/my.cnf

. . .
. . .
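The overridden values were elided above. Based on the /datadrive data-disk mount prepared in 1.7, a typical override would look like this (a sketch; the paths are assumptions inferred from the earlier steps):

```ini
[mysqld]
datadir = /datadrive/mysql
socket = /datadrive/mysql/mysql.sock

[client]
port = 3306
socket = /datadrive/mysql/mysql.sock
```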


STEP 4 – Restart MySQL instance and check datadir changes

$ sudo systemctl start mysqld
$ sudo systemctl status mysqld

$ mysql -u root -p

mysql> select @@datadir;


| @@datadir         |
| /datadrive/mysql/ |
1 row in set (0.01 sec)

If you don't do the steps below, MySQL will not start.

STEP 5:  On CentOS or RedHat, modify SELinux settings. On Ubuntu or Debian, modify the AppArmor settings

When you copy the data directory or use rsync, the SELinux context is lost (note: if you move the directory instead, it may stay intact).

If SELinux is running in enforcing mode, you need to persuade it to allow mysqld to use the non-default datadir. Alternatively, put SELinux into permissive mode, or disable it entirely.

Run sestatus to see which mode it’s currently running in.

[root@tcmysql data]# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31

Run grep mysql /var/log/audit/audit.log to see if SELinux has blocked any mysql file access attempts or similar.

(But note that this is fine if running in permissive mode, because then it doesn’t actually block anything, it just logs what it would have blocked if you were running in enforcing mode.)

Run setenforce permissive to put SELinux into permissive mode, though note you need to edit /etc/sysconfig/selinux (or /etc/selinux/config) to make the change permanent.

SELinux Context Type for MySQL

This is just a reference before and after mysql directory copy 

If we don't do the steps to change the file context type, mysqld will throw the error below and stop:

# cat /var/log/mysqld.log:
mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
[Warning] Can't create test file /var/lib/mysql/devdb..lower-test
/usr/libexec/mysqld: Can't change dir to '/var/lib/mysql/' (Errcode: 13)
[ERROR] Aborting
[Note] /usr/libexec/mysqld: Shutdown complete

STEP 6: Bounce the whole VM 

1.9 Load/Import Dump file using myloader

Prerequisite of myloader

Disable foreign-key and unique checks to expedite the loading/import process:

SET foreign_key_checks = 0;
SET unique_checks = 0;
SET GLOBAL innodb_stats_on_metadata = 0;

After Import set those variables back :

SET foreign_key_checks = 1;
SET unique_checks = 1;
SET GLOBAL innodb_stats_on_metadata = 1;

Execute myloader

With the directory structure in the state below, execute myloader, which will load mvh_test (all tables) and hotel_rate_wes2 (a single massive fact table with hotel rates):

$ myloader -s mvh_test -h <public IP address VM> -u dxpadmin -p <password> -d mvh_test -t 500 -C -v 3 -e 2> myloader_mvh_test.log &

$ myloader --host=<public IP address VM> --user=dxpadmin --password=<password> --database=mvh_test --directory=./hotel_rate_wes2/ --threads=500 --overwrite-tables --compress-protocol --verbose=3 -e 2> myload_hotel_rate_wes2.log &

Monitor Progress of the Data Loading

# Following gives approximate number of data files already restored
grep restoring <myloader-output-logs-path> | grep Thread | grep -v schema | wc -l
# Following gives total number of data files to be restored
ls -l <mydumper-output-dir> |  grep -v schema | wc -l
# Following gives information about errors
grep -i error <myloader-output-logs-path>

As the data import progresses, we can see the used space on the /datadrive Azure data-disk mount keep increasing, so we have successfully used an Azure data disk to store the MySQL database files.

2.0 Connect MySQL Cloud DB from Workbench & Query

The host address should be the VM's public IP address. Note that the inbound rule should allow your local machine's IP to connect to this VM (outlined in previous steps).

Also, to connect to MySQL on port 3306, your local IP address needs GRANT permission from the database itself (the process for GRANTing access to all or certain IPs is outlined above).

Execute sample query as below:

use mvh_test;

select
hr.rate_id as rid,
if(uncompressed_length( < 16000000,uncompress(, as data,
from hotel_rate_wes2 hr
where hr.shop_data_source = 'WEB'
and hr.guests = 1
and hr.shop_pos_cd = 'US'
and hr.arv_dt <= '2022-02-08'
and hr.arv_dt >= '2019-03-15'
and hr.prop_id in (253,290,291,362)
and hr.shop_currency_cd in ('SGD','USD')
and ((shop_dttm >= date_sub(now(), interval 7 day)) or (err_dttm >= date_sub(now(), interval 7 day)))
and hr.los = 1;


Synapse vs Snowflake vs Databricks in Azure

All three of these technologies provide a modern approach to cloud data warehousing, but each has a unique set of features to solve problems and poses unique challenges to work with. A modern technology platform for a big enterprise should not take a monolithic approach to data solutions without a clear understanding of the business use case, and a polyglot persistence architecture must be kept in mind when designing the data store.

It is hard to make the initial judgement about which data store to use for which purpose, which is why research, proof-of-concept work, and due diligence are required when architecting the data solution; this will help build the right things in the right way.

To understand the key differences I have tried to put all three technology comparisons together in one frame, at a very high level of differentiation; at a low level there could be thousands of other feature differences, which are out of scope for this thread at present. The differences captured below are as of this writing and subject to change as the products evolve.

Let's deep dive into it; I am happy to hear feedback/comments below:

Azure Local Zone Geo Redundancy – Connecting the dots

The objective is to simplify Azure's resiliency options by explaining Local Redundancy (LR), Zone Redundancy (ZR), and Geo Redundancy (GR). Redundancy offers a degree of high availability, and so addresses the SLA percentage for faults and PITR (point-in-time recovery) for disasters.

Redundancy is a key objective of the cloud paradigm, giving application compute and storage the agility to handle any disaster scenario with maximum flexibility, the lowest possible downtime, and the most cost-effective options, with minimal impact to the platform and infrastructure.

The diagram below tries to address the key standards of redundancy from an architecture standpoint. When we talk about redundancy we should always differentiate storage redundancy from compute redundancy. The cases stated below mostly apply to storage redundancy; compute redundancy varies across services and offerings.

For example, by-default managed services have inherent redundancy built in: data is either replicated synchronously three times in the primary region using LRS (locally redundant storage) and then replicated asynchronously to the secondary region as GRS (geo-redundant storage), or data is replicated synchronously across 3 AZs in the primary region using ZRS and then replicated asynchronously to the secondary as GZRS (geo-zone-redundant storage).
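As a back-of-the-envelope illustration of why replica counts matter (this is not Azure's official SLA or durability math; the probabilities are made up), independent copies multiply loss probabilities:

```python
# If one copy of the data is lost in a period with probability p, and the
# copies fail independently, all n copies are lost with probability p**n.
def durability(p_single_loss: float, copies: int) -> float:
    return 1 - p_single_loss ** copies

# Three synchronous copies (LRS-style) with an illustrative 1% single-copy
# loss probability.
print(durability(0.01, 3))
```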

In the above example we focus on deploying Azure resources in the US East region (EAST US). A partner geographical region exists, making a regional pair for geo-redundant storage (GRS).

The primary EAST US region comprises multiple Availability Zones (AZs) located on different hardware infrastructure. In this case EAST US has 3 AZs to provide zone redundancy. All AZs are connected through the Azure virtual network to allow synchronous replication. These AZs are logical entities combining physically separate datacenters.

Within one AZ there could be hundreds of connected Azure resources/services spread over one datacenter, across different floors of a building, or separated across hardware shelves/racks. That is how local redundancy is provided: LRS services are physically and logically close to each other, allowing minimal downtime for failures and maintenance patches.

Hopefully this simplifies some cloudy areas of redundancy and explains the importance of redundancy/replication in the cloud.