Learning – Webcasts/Video

Learning for this week.

http://www.pass.org/24hours/2017/security/Schedule.aspx

http://sqlblog.com/blogs/damian_widera/archive/2017/04/20/sql-server-2017-all-sessions-from-microsoft-data-amp-are-online-complete-list-of-links.aspx

http://sqlblog.com/blogs/davide_mauri/archive/2017/05/01/pass-appdev-recording-building-rest-api-with-sql-server-using-json-functions.aspx

http://sqlblog.com/blogs/sergio_govoni/archive/2017/04/15/pass-business-analytics-marathon-march-2017-recordings-available.aspx

http://sqlblog.com/blogs/andy_leonard/archive/2017/03/31/the-recording-for-biml-in-the-enterprise-data-integration-lifecycle-is-available.aspx

http://sqlblog.com/blogs/andy_leonard/archive/2017/03/14/the-recording-for-the-ssis-catalog-compare-version-2-launch-event-is-available.aspx

Posted in Others, Webcast, What I learned today | Tagged , | Leave a comment

Azure CosmosDB SQL Server

As you may know, Microsoft already retrieves data from Hadoop through PolyBase and has extended external-language support beyond R (R and Python). Now Microsoft wants to make sure it covers every environment available in the market. Earlier I blogged about MongoDB, which developers favor for its document-store and container concepts; Cosmos DB has a similar architecture to MongoDB for horizontal data storage, containers, and documents, built on top of DocumentDB.

Microsoft has introduced many things recently (SQL Server 2016/2017):

SQL Server on Linux

SQL Server – PolyBase (Hadoop compatibility – data retrieval)

Azure SQL Server – CosmosDB – MongoDB

This indicates that Microsoft does not want to be isolated on the Windows platform, nor restricted to small-scale workloads… and it has been proving that it is doing great in this area. The only thing I would add is that we need subject-matter expertise (SME) for these newly introduced concepts: unless we know how to manage them, we cannot become experts in them.

Let's see.

Reference:

https://azure.microsoft.com/en-in/services/cosmos-db/

https://db-engines.com/en/system/Microsoft+Azure+Cosmos+DB%3BMicrosoft+SQL+Server%3BMongoDB

Posted in Azure, Cloud, NoSQL, Others, sql 2016, SQL Server 2017, SQLonLinux | Tagged , , , , | Leave a comment

vNext – It's SQL Server 2017

Yes, that's true: the new version of SQL Server will be released this year, and it will be called "SQL Server 2017". SQL Server can now also run on Windows, macOS (via Docker), and Linux.

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/sql-server-2017-community-technology-preview-2-0-now-available/

Cooooool.

Posted in Others, SQL Server 2017, vNext | Tagged , | Leave a comment

vNext has AG

vNext SQL Server on Linux now supports Always On Availability Groups for HA/DR.

https://blogs.technet.microsoft.com/dataplatforminsider/2017/02/17/sql-server-on-linux-mission-critical-hadr-with-always-on-availability-groups/

Posted in Disaster Recovery, High Avaliability, Others, SQLonLinux, vNext | Tagged , , | Leave a comment

SQL Agent on vNext – Linux

SQL Server runs on Linux through the SQL Platform Abstraction Layer (SQLPAL), which works like a virtualized Windows layer on Linux, so I think Microsoft should now be able to bring over the things we are used to doing on Windows Server.

On that note, with vNext CTP 1.4 Microsoft has introduced SQL Server Agent functionality on Linux.

https://blogs.technet.microsoft.com/dataplatforminsider/2017/03/17/sql-server-on-linux-running-jobs-with-sql-server-agent/

Posted in Others, SQLonLinux, vNext | Tagged , | Leave a comment

SQL Server on Linux

Yes, that is true: SQL Server is now on Linux.

https://www.microsoft.com/en-in/sql-server/sql-server-vnext-including-Linux

Download the public preview

https://docs.microsoft.com/en-us/sql/linux/

I am learning it and would love to write more blogs soon.

Posted in Others, sql 2016, SQLonLinux | Tagged , | Leave a comment

Future DBA – Big Data – MongoDB 2

In our earlier blog we gave an introduction to MongoDB. So how does MongoDB fit into big data? MongoDB has a concept called sharding, combined with replication: sharding uses a cluster-like configuration, and the data is load-balanced, i.e. distributed evenly across multiple shards according to a shard key.

If we compare this with Hadoop's HDFS, the config servers are the counterpart of the name node and a shard is like a data node. The difference is that in MongoDB the data is partitioned across multiple shards, whereas in a Hadoop system the data is replicated to multiple data nodes; MongoDB maintains redundancy separately, through replication.

The balancer makes sure data is distributed equally across all the shards; if the data is unbalanced, the balancer runs background processes to rebalance it.

Here the shard key plays a very important role.
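To make the idea concrete, here is a minimal Python sketch of hashed sharding. This is not the MongoDB implementation itself (mongos and the config servers do the real routing), and the shard names and the `customer_id` shard key are invented for illustration:

```python
import hashlib

SHARDS = ["shard-a", "shard-b", "shard-c"]

def pick_shard(shard_key_value):
    """Hash the shard key value and map it to one of the shards."""
    digest = hashlib.md5(str(shard_key_value).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Route a few documents by their shard key (customer_id here).
docs = [{"customer_id": i, "order": f"order-{i}"} for i in range(9)]
placement = {shard: [] for shard in SHARDS}
for doc in docs:
    placement[pick_shard(doc["customer_id"])].append(doc["order"])

for shard, orders in placement.items():
    print(shard, orders)
```

Because the hash of a given key value never changes, the same key always routes to the same shard, which is exactly why choosing a good shard key matters so much.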

I will write more on it later.

Posted in BIGDATA, Future DBA, Others | Tagged , | Leave a comment

Future DBA – Big Data – MongoDB 1

We have discussed Hadoop and its HDFS management tools for big data systems, which scale horizontally for data distribution and can be used for data warehousing and for managing big/heavy data.

There is another big data system called MongoDB, which is also open source and a very developer-friendly NoSQL system. As you know, an RDBMS has a predefined record structure with a static row layout, so if developers want to change the metadata by adding a column or changing a column's data type, the change in turn has to be applied to all the existing data and its related indexes. MongoDB, in contrast, is a document-oriented NoSQL database.

MongoDB treats a record as a document: you can add fields to it dynamically, and a developer does not have to supply every field in each record/document. That is why developers like this database system.
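As a small sketch of that flexibility, here are plain Python dicts standing in for BSON documents (the collection and field names are invented for illustration):

```python
# Two documents in the same "collection" with different fields:
# an RDBMS table would force one fixed column layout on both.
users = [
    {"_id": 1, "name": "Asha", "email": "asha@example.com"},
    {"_id": 2, "name": "Ravi", "phone": "555-0100", "tags": ["dba", "nosql"]},
]

# Each document carries only the fields it actually needs;
# adding a new field later touches no other document.
for user in users:
    print(sorted(user.keys()))
```

In an RDBMS, giving `users` a `tags` column would mean altering the table for every existing row; here the second document simply carries the extra fields on its own.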

MongoDB is written in C, C++, and JavaScript.

https://en.wikipedia.org/wiki/MongoDB

Posted in BIGDATA, Others | Tagged , | Leave a comment

Future DBA – Hive Big Data 2

In our last blog in the Future DBA series we discussed the Hadoop HDFS system. As we know, managing HDFS directly is quite difficult, so with the help of vendors (Cloudera/Hortonworks/MapR) we can integrate the tools/utilities into a GUI and manage it easily and efficiently.

This HDFS data can be retrieved and inserted using the Hive utility, which gives us access to HDFS data in a SQL-like way, so we can create and access the data with queries much like SQL.

Hive requires a metastore, which can be any RDBMS (an open-source one such as MySQL or PostgreSQL, or any other RDBMS); it stores Hive's metadata, while the actual data is stored in HDFS.

Hive uses the MapReduce process for retrieving data from HDFS.
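As a toy illustration of the MapReduce idea that Hive compiles queries down to, here is the classic word count in pure Python (no Hadoop involved; the input lines are made up):

```python
from collections import defaultdict

lines = ["sql on linux", "sql on windows", "hive on hadoop"]

# Map phase: emit (word, 1) pairs from every input line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle phase: group the pairs by key (the word).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: sum the counts for each word.
counts = {word: sum(values) for word, values in groups.items()}
print(counts)
```

Hive does essentially this at cluster scale: mappers read HDFS blocks in parallel, the framework shuffles by key, and reducers aggregate, which is why SQL-like `GROUP BY` queries map onto it so naturally.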

So as DBAs we can work on HDFS data efficiently, much like in our RDBMS.

Posted in BI, BIGDATA, Future DBA, NoSQL | Tagged , , | Leave a comment

Future DBA – Hadoop Big Data 1

In our last blog on Hadoop big data we discussed Hadoop and the tools/utilities used to connect to HDFS. Continuing from that: Hadoop is mostly for big data, and the data in Hadoop is stored in HDFS, where the name node contains pointers/addresses of the data locations and the data nodes contain the actual data. There are multiple data nodes, and the data on a data node is replicated to multiple data nodes for redundancy; accessing the data from multiple nodes is also faster.

We can also keep one node aside as a backup node. Hadoop is used for data-warehouse purposes, and since it is big data, the data stored in it is bulky/huge and used mainly for reads. So whether we use Hive, Impala, or any other tool, HDFS data is mostly used as a READ-ONLY data warehouse: once the data is inserted into HDFS it is used to generate reports and to fetch data. There are mappers to read the data on the data nodes.

*HDFS/big data is effective for data reads and does not work best for UPDATEs.
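A minimal sketch of the replication bookkeeping described above, in pure Python (the node names and the replication factor of 3, which matches the HDFS default, are assumptions for illustration):

```python
import itertools

DATA_NODES = ["dn1", "dn2", "dn3", "dn4"]
REPLICATION_FACTOR = 3

def place_blocks(blocks):
    """Name-node-style bookkeeping: map each block to the data
    nodes that hold a replica of it."""
    placement = {}
    rotation = itertools.cycle(range(len(DATA_NODES)))
    for block in blocks:
        start = next(rotation)
        # Pick REPLICATION_FACTOR distinct nodes, round-robin.
        placement[block] = [
            DATA_NODES[(start + i) % len(DATA_NODES)]
            for i in range(REPLICATION_FACTOR)
        ]
    return placement

table = place_blocks(["blk_1", "blk_2", "blk_3"])
for block, nodes in table.items():
    print(block, "->", nodes)
```

Each block ends up on three distinct nodes, so reads can be served from any replica, while an update would have to touch every replica, which is one reason HDFS favors read-heavy workloads.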

Posted in BIGDATA, Others | Tagged , | Leave a comment