Downgrade a Tableau Workbook

We sometimes need to downgrade a Tableau workbook, for example when there is a version mismatch between server and desktop; the Tableau Desktop version must match the Tableau Server version before we can publish the workbook to the server.

Today I will show you a trick for downgrading a Tableau workbook. Note that it will not work if the workbook uses any feature that exists in the new version but not in the older one.

For example, if I want to downgrade from 9.0 to 8.3 and I used any of the latest 9.0 features while developing my charts, those features will not be supported in the downgraded workbook.

Do this at your own risk. Please make a copy of everything before you start.

Follow the steps below to downgrade a workbook.

Only a .twb (Tableau workbook) file can be downgraded this way, not a .twbx (Tableau packaged workbook).

  1. Open your .twb file in the Notepad++ editor.
  2. Find the new version value – for example, if the new version is 9.0, search for '9.0' in the file.
  3. Replace '9.0' with the older version – for example, to downgrade to 8.3, replace 9.0 with 8.3.
  4. Now we can open this workbook in 8.3, and after that we can save it as a .twbx (packaged workbook).
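Steps 2–3 are a plain text find-and-replace, so they can also be scripted. Below is a minimal Python sketch; the sample XML attributes shown are only illustrative, not Tableau's exact schema, so inspect your own file first and always work on a copy.

```python
import re

def downgrade_twb(xml_text, old_version="9.0", new_version="8.3"):
    """Replace every occurrence of the old version string in a .twb
    (XML) document. Hypothetical helper for illustration only;
    always run it on a copy of the workbook."""
    return re.sub(re.escape(old_version), new_version, xml_text)

# Illustrative input; real .twb files carry many more attributes.
sample = "<workbook version='9.0' source-build='9.0.0'>"
print(downgrade_twb(sample))
```

Because `re.escape` treats the dot literally, only exact `9.0` strings are touched.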

Hopefully this will save you some time.

Connect to the Tableau PostgreSQL Database and Create a Worksheet for Tableau Server Monitoring

Today we will connect to Tableau's PostgreSQL database and create a worksheet based on that database.

Tableau stores all of its server statistics in PostgreSQL.

It has three databases:

  • Workgroup
  • Postgres
  • Workgroup_Test

It has four users:

  • tableau – read-only user
  • readonly – read-only user
  • tblwgadmin – admin user
  • rails – for Ruby on Rails

First we need to change the password of the tableau user, which we will use to create worksheets from the Tableau PostgreSQL database.

Open a command prompt with administrator privileges, then change to the path below:

C:\Program Files\Tableau\Tableau Server\9.0\bin

Then run the command below to change the password of the tableau user in the PostgreSQL database.

Syntax:

tabadmin dbpass -u <username> <new password>

(-u specifies the user whose password is being changed; the last argument is the new password.)

Example:

tabadmin dbpass -u tableau Test@1234

 

Use the command below to restart Tableau Server:

tabadmin restart

Now the password of the tableau user is changed, and we can connect with this username and password.

Server – your server IP address or server name, for example 192.168.3.3

Username – tableau

Password – Test@1234

Port – 8060

Database: workgroup

Service – leave blank (if you connect from pgAdmin)

Now we need to grant full SELECT rights to the tableau user so we can use all the data in the PostgreSQL database.

For that we need the password of the tblwgadmin user.

Go to the path below – note that the ProgramData folder may be hidden.

C:\ProgramData\Tableau\Tableau Server\config

You will find the tabsvc.yml file there; it contains the passwords of tblwgadmin and the other users. Search for pgsql.adminpassword: in this file to get the password.

Now connect with this password using the pgAdmin utility, or use the psql command-line utility, to run the script below.

GRANT SELECT ON ALL TABLES IN SCHEMA public TO tableau;

After running this script, the tableau user has SELECT rights on all the tables in the PostgreSQL database.

Now download the PostgreSQL driver for Tableau and start creating worksheets.

I hope this post will be useful for you. I use this setup regularly and have built some very nice worksheets that help me track server activity and network traffic on the server.

 

Azure Machine Learning

Today, I will try to explain machine learning concepts in simple language so that we can learn machine learning by reading. Below are some important terms on the basis of which we can decide whether our predictive result, or our choice of algorithm, is good or not.

Mean absolute error(MAE)

The mean absolute error (MAE) is a quantity used to measure how close forecasts or
predictions are to the eventual outcomes. It is calculated as the average of the absolute difference between the actual values and the predicted values.

Root mean squared error (RMSE)

The root mean squared error (RMSE) is calculated by taking the square root of the
average of the square of all the errors (which is the difference between the predicted and actual value).

Relative absolute error (RAE)

The relative absolute error (RAE) is the total absolute error relative to the total absolute deviation of the actual values from their mean: Σ|actual − predicted| / Σ|actual − mean(actual)|.

Relative squared error (RSE)

The relative squared error (RSE) is the total squared difference between the predicted and actual values relative to the total squared deviation of the actual values from their mean: Σ(predicted − actual)² / Σ(actual − mean(actual))².

Coefficient of determination

The coefficient of determination R2 summarizes the explanatory power of the regression model. If the regression model is perfect (though that is rarely practical), R2 is 1. The coefficient of determination can also be interpreted as the percentage of the variation in the data that is explained by the model.

For example, if R2 = 0.7950, then 79.5 percent of the total variation in y can be explained by the linear relationship between the features and y, the response variable (or target variable).
So, for your model, the closer R2 is to 1, the better it is. For all the other error statistics, the lower the value, the better.
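To make these definitions concrete, here is a small Python sketch that computes all five statistics on a made-up set of actual and predicted values (pure standard library; the numbers are invented for illustration):

```python
import math

# Invented sample data for illustration.
actual    = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]

n = len(actual)
mean_actual = sum(actual) / n
errors = [a - p for a, p in zip(actual, predicted)]

mae  = sum(abs(e) for e in errors) / n                  # mean absolute error
rmse = math.sqrt(sum(e * e for e in errors) / n)        # root mean squared error
rae  = sum(abs(e) for e in errors) / sum(abs(a - mean_actual) for a in actual)
rse  = sum(e * e for e in errors) / sum((a - mean_actual) ** 2 for a in actual)
r2   = 1 - rse  # one common definition of R², equivalent to 1 − RSE

print(round(mae, 4), round(rmse, 4), round(rae, 4), round(r2, 4))
```

With these numbers MAE is 0.75 and R2 is about 0.72, so the closer-to-1 and lower-is-better rules above can be checked directly.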

Which data formats and connections does Azure ML support?

CSV

Comma-separated values, with the first row as column names.

nh.CSV

Comma-separated values, without the first row as column names.

TSV

TSV is a file extension for a tab-delimited file used with spreadsheet software. TSV stands for Tab Separated Values. TSV files are used for raw data and can be imported into and exported from spreadsheet software.

nh.Tsv

Tab-delimited file without the first row as column names.

ARFF

An ARFF (Attribute-Relation File Format) file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank (the new additions are string attributes, date attributes, and sparse instances).

Plaintext (.txt) – text file.

ZipFile (.zip) – zip archive.

SVMLight – support vector machine file format.

Rdata – R workspace file.

Reader – Hive query, OData data feed provider, Azure SQL Database, Azure Blob storage, HTTP URL.
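To make the ARFF description above concrete, here is a tiny ARFF document assembled as a string in Python (the relation and attribute names are invented for illustration):

```python
# A minimal ARFF file built line by line: the header section uses
# @relation and @attribute declarations, then @data introduces the
# comma-separated instances.
lines = [
    "@relation weather",
    "@attribute outlook {sunny, overcast, rainy}",
    "@attribute temperature numeric",
    "@data",
    "sunny,85",
    "overcast,64",
    "rainy,70",
]
arff = "\n".join(lines)
print(arff)
```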

Thanks! Hopefully this post is useful as you start learning Azure ML.

Here is a useful link to download the cheat sheet, which is very helpful for understanding which algorithms Azure ML supports.

https://azure.microsoft.com/en-in/documentation/articles/machine-learning-algorithm-cheat-sheet/

 

Learn MongoDB querying by comparing it with SQL queries

SQL: Select * from Data
MongoDB: db.Data.find()
(Show all records with all columns.)

SQL: Select year,importer,exporter,quantity from Data
MongoDB: db.Data.find({},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(In MongoDB field names are case sensitive – you cannot write Year in place of year.)

SQL: Select year,importer,exporter,quantity from Data where quantity > 50
MongoDB: db.Data.find({quantity:{$gt:50}},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(where – quantity greater than 50)

SQL: Select year,importer,exporter,quantity from Data where quantity >= 50
MongoDB: db.Data.find({quantity:{$gte:50}},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(where – quantity >= 50)

SQL: Select year,importer,exporter,quantity from Data where quantity = 200
MongoDB: db.Data.find({quantity:200},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(where – quantity = 200)

SQL: Select year,importer,exporter,quantity from Data where quantity < 50
MongoDB: db.Data.find({quantity:{$lt:50}},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(where – quantity less than 50)

SQL: Select year,importer,exporter,quantity from Data where quantity <= 50
MongoDB: db.Data.find({quantity:{$lte:50}},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(where – quantity <= 50)

SQL: Select year,importer,exporter,quantity from Data where quantity is null
MongoDB: db.Data.find({quantity:null},{year:true,importer:true,exporter:true,quantity:true,_id:false})

(Null check and null comparison.)

SQL: Select year,importer,exporter,quantity from Data where quantity > 100 order by importer
MongoDB: db.Data.find({quantity:{$gt:100}},{year:true,importer:true,exporter:true,quantity:true,_id:false}).sort({importer:1})

(Order by ascending – sort({column:1}))

SQL: Select year,importer,exporter,quantity from Data where quantity > 100 order by importer desc
MongoDB: db.Data.find({quantity:{$gt:100}},{year:true,importer:true,exporter:true,quantity:true,_id:false}).sort({importer:-1})

(Order by descending – sort({column:-1}))

SQL: Select importer, count(*) as Total, sum(quantity) as Totalquantity from Data group by importer
MongoDB: db.Data.aggregate({ $group : { _id:"$importer", Total : { $sum : 1 }, Totalquantity : { $sum : "$quantity" } } })

(Group by query)
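The filter/projection and $group queries above can be mimicked in plain Python over an in-memory list of documents, which is handy for checking what result to expect (the sample documents below are invented):

```python
from collections import defaultdict

# Invented sample documents standing in for the Data collection.
data = [
    {"year": 2014, "importer": "IN", "exporter": "US", "quantity": 120},
    {"year": 2014, "importer": "IN", "exporter": "CN", "quantity": 40},
    {"year": 2015, "importer": "UK", "exporter": "US", "quantity": 75},
]

# Like find({quantity:{$gt:50}}, {year,importer,exporter,quantity,_id:false})
over_50 = [
    {k: d[k] for k in ("year", "importer", "exporter", "quantity")}
    for d in data if d["quantity"] > 50
]

# Like aggregate({$group:{_id:"$importer", Total:{$sum:1}, Totalquantity:{$sum:"$quantity"}}})
totals = defaultdict(lambda: {"Total": 0, "Totalquantity": 0})
for d in data:
    g = totals[d["importer"]]
    g["Total"] += 1
    g["Totalquantity"] += d["quantity"]

print(len(over_50), dict(totals))
```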

How to Do Replication in MongoDB?

Replication: A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments.

We will set up replication on a single server. For that, we need to create three different folders for the MongoDB dbpath setting and start three mongod processes on three different ports in three separate command prompts.

Below is the list of commands that need to be run.

mongod --port 27017 --dbpath D:\MongoDatabase\data --replSet rep

mongod --port 27018 --dbpath D:\MongoDatabase\data1 --replSet rep

mongod --port 27019 --dbpath D:\MongoDatabase\data2 --replSet rep

Then connect to the mongo shell with the command below.

mongo --port 27017

Use below command to initiate the replication.

rs.initiate()

Then we need to add the other instances to the replica set; for that, run the commands below one by one.

rs.add("<hostname>:<port>")

rs.add("Ahmcpu1392:27018")

rs.add("Ahmcpu1392:27019")

Then you will see the change in the mongo prompt:

rep:PRIMARY>

That means you are connected to the primary node of the replica set, which confirms that replication has been set up successfully. Then use the command below to check the status:

rs.status()

In order to enable the secondaries to serve reads, we need to run the following command after connecting as a client to each replica-set secondary.

mongo --port 27018

> rs.slaveOk()

mongo --port 27019

> rs.slaveOk()

Useful links to learn more about replication in MongoDB:

http://docs.mongodb.org/v2.2/reference/replica-commands/

http://docs.mongodb.org/manual/core/replica-set-members/

Ranking Functions in Microsoft SQL Server

Today, we will see examples of the different ranking functions supported in SQL Server.

Use the script below to create a table.

CREATE TABLE dbo.CustomerData
(OrderId INT IDENTITY(1,1) NOT NULL,CustomerId INT,TotalAmt decimal (10,2))

Populate the table with data.

INSERT dbo.CustomerData (CustomerId, TotalAmt)
SELECT 1, 100
UNION ALL
SELECT 1, 2000
UNION ALL
SELECT 1, 560
UNION ALL
SELECT 1, 1000
UNION ALL
SELECT 1, 1000
UNION ALL
SELECT 2, 1000
UNION ALL
SELECT 2, 2500
UNION ALL
SELECT 2, 500
UNION ALL
SELECT 2, 1500
UNION ALL
SELECT 3, 1500

Below is the list of ranking functions.

1. ROW_NUMBER – generates a row number based on the ORDER BY and PARTITION BY provided in the OVER clause.

SELECT *,ROW_NUMBER() OVER (ORDER BY TotalAmt DESC) AS ROWNUMBER
FROM dbo.CustomerData
ORDER BY TotalAmt DESC

(Screenshot: Row_Number output)

SELECT *,ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY TotalAmt DESC) AS ROWNUMBERPartitionby
FROM dbo.CustomerData
ORDER BY TotalAmt DESC

(Screenshot: Row_numberPartitionby output)

2. RANK – generates a rank based on the ORDER BY and PARTITION BY provided in the OVER clause.

PARTITION BY – SQL logically partitions the data on whatever column is written in PARTITION BY, and the RANK function is then applied within each partition. RANK gives the same rank to rows with the same value in the ORDER BY column of the OVER clause, and the next distinct value continues from the overall row count, so rank numbers may be skipped.

SELECT *,RANK() OVER (ORDER BY TotalAmt DESC) AS RANKF,
RANK() OVER (PARTITION BY CustomerId ORDER BY TotalAmt DESC) AS RANKFPartitionby
FROM dbo.CustomerData
ORDER BY TotalAmt DESC

(Screenshot: rank output)

3. DENSE_RANK – generates a rank based on the ORDER BY and PARTITION BY provided in the OVER clause.

PARTITION BY – SQL logically partitions the data on whatever column is written in PARTITION BY, and the DENSE_RANK function is then applied within each partition. DENSE_RANK gives the same rank to rows with the same value in the ORDER BY column, and the next number in the sequence follows the last number used (no gaps), as you can see in the example below.

SELECT *,DENSE_RANK() OVER (ORDER BY TotalAmt DESC) AS DENSERANK,
DENSE_RANK() OVER (PARTITION BY CustomerId ORDER BY TotalAmt DESC) AS DENSERANKPartitionby
FROM dbo.CustomerData
ORDER BY TotalAmt DESC

(Screenshot: Dense-rank output)

Below is a query with which you can compare all the ranking functions.

SELECT *,
ROW_NUMBER() OVER (ORDER BY TotalAmt DESC) AS ROWNUMBER,
ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY TotalAmt DESC) AS ROWNUMBERPartitionby,
RANK() OVER (ORDER BY TotalAmt DESC) AS RANKF,
RANK() OVER (PARTITION BY CustomerId ORDER BY TotalAmt DESC) AS RANKFPartitionby,
DENSE_RANK() OVER (ORDER BY TotalAmt DESC) AS DENSERANK,
DENSE_RANK() OVER (PARTITION BY CustomerId ORDER BY TotalAmt DESC) AS DENSERANKPartitionby
FROM dbo.CustomerData
ORDER BY TotalAmt DESC

For the PARTITION BY comparison, look at CustomerId and, based on that, examine each ranking function's output.
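The difference between the three functions can also be sketched outside SQL. Here is a rough Python rendering of the non-partitioned variants, using the TotalAmt values inserted earlier (CustomerId is ignored here for brevity):

```python
# TotalAmt values from the INSERT statements above, then the three
# rankings computed over the values sorted descending.
amounts = [100, 2000, 560, 1000, 1000, 1000, 2500, 500, 1500, 1500]
ordered = sorted(amounts, reverse=True)

row_number, rank, dense_rank = [], [], []
for i, amt in enumerate(ordered):
    row_number.append(i + 1)                # ROW_NUMBER: always increments
    if i > 0 and amt == ordered[i - 1]:
        rank.append(rank[-1])               # ties keep the same rank
        dense_rank.append(dense_rank[-1])
    else:
        rank.append(i + 1)                  # RANK: skips numbers after ties
        dense_rank.append(dense_rank[-1] + 1 if dense_rank else 1)  # no gaps

print(ordered)
print(rank)
print(dense_rank)
```

Note how RANK jumps from 3 to 5 after the tie at 1500, while DENSE_RANK continues with 4.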
