Topic: Cluster Analysis
Audience: MCA + IT III Year JPNCE Mahboobnagar 18-04-2008
What is Cluster Analysis?
Cluster : Collection of data objects
(Intraclass similarity) - Objects are similar to objects in the same cluster
(Interclass dissimilarity) - Objects are dissimilar to objects in other clusters
Cluster analysis
Statistical method for grouping a set of data objects into clusters
A good clustering method produces high quality clusters with high intraclass similarity and low interclass similarity
Clustering is unsupervised classification
Examples of Clustering Applications
Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
City-planning: Identifying groups of houses according to their house type, value, and geographical location
Earthquake studies: Observed earthquake epicenters should be clustered along continent faults
Data Representation
Data matrix (two mode)
N objects with p attributes
Dissimilarity matrix (one mode)
d(i,j): dissimilarity between i and j
Types of Data in Cluster Analysis
Interval-Scaled Variables
Binary Variables
Nominal, Ordinal, and Ratio-Scaled Variables
Variables of Mixed Types
Interval-Scaled Variables
Continuous measurements on a roughly linear scale
E.g. weight, height, temperature, etc.
Using Interval-Scaled Values
Step 1: Standardize the data
To ensure they all have equal weight
To match up different scales into a uniform, single scale
Not always needed! Sometimes we require unequal weights for an attribute
Step 2: Compute dissimilarity between records
Use Euclidean, Manhattan or Minkowski distance
Data Types and Distance Metrics
Distances are normally used to measure the similarity or dissimilarity between two data objects
Minkowski distance:
d(i,j) = (|xi1 - xj1|^q + |xi2 - xj2|^q + … + |xip - xjp|^q)^(1/q)
where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and q is a positive integer
Data Types and Distance Metrics (Cont’d)
If q = 1, d is Manhattan distance: d(i,j) = |xi1 - xj1| + |xi2 - xj2| + … + |xip - xjp|
If q = 2, d is Euclidean distance: d(i,j) = sqrt(|xi1 - xj1|^2 + |xi2 - xj2|^2 + … + |xip - xjp|^2)
Data Types and Distance Metrics (Cont’d)
Properties
d(i,j) ≥ 0 (non-negativity)
d(i,i) = 0
d(i,j) = d(j,i) (symmetry)
d(i,j) ≤ d(i,k) + d(k,j) (triangle inequality)
Can also use weighted distance, or other dissimilarity measures.
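As a quick illustration, here is a minimal C sketch of the Minkowski distance between two p-dimensional records; q = 1 gives Manhattan distance and q = 2 gives Euclidean distance. The function and variable names are illustrative, not from any library.
#include <math.h>
#include <stdio.h>

/* Minkowski distance between two p-dimensional points.
   q = 1 gives Manhattan distance, q = 2 gives Euclidean distance. */
double minkowski(const double *x, const double *y, int p, double q)
{
    double sum = 0.0;
    int f;
    for (f = 0; f < p; f++)
        sum += pow(fabs(x[f] - y[f]), q);
    return pow(sum, 1.0 / q);
}

int main(void)
{
    double a[2] = {1.0, 2.0}, b[2] = {4.0, 6.0};
    printf("Manhattan: %.1f\n", minkowski(a, b, 2, 1.0));  /* 7.0 */
    printf("Euclidean: %.1f\n", minkowski(a, b, 2, 2.0));  /* 5.0 */
    return 0;
}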
Binary Attributes
A contingency table for binary data
Simple matching coefficient (if the binary attribute is symmetric): d(i,j) = (r + s) / (q + r + s + t)
Jaccard coefficient (if the binary attribute is asymmetric): d(i,j) = (r + s) / (q + r + s)
where, for records i and j, q counts attributes that are 1 in both, r those that are 1 in i but 0 in j, s those that are 0 in i but 1 in j, and t those that are 0 in both
Dissimilarity between Binary Attributes: Example
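As a worked illustration, here is a small C sketch computing both coefficients from the contingency counts q, r, s, t; the function name binary_dissim is illustrative, not from any library.
/* Dissimilarity between two binary records a and b of length p.
   q: 1/1 matches, r: 1 in a only, s: 1 in b only, t: 0/0 matches. */
double binary_dissim(const int *a, const int *b, int p, int symmetric)
{
    int q = 0, r = 0, s = 0, t = 0, f;
    for (f = 0; f < p; f++) {
        if (a[f] && b[f]) q++;
        else if (a[f])    r++;
        else if (b[f])    s++;
        else              t++;
    }
    if (symmetric)                            /* simple matching coefficient */
        return (double)(r + s) / (q + r + s + t);
    return (double)(r + s) / (q + r + s);     /* Jaccard; assumes q+r+s > 0 */
}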
Nominal Attributes
A generalization of the binary attribute in that it can take more than 2 states, e.g., red, yellow, blue, green
Method 1: Simple matching
d(i,j) = (p - m) / p, where m: # of attributes that are same for both records, p: total # of attributes
Method 2: rewrite the database and create a new asymmetric binary attribute for each of the M states
For an object with color yellow, the yellow attribute is set to 1, while the remaining attributes are set to 0.
Ordinal Attributes
An ordinal attribute can be discrete or continuous
Order is important (e.g., rank)
Can be treated like interval-scaled
replace xif by its rank rif ∈ {1, …, Mf}
map the range of each attribute onto [0, 1] by replacing the i-th object in the f-th attribute by zif = (rif - 1) / (Mf - 1)
compute the dissimilarity using methods for interval-scaled attributes
Ratio-Scaled Attributes
Ratio-scaled attribute: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^(Bt) or Ae^(-Bt)
Methods:
treat them like interval-scaled attributes — not a good choice because scales may be distorted
apply logarithmic transformation
yif = log(xif)
treat them as continuous ordinal data and treat their rank as interval-scaled.
Attributes of Mixed Types
A database may contain all six types of attributes
symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio.
Use a weighted formula to combine their effects: d(i,j) = Σf δij(f) dij(f) / Σf δij(f), where δij(f) = 1 if both measurements xif and xjf are present (and, for an asymmetric binary attribute, not both 0), and 0 otherwise
f is binary or nominal:
dij(f) = 0 if xif = xjf, and dij(f) = 1 otherwise
f is interval-based: use the normalized distance
f is ordinal or ratio-scaled
compute ranks rif and zif = (rif - 1) / (Mf - 1)
and treat zif as interval-scaled
Introduction
Clustering is an unsupervised method of data analysis
Data instances grouped according to some notion of similarity
Access only to the set of features describing each object
No information as to where each instance should be placed within the partition
However, there might be background knowledge about the domain or data set that could be useful to the algorithm
In this paper the authors try to integrate this background knowledge into clustering algorithms.
K-means Clustering
Used to partition a data set into k groups
Group instances based on attributes into k groups
High intra-cluster similarity; Low inter-cluster similarity
Cluster similarity is measured with respect to the mean value of the objects in the cluster.
How does K-means work ?
First, select K random instances from the data – initial cluster centers
Second, each instance is assigned to its closest (most similar) cluster center
Third, each cluster center is updated to the mean of its constituent instances
Repeat steps two and three until there is no further change in the assignment of instances to clusters (see the sketch below)
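To make steps two and three concrete, here is a minimal C sketch of a single assignment-and-update pass, assuming one-dimensional data and squared Euclidean distance for brevity; kmeans_pass and all other names are illustrative, not from any library.
/* One assignment-and-update pass of k-means on n one-dimensional points.
   Real data would be p-dimensional and this would loop until labels stop changing.
   Assumes k <= 16 for the fixed-size scratch arrays. */
void kmeans_pass(const double *x, int n, double *center, int *label, int k)
{
    int i, c;
    double best, d;
    double sum[16];
    int cnt[16];
    /* Step 2: assign each point to its nearest center */
    for (i = 0; i < n; i++) {
        best = -1.0;
        for (c = 0; c < k; c++) {
            d = (x[i] - center[c]) * (x[i] - center[c]);
            if (best < 0.0 || d < best) { best = d; label[i] = c; }
        }
    }
    /* Step 3: move each center to the mean of its members */
    for (c = 0; c < k; c++) { sum[c] = 0.0; cnt[c] = 0; }
    for (i = 0; i < n; i++) { sum[label[i]] += x[i]; cnt[label[i]]++; }
    for (c = 0; c < k; c++)
        if (cnt[c] > 0)
            center[c] = sum[c] / cnt[c];
}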
Constrained K-means Clustering
Two pair-wise constraints
Must-link: constraints which specify that two instances have to be in the same cluster
Cannot-link: constraints which specify that two instances must not be placed in the same cluster
When using a set of constraints we have to take the transitive closure
Constraints may be derived from
Partially labeled data
Background knowledge about the domain or data set
Constrained Algorithm
First, select K random instances from the data – initial cluster centers
Second, each instance is assigned to its closest (most similar) cluster center such that VIOLATE-CONSTRAINT(i, K, M, C) is false. If no such cluster exists, fail
Third, each cluster center is updated to the mean of its constituent instances
Repeat steps two and three till there is no further change in assignment of instances to clusters
VIOLATE-CONSTRAINT
instance i, cluster K,
must-link constraints M, cannot-link constraints C
For each (i, i′) in M: if i′ is not in K, return true.
For each (i, i′) in C: if i′ is in K, return true.
Otherwise return false
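A minimal C sketch of this check, assuming the constraints are stored as pairs of instance indices and label[x] holds the current cluster of instance x (or -1 if unassigned); all names are illustrative.
/* Returns 1 (true) if assigning instance i to cluster k would violate a constraint.
   ml/cl: arrays of instance-index pairs; label[x]: current cluster of x, or -1. */
int violates(int i, int k, const int (*ml)[2], int nml,
             const int (*cl)[2], int ncl, const int *label)
{
    int c, other;
    for (c = 0; c < nml; c++) {   /* must-link: the partner must end up in k too */
        other = (ml[c][0] == i) ? ml[c][1] : (ml[c][1] == i) ? ml[c][0] : -1;
        if (other >= 0 && label[other] != -1 && label[other] != k)
            return 1;
    }
    for (c = 0; c < ncl; c++) {   /* cannot-link: the partner must not be in k */
        other = (cl[c][0] == i) ? cl[c][1] : (cl[c][1] == i) ? cl[c][0] : -1;
        if (other >= 0 && label[other] == k)
            return 1;
    }
    return 0;
}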
Experimental Results on
GPS Lane Finding
Large database of digital road maps available
These maps contain only coarse information about the location of the road
By refining maps down to the lane level we can enable a host of more sophisticated applications such as lane departure detection
Approach
Based on the observation that drivers tend to drive within lane boundaries
Lanes should correspond to “densely traveled” regions in contrast to the lane boundaries
Possible to collect data about the location of cars and then cluster that data to automatically determine where the individual lanes are located
GPS Lane Finding (cont’d)
Collect data about the location of cars as they drive along a given road
Collect data once per second from several drivers using GPS receivers affixed to top of their vehicles
Each data instance has two features:
1. Distance along the road segment
2. Perpendicular offset from the road centerline
For evaluation purposes drivers were asked to indicate which lane they occupied and any lane changes
GPS Lane Finding (cont’d)
For the problem of automatic lane detection,
Two domain-specific heuristics for generating constraints
Trace contiguity means that, in the absence of lane changes, all of the points generated from the same vehicle in a single pass over a road segment should end up in the same lane.
Maximum separation refers to a limit on how far apart two points can be (perpendicular to the centerline) while still being in the same lane. If two points are separated by at least four meters, then we generate a constraint that will prevent those two points from being placed in the same cluster.
To better analyze performance in the domain, the authors modified the cluster center representation
GPS Lane Finding (cont’d)
Conclusion
Measurable improvement in accuracy
The use of constraints while clustering means that, unlike the regular k-means algorithm, the assignment of instances to clusters can be order-sensitive.
If a poor decision is made early on, the algorithm may later encounter an instance i that has no possible valid cluster
Ideally, the algorithm would be able to backtrack, rearranging some of the instances so that i could then be validly assigned to a cluster.
Could be extended to hierarchical algorithms
Friday, April 18, 2008
sockets!!!
topic: sockets
audience: mca students OSMANIA PG CAMPUS MAHBOOBNAGAR 18-04-2008 time : 2:30 to 4:30 pm
Started the discussion with addressing issues: physical addresses (Ethernet addresses), IP addresses (classes, header format, functionality), and port addresses (well-known, ephemeral, registered). Defined a socket (IP + port), covered the client-server paradigm and connection-oriented vs. connectionless services, then explained the connection-oriented concurrent server and client programs. Also discussed the socket system calls (socket, bind, listen, accept, connect, socketpair), the byte-ordering routines (htonl, htons, ntohl, ntohs), the bzero function, and the Unix network I/O APIs.
Thursday, March 13, 2008
UNIX NETWORK PROGRAMS
/* Some of the files and definitions to be used in the socket programs */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/un.h>
#define SERV_UDP_PORT 6600
#define SERV_TCP_PORT 5000
#define SERV_HOST_ADDRESS "172.16.0.1"
#define MAXLINE 80
#define UNIXSTR_PATH "unix_sock"
#define UNIXDG_PATH "./unix_srv_dg"
#define UNIXDG_TMP "./dg.xxxxxx"
#define MAXMESG 2048
#define MAX 512
char *pname;
/* readn: read exactly nbytes from fd (handles short reads) */
int readn(register int fd,register char *ptr,register int nbytes)
{
int nleft,nread;
nleft=nbytes;
while(nleft>0)
{
nread=read(fd,ptr,nleft);
if(nread<0)
return (nread);
else
if(nread==0)
break;
nleft-=nread;
ptr+=nread;
}
return (nbytes-nleft);
}
/* writen: write exactly nbytes to fd (handles short writes) */
int writen(register int fd,register char *ptr,register int nbytes)
{
int nleft,nwritten;
nleft=nbytes;
while(nleft>0)
{
nwritten=write(fd,ptr,nleft);
if(nwritten<0)
return (nwritten);
nleft-=nwritten;
ptr+=nwritten;
}
return (nbytes-nleft);
}
/* readline: read one line (up to maxlen-1 chars) from fd, NUL-terminated */
int readline(register int fd,register char *ptr, register int maxlen)
{
int n,rc;char c;
for(n=1;n<maxlen;n++)
{
if((rc=read(fd,&c,1))==1)
{*ptr++=c;if(c=='\n') break;}
else if(rc==0) {
if(n==1)return 0; /* EOF, nothing read */
else break; /* EOF, partial line */
}
else
return -1; /* read error */
}
*ptr='\0';return (n);}
/* str_cli: read lines from fp, send each to the server, print the echoed reply */
str_cli(register FILE *fp,register int sockfd)
{
int n;
char sendline[MAXLINE],recvline[MAXLINE+1];
printf("\nEnter a message: ");
fflush(stdout);
while(1)
{
fgets(sendline,MAXLINE,fp);
if(strlen(sendline)==1)
break;
n=strlen(sendline);
printf("\nClient is sending to server: %s",sendline);
if(writen(sockfd,sendline,n)!=n)
perror("str_cli:writen error from socket");
n=readline(sockfd,recvline,MAXLINE);
printf("Client received from server: %s",recvline);
printf("\nEnter a message to continue or press ENTER to terminate: ");
fflush(stdout);
if(n<0)
perror("str_cli:readline error");
// recvline[n]='\0';
}
if(ferror(fp))
perror("str_cli:error reading file");
}
/* str_echo: read lines from the socket and echo them back to the client */
str_echo(int sockfd)
{
int n;
char line[MAXLINE];
for(; ;)
{
n=readline(sockfd,line,MAXLINE);
printf("\nServer received from client: %s",line);
if(n==0)
return;
else
if(n<0)
perror("str_echo: readline error");
printf("Server echoing back to client:%s",line);
fflush(stdout);
if(writen(sockfd,line,n)!=n)
perror("str_echo:writen error");
}
}
/* dg_cli: UDP client loop: send each line to the server and print the reply */
dg_cli(FILE *fp,int sockfd,struct sockaddr *pserv_addr,int servlen)
{
int n;
char sendline[MAXLINE],recvline[MAXLINE+1];
printf("\nEnter a message to send to server: ");
while(fgets(sendline,MAXLINE,fp))
{
if((n=strlen(sendline))==1) exit(0);
printf("\nUDP client is sending to server: %s",sendline);
if(sendto(sockfd,sendline,n,0,pserv_addr,servlen)!=n)
perror("dg_cli:sendto error");
n=recvfrom(sockfd,recvline,MAXLINE,0,(struct sockaddr *)0,(int *)0);
if(n<0)
perror("dg_cli:recvfrom error");
recvline[n]='\0';
printf("UDP client received from: %s",recvline);
fflush(stdout);
printf("\nEnter thenext message to send or ENTER to terminate:");
}
if(ferror(fp))
perror("dg_cli:error reading file");
}
/* dg_echo: UDP server loop: receive a datagram and echo it back */
dg_echo(int sockfd,struct sockaddr * pcli_addr,int maxclilen)
{
int n,clilen;
char mesg[MAXMESG];
for( ; ;)
{
clilen=maxclilen;
n=recvfrom(sockfd,mesg,MAXMESG,0,pcli_addr,&clilen);
if(n<0)
perror("dg_echo:recvfrom error");
mesg[n]='\0';
printf("\nUDP server received from client and echoing: %s",mesg);
fflush(stdout);
if(sendto(sockfd,mesg,n,0,pcli_addr,clilen)!=n)
perror("dg_echo:sendto error");
}
}
UNIX NETWORK PROGRAMS
/* CONNECTION ORIENTED(UNIX) CONCURRENT CLIENT */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#define max 80
main(int argc,char *argv[])
{
int sfd,s,i,n,sl;
char buff1[max],buff2[max];
struct sockaddr_un sa;
//pname =argv[0];
sfd=socket(AF_UNIX,SOCK_STREAM,0);
if(sfd<0)
{
printf("CLIENT:SOCKET ERROR");
exit(0);
}
//perror("server:cannot open stream socket");
bzero((char*)&sa,sizeof(sa));
sa.sun_family=AF_UNIX;
strcpy(sa.sun_path,argv[1]);
sl=strlen(sa.sun_path)+sizeof(sa.sun_family);
s=connect(sfd,(struct sockaddr *)&sa,sl);
if(s<0)
{
printf("CLIENT:CANNOT CONNECT");
exit(0);
}
for(i=0;i<10;i++)
{
write(1,"\nENTER MESSAGE:",15);
n=read(0,buff1,20);
write(1,"\nCLIENT HAS SENT:",17);
write(1,buff1,n);
send(sfd,buff1,n,0);
n=recv(sfd,buff2,20,0);
write(1,"\nCLIENT HAS RECEIVED FROM SERVER:",33);
write(1,buff2,n);
}
close(sfd);
exit(0);
}
/* CONNECTION ORIENTED(UNIX) CONCURRENT SERVER*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#define max 80
main(int argc,char *argv[])
{
int sfd,nsfd,pid,n,i,sl,cl;
char buff[max];
struct sockaddr_un ca,sa;
sfd=socket(AF_UNIX,SOCK_STREAM,0);
if(sfd<0)
{
printf("SERVER:SOCKET ERROR");
exit(0);
}
bzero((char *)&sa,sizeof(sa));
sa.sun_family=AF_UNIX;
strcpy(sa.sun_path,argv[1]);
sl=strlen(sa.sun_path)+sizeof(sa.sun_family);
if(bind(sfd,(struct sockaddr *)&sa,sl)<0)
{
printf("SERVER:BIND FAILURE");
exit(0);
}
listen(sfd,5);
for(;;)
{
write(1,"\nSERVER:WAITING....",19);
fflush(stdout);
cl=sizeof(ca);
nsfd=accept(sfd,(struct sockaddr*)&ca,&cl);
if(nsfd<0)
{
printf("SERVER:ACCEPT ERROR");
exit(0);
}
pid=fork();
if(pid==0) /* child: serve this client */
{
close(sfd);
for(i=0;i<10;i++)
{
n=recv(nsfd,buff,max,0);
buff[n]='\0';
write(1,"\nMESSAGE RECEIVED FROM CLIENT:",30);
write(1,buff,n);
send(nsfd,buff,n,0);
}
close(nsfd);
exit(0);
}
close(nsfd); /* parent: the child owns the connection; keep accepting */
}
}
/* Concurrent TCP client program */
#include "files.h"
main(int argc,char *argv[])
{
int sockfd;
struct sockaddr_in serv_addr;
pname=argv[0];
bzero((char *)&serv_addr,sizeof(serv_addr));
serv_addr.sin_family=AF_INET;
serv_addr.sin_addr.s_addr=inet_addr(SERV_HOST_ADDRESS);
serv_addr.sin_port=htons(SERV_TCP_PORT);
if((sockfd=socket(AF_INET,SOCK_STREAM,0))<0)
perror("client:socket error");
if(connect(sockfd,(struct sockaddr*)&serv_addr,sizeof(serv_addr))<0)
{
perror("client:cant connect to error");
exit(1);
}
str_cli(stdin,sockfd);
close(sockfd);
exit(0);
}
/* Concurrent TCP server program */
#include "files.h"
main(int argc,char *argv[])
{
int sockfd,newsockfd,clilen,childpid;
struct sockaddr_in cli_addr,serv_addr;
pname=argv[0];
if((sockfd=socket(AF_INET,SOCK_STREAM,0))<0)
perror("server:cant open stream socket");
bzero((char *)&serv_addr,sizeof(serv_addr));
serv_addr.sin_family=AF_INET;
serv_addr.sin_addr.s_addr=htonl(INADDR_ANY);
serv_addr.sin_port=htons(SERV_TCP_PORT);
if(bind(sockfd,(struct sockaddr *) &serv_addr,sizeof(serv_addr))<0)
{
perror("server:cant bind local address");
exit(1);
}
listen(sockfd,5);
for( ; ;)
{
clilen=sizeof(cli_addr);
printf("\nServer is waiting for connection requests: ");
fflush(stdout);
newsockfd=accept(sockfd,(struct sockaddr *) &cli_addr,&clilen);
if(newsockfd<0)
{
perror("server:accept error");
exit(1);
}
printf("\nConnection established and communicating with client.....");
fflush(stdout);
if((childpid=fork())<0)
perror("server:fork error");
else if(childpid==0)
{
close(sockfd);
str_echo(newsockfd);
exit(0);
}
close(newsockfd);
}
}
/* Concurrent UDP client program */
#include "files.h"
main(int argc,char *argv[])
{
int sockfd;
struct sockaddr_in serv_addr,cli_addr;
pname=argv[0];
bzero((char*)&serv_addr,sizeof(serv_addr));
serv_addr.sin_family=AF_INET;
serv_addr.sin_addr.s_addr=inet_addr(SERV_HOST_ADDRESS);
serv_addr.sin_port=htons(SERV_UDP_PORT);
if((sockfd=socket(AF_INET,SOCK_DGRAM,0))<0)
perror("client:cant open datagram socket");
bzero((char *)&cli_addr,sizeof(cli_addr));
cli_addr.sin_family=AF_INET;
cli_addr.sin_addr.s_addr=htonl(INADDR_ANY);
cli_addr.sin_port=htons(0);
if(bind(sockfd,(struct sockaddr*)&cli_addr,sizeof(cli_addr))<0)
{
perror("client:cant bind local address");
exit(1);
}
dg_cli(stdin,sockfd,(struct sockaddr*)&serv_addr,sizeof(serv_addr));
close(sockfd);
exit(0);
}
/* Concurrent UDP Server program */
#include "files.h"
main(int argc,char *argv[])
{
int sockfd;
struct sockaddr_in serv_addr,cli_addr;
pname=argv[0];
if((sockfd=socket(AF_INET,SOCK_DGRAM,0))<0)
{
perror("server:cant open datargram socket");
exit(1);
}
bzero((char *)&serv_addr,sizeof(serv_addr));
serv_addr.sin_family=AF_INET;
serv_addr.sin_addr.s_addr=htonl(INADDR_ANY);
serv_addr.sin_port=htons(SERV_UDP_PORT);
if(bind(sockfd,(struct sockaddr*)&serv_addr,sizeof(serv_addr))<0)
{
perror("server:cant bind local address");
exit(1);
}
printf("\nUDP server is waiting........");
fflush(stdout);
dg_echo(sockfd,(struct sockaddr *)&cli_addr,sizeof(cli_addr));
}
UNIX NETWORK PROGRAMS
MEANT FOR MCA II YEAR TELANGANA UNIVERSITY NIZAMABAD
//iterative unix client
#include "files.h"
int main(int argc,char *argv[])
{
int sockfd,servlen;
struct sockaddr_un serv_addr;
bzero((char *) &serv_addr,sizeof(serv_addr));
serv_addr.sun_family=AF_UNIX;
strcpy(serv_addr.sun_path,UNIXSTR_PATH);
servlen=strlen(serv_addr.sun_path) + sizeof(serv_addr.sun_family);
/* Open a socket */
if((sockfd=socket(AF_UNIX,SOCK_STREAM,0))<0)
perror("Client: Can't open socket");
/* Connect to the server */
if(connect(sockfd,(struct sockaddr *) &serv_addr,servlen)<0)
perror("Client:Can't connect to server");
str_cli(stdin,sockfd);
exit(0);
}
/* Example of Iterative server using UNIX domain stream protocol.*/
#include "files.h"
main(int argc,char *argv[])
{
int sockfd,newsockfd,clilen,childpid,servlen;
struct sockaddr_un cli_addr,serv_addr;
pname=argv[0];
if((sockfd=socket(AF_UNIX,SOCK_STREAM,0))<0)
perror("server: cant open stream socket");
bzero((char *)&serv_addr,sizeof(serv_addr));
serv_addr.sun_family=AF_UNIX;
strcpy(serv_addr.sun_path,UNIXSTR_PATH);
servlen=strlen(serv_addr.sun_path)+sizeof(serv_addr.sun_family);
if((bind(sockfd,(struct sockaddr*)&serv_addr,servlen))<0)
{
perror("server:cant bind local address");
exit(1);
}
listen(sockfd,5);
for( ; ;)
{
clilen=sizeof(cli_addr);
newsockfd=accept(sockfd,(struct sockaddr*)&cli_addr,&clilen);
if(newsockfd<0)
{
perror("server:accept error");
exit(1);
}
str_echo(newsockfd);
close(newsockfd);
}
printf("\nServer is exiting.....");
}
SOCKETS
MEANT FOR MCA II YEAR TELANGANA UNIVERSITY NIZAMABAD
WHAT IS A SOCKET
•Sockets are a way to speak to other programs using standard Unix file descriptors.
•When Unix programs do any sort of I/O, they do it by reading from or writing to a file descriptor.
•A file descriptor is simply an integer associated with an open file. But that file can be a network connection, a FIFO, a pipe, a terminal, a real on-the-disk file, or just about anything else. Everything in Unix is a file! So when you want to communicate with another program over the Internet, you do it through a file descriptor.
•To get a file descriptor for network communication, make a call to the socket() system routine. It returns a socket descriptor, and you communicate through it using the specialized send() and recv() socket calls.
•The normal read() and write() calls can also be used to communicate through the socket, but send() and recv() offer much greater control over data transmission.
INTERNET SOCKETS
•There are two types of Internet sockets: "Stream Sockets" and "Datagram Sockets", referred to as "SOCK_STREAM" and "SOCK_DGRAM", respectively.
•Datagram sockets are sometimes called "connectionless sockets".
STREAM SOCKETS
•Stream sockets are reliable two-way connected communication streams. If you output two items into the socket in the order "1, 2", they will arrive in the order "1, 2" at the opposite end. They will also be error-free.
•What uses stream sockets? The telnet application uses stream sockets: all the characters you type need to arrive in the same order you type them. Web browsers also use the HTTP protocol, which uses stream sockets to get pages.
•How do stream sockets achieve this high level of data transmission quality? They use a protocol called the "Transmission Control Protocol", known as "TCP". TCP makes sure that data arrives sequentially and error-free.
DATA GRAM SOCKETS
•What about Datagram sockets? Why are they called connectionless? Why are they unreliable? Well, here are some facts: if you send a datagram, it may arrive. It may arrive out of order. If it arrives, the data within the packet will be error-free.
•Datagram sockets also use IP for routing, but they don't use TCP; they use the "User Datagram Protocol", or "UDP".
•Why are they connectionless? It's because you don't have to maintain an open connection as you do with stream sockets. You just build a packet, slap an IP header on it with destination information, and send it out. No connection needed.
•Datagram sockets are generally used either when a TCP stack is unavailable or when a few dropped packets here and there don't mean the end of the Universe.
•Sample applications of datagram sockets are: tftp, bootp, multiplayer games, streaming audio, video conferencing, etc.
•tftp and bootp are used to transfer binary files from one host to another.
•tftp and similar programs have their own protocol on top of UDP. For example, the tftp protocol says that for each packet that gets sent, the recipient has to send back a packet that says, "I got it!" (an "ACK" packet.) If the sender of the original packet gets no reply in, say, five seconds, he'll re-transmit the packet until he finally gets an ACK. This acknowledgment procedure is very important when implementing reliable SOCK_DGRAM applications.
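As a rough sketch of that send-and-wait-for-ACK idea (not the actual tftp protocol), the following C fragment uses a receive timeout on a UDP socket; send_with_ack and the five-second constant are our own illustrative choices.
#include <sys/socket.h>
#include <sys/time.h>

/* Send a datagram and retransmit until an ACK arrives.
   sockfd: an already-created UDP socket; serv/servlen: the server's address. */
int send_with_ack(int sockfd, const char *pkt, int len,
                  const struct sockaddr *serv, socklen_t servlen)
{
    char ack[16];
    struct timeval tv = {5, 0};   /* five-second receive timeout */
    setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    for (;;) {
        sendto(sockfd, pkt, len, 0, serv, servlen);
        if (recvfrom(sockfd, ack, sizeof(ack), 0, NULL, NULL) > 0)
            return 0;             /* got the ACK */
        /* timed out: loop and retransmit */
    }
}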
DATA ENCAPSULATION
•This section looks at how networks really work and shows some examples of how SOCK_DGRAM packets are built.
Data Encapsulation
•Data encapsulation works like this: a packet is born; the packet is wrapped ("encapsulated") in a header (and rarely a footer) by the first protocol (say, the TFTP protocol); then the whole thing (TFTP header included) is encapsulated again by the next protocol (say, UDP), then again by the next (IP), then again by the final protocol on the hardware (physical) layer (say, Ethernet).
•When another computer receives the packet, the hardware strips the Ethernet header, the kernel strips the IP and UDP headers, the TFTP program strips the TFTP header, and it finally has the data.
•An ISO-OSI layered model more consistent with Unix might be:
Application Layer (telnet, ftp, etc.)
Host-to-Host Transport Layer (TCP, UDP)
Internet Layer (IP and routing)
Network Access Layer (Ethernet, ATM, or whatever)
•These layers correspond to the encapsulation of the original data.
•To build a simple packet, all you have to do for stream sockets is send() the data out; for datagram sockets, encapsulate the packet in the method of your choosing and sendto() it out. The kernel builds the Transport Layer and Internet Layer for you, and the hardware does the Network Access Layer.
DATA TYPES USED BY THE SOCKETS INTERFACE
•A socket descriptor is just a regular int.
sockets
meant for MCA II YEAR STUDENTS TELANGANA UNIVERSITY NIZAMABAD
Socket API
Socket API originated with the 4.2 BSD system released in 1983
Sockets – A way to speak to other programs using UNIX file descriptors.
A file descriptor is an integer associated with an open file. This can be a network connection
Kinds of Sockets – DARPA Internet addresses (Internet Sockets), Unix Sockets, X.25 Sockets, etc.
Types of Internet Sockets
SOCK_STREAM uses TCP (Transmission Control Protocol): connection-oriented and reliable
SOCK_DGRAM uses UDP (User Datagram Protocol): connectionless and unreliable
Structs and Data Handling
A socket descriptor is of type int
Byte ordering
Most significant byte first – Network byte order (Big Endian)
Least significant byte first – Host byte order (Little Endian)
Socket Structures in Network byte order
struct sockaddr {
unsigned short sa_family; // address family, AF_xxx
char sa_data[14]; // 14 bytes of protocol address
};
struct sockaddr_in {
short int sin_family; // Address family
unsigned short int sin_port; // Port number
struct in_addr sin_addr; // Internet address
unsigned char sin_zero[8]; // Same size as struct sockaddr
};
Convert the Natives
struct in_addr {
unsigned long s_addr; // 32-bit long, or 4 bytes
};
If ina is of type struct sockaddr_in
ina.sin_addr.s_addr references the 4-byte IP address (in network byte order)
htons() – Host to Network Short
htonl() -- "Host to Network Long"
ntohs() -- "Network to Host Short"
ntohl() -- "Network to Host Long"
IP Addresses
socket01.utdallas.edu 129.110.43.11
sol2.utdallas.edu 129.110.34.2 etc
Other UTD machines for use socket02 – socket06 , sol1 , jupiter
Please do not use apache for Network programming
inet_addr() converts an IP address in numbers-and-dots notation into unsigned long
ina.sin_addr.s_addr = inet_addr("129.110.43.11"); // network byte order
Also can use inet_aton() -- “ascii to network”
int inet_aton(const char *cp,struct in_addr *inp);
inet_ntoa() returns a string from a struct of type in_addr
inet_ntoa(ina.sin_addr);
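Putting the conversion routines together, a tiny self-contained example (using one of the UTD addresses above):
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in ina;
    ina.sin_addr.s_addr = inet_addr("129.110.43.11"); /* network byte order */
    printf("%s\n", inet_ntoa(ina.sin_addr));          /* prints 129.110.43.11 */
    return 0;
}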
Useful UNIX Commands
netstat –i prints information about the interfaces
netstat –ni prints this information using numeric addresses
the loopback interface is called lo and the ethernet interface is called eth0 or le0 depending on the machine
netstat –r prints the routing table
netstat | grep PORT_NO shows the state of the client socket
ifconfig eth0 – given the interface name, ifconfig gives the details for that interface: Ethernet Addr, inet addr, Bcast, Mask, MTU
ping IP_addr -- Sends a packet to the host specified by IP_addr and prints out the roundtrip time ( Uses ICMP messages)
traceroute IP_addr -- Shows the path from this host to the destination printing out the roundtrip time for a packet to each hop in between
tcpdump communicates directly with the data-link layer and can be used to watch traffic, e.g., to check whether a UDP packet actually went out or failed to arrive
System Calls
socket() – returns a socket descriptor
int socket(int domain, int type, int protocol);
bind() – What port I am on / what port to attach to
int bind(int sockfd, struct sockaddr *my_addr, int addrlen);
connect() – Connect to a remote host
int connect(int sockfd, struct sockaddr *serv_addr, int addrlen);
listen() – Waiting for someone to connect to my port
int listen(int sockfd, int backlog);
accept() – Get a file descriptor for an incoming connection
int accept(int sockfd, void *addr, int *addrlen);
send() and recv() – Send and receive data over a connection
int send(int sockfd, const void *msg, int len, int flags);
int recv(int sockfd, void *buf, int len, unsigned int flags);
sendto() and recvfrom() – Send and receive data without connection
int sendto(int sockfd, const void *msg, int len, unsigned int flags, const struct sockaddr *to, int tolen);
int recvfrom(int sockfd, void *buf, int len, unsigned int flags, struct sockaddr *from, int *fromlen);
close() and shutdown() – Close a connection Two way / One way
getpeername() – Obtain the peer name given the socket file descriptor
gethostname() – My computer name
int sock_get_port(const struct sockaddr *sockaddr,socklen_t addrlen);
Useful to get the port number given a struct of type sockaddr
readn(), writen(), readline() – read/write a particular number of bytes
fork() – start a new process with a copy of the parent's address space
exec() – load a new program into the caller's address space
Issues in Client Programming
Identifying the Server.
Looking up an IP address.
Looking up a well known port name.
Specifying a local IP address.
UDP client design.
TCP client design.
Identifying the Server
Options:
hard-coded into the client program.
require that the user identify the server.
read from a configuration file.
use a separate protocol/network service to lookup the identity of the server.
Identifying a TCP/IP server.
Need an IP address, protocol and port.
We often use host names instead of IP addresses.
usually the protocol (UDP vs. TCP) is not specified by the user.
often the port is not specified by the user.
Services and Ports
Many services are available via “well known” addresses (names).
There is a mapping of service names to port numbers:
struct servent *getservbyname( char *service, char *protocol );
servent->s_port is the port number in network byte order.
Specifying a Local Address
When a client creates and binds a socket it must specify a local port and IP address.
Typically a client doesn’t care what port it is on:
haddr->port = htons(0);
Local IP address
A client can also ask the operating system to take care of specifying the local IP address:
haddr->sin_addr.s_addr=
htonl(INADDR_ANY);
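Put together, a sketch of the client-side bind, mirroring the UDP client code elsewhere on this page (sockfd is assumed to be an already-created socket):
struct sockaddr_in cli;
bzero((char *)&cli, sizeof(cli));
cli.sin_family = AF_INET;
cli.sin_port = htons(0);                 /* let the OS pick any free local port */
cli.sin_addr.s_addr = htonl(INADDR_ANY); /* let the OS pick the local IP */
bind(sockfd, (struct sockaddr *)&cli, sizeof(cli));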
UDP Client Design
Establish server address (IP and port).
Allocate a socket.
Specify that any valid local port and IP address can be used.
Communicate with server (send, recv)
Close the socket.
Connected mode UDP
A UDP client can call connect() to establish the address of the server.
The UDP client can then use read() and write() or send() and recv().
A UDP client using a connected mode socket can only talk to one server (using the connected-mode socket).
TCP Client Design
Establish server address (IP and port).
Allocate a socket.
Specify that any valid local port and IP address can be used.
Call connect()
Communicate with server (read,write).
Close the connection.
Closing a TCP socket
Many TCP based application protocols support multiple requests and/or variable length requests over a single TCP connection.
How does the server know when the client is done (and it is OK to close the socket)?
Partial Close
One solution is for the client to shut down only its writing end of the socket.
The shutdown() system call provides this function.
shutdown( int s, int direction);
direction can be 0 to close the reading end or 1 to close the writing end.
shutdown sends info to the other process!
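A small sketch of the partial close from the client side (sockfd, buf and n are assumed to be declared already):
shutdown(sockfd, SHUT_WR);  /* SHUT_WR (1): half-close the writing side; server sees EOF */
while ((n = read(sockfd, buf, sizeof(buf))) > 0)
    ;                       /* keep draining replies until the server closes its end */
close(sockfd);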
TCP sockets programming
Common problem areas:
null termination of strings.
reads don’t correspond to writes.
synchronization (including close()).
ambiguous protocol.
TCP Reads
Each call to read() on a TCP socket returns any available data (up to a maximum).
TCP buffers data at both ends of the connection.
You must be prepared to accept data 1 byte at a time from a TCP socket!
Server Design
Concurrent vs. Iterative
An iterative server handles a single client request at one time.
A concurrent server can handle multiple client requests at one time.
Concurrent vs. Iterative
Connectionless vs. Connection-Oriented
Statelessness
State: Information that a server maintains about the status of ongoing client interactions.
Connectionless servers that keep state information must be designed carefully!
The Dangers of Statefulness
Clients can go down at any time.
Client hosts can reboot many times.
The network can lose messages.
The network can duplicate messages.
Concurrent Server Design Alternatives
One child per client
Spawn one thread per client
Preforking multiple processes
Prethreaded Server
One child per client
Traditional Unix server:
TCP: after call to accept(), call fork().
UDP: after recvfrom(), call fork().
Each process needs only a few sockets.
Small requests can be serviced in a small amount of time.
Parent process needs to clean up after children!!!! (call wait() ).
One thread per client
Almost like using fork() - just call pthread_create instead.
Using threads makes it easier (less overhead) to have sibling processes share information.
Sharing information must be done carefully (use pthread_mutex)
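A sketch of the thread-per-client pattern, reusing the str_echo() routine from the programs above; the wrapper function and the heap-allocated fd copy are our own illustrative choices:
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

int str_echo(int sockfd);           /* from the listings above */

void *client_thread(void *arg)
{
    int connfd = *(int *)arg;
    free(arg);                      /* heap copy of the fd avoids a race with the next accept() */
    pthread_detach(pthread_self()); /* detached: no pthread_join() needed */
    str_echo(connfd);               /* serve this client */
    close(connfd);
    return NULL;
}

/* in the accept loop:
   pthread_t tid;
   int *fdp = malloc(sizeof(int));
   *fdp = accept(sockfd, NULL, NULL);
   pthread_create(&tid, NULL, client_thread, fdp);
*/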
Prefork()’d Server
Creating a new process for each client is expensive.
We can create a bunch of processes, each of which can take care of a client.
Each child process is an iterative server.
Prefork()’d TCP Server
Initial process creates socket and binds to well known address.
Process now calls fork() a bunch of times.
All children call accept().
The next incoming connection will be handed to one child.
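A minimal sketch of the preforking idea, again reusing str_echo(); NCHILDREN and prefork_loop are illustrative names, and real code would lock around accept() as noted below:
#include <unistd.h>
#include <sys/socket.h>

#define NCHILDREN 5   /* pool size: an illustrative choice */

int str_echo(int sockfd);   /* from the listings above */

void prefork_loop(int listenfd)
{
    int i, connfd;
    for (i = 0; i < NCHILDREN; i++)
        if (fork() == 0)                       /* child: an iterative server */
            for (;;) {
                connfd = accept(listenfd, NULL, NULL);
                if (connfd < 0)
                    continue;
                str_echo(connfd);
                close(connfd);
            }
    for (;;)
        pause();   /* parent: real code would monitor and restart children */
}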
Preforking
As the book shows, having too many preforked children can be bad.
Using dynamic process allocation instead of a hard-coded number of children can avoid problems.
The parent process just manages the children, doesn’t worry about clients.
Sockets library vs. system call
A preforked TCP server won’t usually work the way we want if sockets is not part of the kernel:
calling accept() is a library call, not an atomic operation.
We can get around this by making sure only one child calls accept() at a time using some locking scheme.
Prethreaded Server
Same benefits as preforking.
Can also have the main thread do all the calls to accept() and hand off each client to an existing thread.
What’s the best server design for my application?
Many factors:
expected number of simultaneous clients.
Transaction size (time to compute or lookup the answer)
Variability in transaction size.
Available system resources (perhaps what resources can be required in order to run the service).
Server Design
It is important to understand the issues and options.
Knowledge of queuing theory can be a big help.
You might need to test a few alternatives to determine the best design.
Socket API
Socket API originated with the 4.2 BSD system released in 1983
Sockets – A way to speak to other programs using UNIX file descriptors.
A file descriptor is an integer associated with an open file.This can be a network connection
Kinds of Sockets-DARPA Internet addresses(Internet Sockets) , Unix Sockets, X.25 Sockets etc
Types of Internet Sockets
SOCK_STREAM uses TCP (Transmission Control Protocol) Connection oriented and Reliable
SOCK_DGRAM uses UDP (User Datagram Protocol)
Connectionless and Unreliable
Structs and Data Handling
A socket descriptor is of type int
Byte ordering
Most significant byte first – Network byte order (Big Endian)
Least significant byte first – Host Byte order ( Little ..)
Socket Structures in Network byte order
struct sockaddr { unsigned short sa_family; // address family, AF_xxx char sa_data[14]; // 14 bytes of protocol address };
struct sockaddr_in { short int sin_family; // Address family
unsigned short int sin_port; // Port number
struct in_addr sin_addr; // Internet address
unsigned char sin_zero[8]; // Same size as struct sockaddr };
Convert the Natives
struct in_addr { unsigned long s_addr; // 32-bit long, or 4 bytes };
If ina is of type struct sockaddr_in
ina.sin_addr.s_addr references the 4-byte IP address (in Network Byte Order
htons() – Host to Network Short
htonl() -- "Host to Network Long"
ntohs() -- "Network to Host Short"
ntohl() -- "Network to Host Long"
IP Addresses
socket01.utdallas.edu 129.110.43.11
sol2.utdallas.edu 129.110.34.2 etc
Other UTD machines for use socket02 – socket06 , sol1 , jupiter
Please do not use apache for Network programming
inet_addr() converts an IP address in numbers-and-dots notation into unsigned long
ina.sin_addr.s_addr = inet_addr(“129.110.43.11”) // Network byte order
Also can use inet_aton() -- “ascii to network”
int inet_aton(const char *cp,struct in_addr *inp);
inet_ntoa returns a string from a struct of type in_addr
inet_ntoa(ina.sin_addr) ;
Useful UNIX Commands
netstat –i prints information about the interfaces
netstat –ni prints this information using numeric addresses
loop back interface is called lo and the ethernet interface is called eth0 or le0 depending on the machine
netstat –r prints the routing table
netstat | grep PORT_NO shows the state of the client socket
ifconfig eth0 – Given the interface name ifconfig gives the details for each interface --- Ethernet Addr , inet_addr , Bcast , Mask , MTU
ping IP_addr -- Sends a packet to the host specified by IP_addr and prints out the roundtrip time ( Uses ICMP messages)
traceroute IP_addr -- Shows the path from this host to the destination printing out the roundtrip time for a packet to each hop in between
Tcpdump communicates directly with Data Link layer UDP Packet fail
System Calls
socket() – returns a socket descriptor
int socket(int domain, int type, int protocol);
bind() – What port I am on / what port to attach to
int bind(int sockfd, struct sockaddr *my_addr, int addrlen);
connect() – Connect to a remote host
int connect(int sockfd, struct sockaddr *serv_addr, int addrlen);
listen() – Waiting for someone to connect to my port
int listen(int sockfd, int backlog);
accept() – Get a file descriptor for a incomming connection
int accept(int sockfd, void *addr, int *addrlen);
send() and recv() – Send and receive data over a connection
int send(int sockfd, const void *msg, int len, int flags);
int recv(int sockfd, void *buf, int len, unsigned int flags);
sendto() and recvfrom() – Send and receive data without connection
int sendto(int sockfd, const void *msg, int len, unsigned int flags, const struct sockaddr *to, int tolen);
int recvfrom(int sockfd, void *buf, int len, unsigned int flags, struct sockaddr *from, int *fromlen);
close() and shutdown() – Close a connection Two way / One way
getpeername() – Obtain the peer name given the socket file descriptor
gethostname() – My computer name
int sock_get_port(const struct sockaddr *sockaddr,socklen_t addrlen);
Useful to get the port number given a struct of type sockaddr
Readn() writen() readline() Read / Write a particular number of bytes
Fork() – To start a new process with parents addr space
Exec() Load a new program on callers addr space
Issues in Client Programming
Identifying the Server.
Looking up a IP address.
Looking up a well known port name.
Specifying a local IP address.
UDP client design.
TCP client design.
Identifying the Server
Options:
hard-coded into the client program.
require that the user identify the server.
read from a configuration file.
use a separate protocol/network service to lookup the identity of the server.
Identifying a TCP/IP server.
Need an IP address, protocol and port.
We often use host names instead of IP addresses.
usually the protocol (UDP vs. TCP) is not specified by the user.
often the port is not specified by the user.
Services and Ports
Many services are available via “well known” addresses (names).
There is a mapping of service names to port numbers:
struct *servent getservbyname( char *service, char *protocol );
servent->s_port is the port number in network byte order.
Specifying a Local Address
When a client creates and binds a socket it must specify a local port and IP address.
Typically a client doesn’t care what port it is on:
haddr->port = htons(0);
Local IP address
A client can also ask the operating system to take care of specifying the local IP address:
haddr->sin_addr.s_addr=
htonl(INADDR_ANY);
UDP Client Design
Establish server address (IP and port).
Allocate a socket.
Specify that any valid local port and IP address can be used.
Communicate with server (send, recv)
Close the socket.
Connected mode UDP
A UDP client can call connect() to establish the address of the server.
The UDP client can then use read() and write() or send() and recv().
A UDP client using a connected mode socket can only talk to one server (using the connected-mode socket).
TCP Client Design
Establish server address (IP and port).
Allocate a socket.
Specify that any valid local port and IP address can be used.
Call connect()
Communicate with server (read,write).
Close the connection.
Closing a TCP socket
Many TCP based application protocols support multiple requests and/or variable length requests over a single TCP connection.
How does the server known when the client is done (and it is OK to close the socket) ?
Partial Close
One solution is for the client to shut down only it’s writing end of the socket.
The shutdown() system call provides this function.
shutdown( int s, int direction);
direction can be 0 to close the reading end or 1 to close the writing end.
shutdown sends info to the other process!
TCP sockets programming
Common problem areas:
null termination of strings.
reads don’t correspond to writes.
synchronization (including close()).
ambiguous protocol.
TCP Reads
Each call to read() on a TCP socket returns any available data (up to a maximum).
TCP buffers data at both ends of the connection.
You must be prepared to accept data 1 byte at a time from a TCP socket!
Server Design
Concurrent vs. Iterative
An iterative server handles a single client request at one time.
A concurrent server can handle multiple client requests at one time.
Concurrent vs. Iterative
Connectionless vs.Connection-Oriented
Statelessness
State: Information that a server maintains about the status of ongoing client interactions.
Connectionless servers that keep state information must be designed carefully!
The Dangers of Statefullness
Clients can go down at any time.
Client hosts can reboot many times.
The network can lose messages.
The network can duplicate messages.
Concurrent ServerDesign Alternatives
One child per client
Spawn one thread per client
Preforking multiple processes
Prethreaded Server
One child per client
Traditional Unix server:
TCP: after call to accept(), call fork().
UDP: after readfrom(), call fork().
Each process needs only a few sockets.
Small requests can be serviced in a small amount of time.
Parent process needs to clean up after children!!!! (call wait() ).
One thread per client
Almost like using fork() - just call pthread_create instead.
Using threads makes it easier (less overhead) to have sibling processes share information.
Sharing information must be done carefully (use pthread_mutex)
Prefork()’d Server
Creating a new process for each client is expensive.
We can create a bunch of processes, each of which can take care of a client.
Each child process is an iterative server.
Prefork()’d TCP Server
Initial process creates socket and binds to well known address.
Process now calls fork() a bunch of times.
All children call accept().
The next incoming connection will be handed to one child.
Preforking
As the book shows, having too many preforked children can be bad.
Using dynamic process allocation instead of a hard-coded number of children can avoid problems.
The parent process just manages the children, doesn’t worry about clients.
Sockets library vs. system call
A preforked TCP server won’t usually work the way we want if sockets are implemented as a library rather than inside the kernel:
accept() is then a library call, not an atomic operation.
We can get around this by making sure only one child calls accept() at a time using some locking scheme.
Prethreaded Server
Same benefits as preforking.
Can also have the main thread do all the calls to accept() and hand off each client to an existing thread.
What’s the best server design for my application?
Many factors:
expected number of simultaneous clients.
Transaction size (time to compute or lookup the answer)
Variability in transaction size.
Available system resources (or what resources the service may be allowed to consume).
Server Design
It is important to understand the issues and options.
Knowledge of queuing theory can be a big help.
You might need to test a few alternatives to determine the best design.
Unix network programs and computer networks
Meant for MCA II year students, Telangana University, Nizamabad
/* Iterative TCP client */
#include "files.h" /* course header: assumed to pull in the system headers and declare pname, SERV_HOST_ADDRESS and str_cli() */
int main(int argc, char *argv[])
{
    int sockfd;
    struct sockaddr_in serv_addr;
    pname = argv[0];
    if (argc < 2) {
        fprintf(stderr, "usage: %s <port>\n", pname);
        exit(1);
    }
    /* Establish the server address: IPv4, well-known host, port from argv */
    bzero((char *)&serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = inet_addr(SERV_HOST_ADDRESS);
    serv_addr.sin_port = htons(atoi(argv[1]));
    /* Allocate a TCP socket; any valid local port and IP address will do */
    if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        perror("client: socket error");
        exit(1); /* the original fell through here on error */
    }
    if (connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
        perror("client: can't connect to server");
        exit(1);
    }
    str_cli(stdin, sockfd); /* copy stdin to the socket and echo replies */
    close(sockfd);
    exit(0);
}
/* Iterative TCP server program */
#include "file.h" /* course header: assumed to pull in the system headers and declare pname and str_echo() */
int main(int argc, char *argv[])
{
    int sockfd, newsockfd, i;
    socklen_t clilen;
    struct sockaddr_in cli_addr, serv_addr;
    pname = argv[0];
    /* Allocate the listening TCP socket */
    if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        perror("server: can't open stream socket");
        exit(1); /* the original fell through here on error */
    }
    /* Bind a fixed local IP and the port given on the command line */
    bzero((char *)&serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = inet_addr("172.16.0.1");
    /* serv_addr.sin_port = htons(SERV_TCP_PORT); */
    serv_addr.sin_port = htons(atoi(argv[1]));
    if (bind(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
        perror("server: can't bind local address");
        exit(1);
    }
    listen(sockfd, 1); /* backlog of one pending connection */
    /* Iterative server: handle six clients, one at a time */
    for (i = 0; i < 6; ++i) {
        printf("\nServer is waiting for a connection request::\n");
        fflush(stdout);
        clilen = sizeof(cli_addr);
        newsockfd = accept(sockfd, (struct sockaddr *)&cli_addr, &clilen);
        if (newsockfd < 0) {
            perror("server: accept error");
            exit(1);
        }
        str_echo(newsockfd); /* echo lines back until the client closes */
        close(newsockfd);
    }
    printf("\nServer is exiting.....\n");
    return 0;
}
Sunday, January 6, 2008
Data Mining and Warehousing 05-01-2008
Chapter 2: Data Warehousing and OLAP Technology for Data Mining
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
Further development of data cube technology
From data warehousing to data mining
What is a Data Warehouse?
Defined in many different ways, but not rigorously.
A decision support database that is maintained separately from the organization’s operational database
Support information processing by providing a solid platform of consolidated, historical data for analysis.
“A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management’s decision-making process.”—W. H. Inmon
Data warehousing:
The process of constructing and using data warehouses
Data Warehouse—Subject-Oriented
Provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.
Organized around major subjects, such as customer, product, sales.
Focusing on the modeling and analysis of data for decision makers, not on daily operations or transaction processing.
Data Warehouse—Integrated
Constructed by integrating multiple, heterogeneous data sources
relational databases, flat files, on-line transaction records
Data cleaning and data integration techniques are applied.
Ensure consistency in naming conventions, encoding structures, attribute measures, etc. among different data sources
E.g., Hotel price: currency, tax, breakfast covered, etc.
When data is moved to the warehouse, it is converted.
Data Warehouse—Time Variant
The time horizon for the data warehouse is significantly longer than that of operational systems.
Operational database: current value data.
Data warehouse data: provide information from a historical perspective (e.g., past 5-10 years)
Every key structure in the data warehouse
Contains an element of time, explicitly or implicitly
However, the key of operational data may or may not contain “time element”.
Data Warehouse—Non-Volatile
A physically separate store of data transformed from the operational environment.
Operational update of data does not necessarily occur in the data warehouse environment.
Does not require transaction processing, recovery, and concurrency control mechanisms
Often requires only two operations in data accessing:
initial loading of data and access of data.
Data Warehouse vs. Heterogeneous DBMS
Traditional heterogeneous DB integration:
Build wrappers/mediators on top of heterogeneous databases
Query driven approach
A query posed to a client site is translated into queries appropriate for individual heterogeneous sites; The results are integrated into a global answer set
Involving complex information filtering
Competition for resources at local sources
Data warehouse: update-driven, high performance
Information from heterogeneous sources is integrated in advance and stored in warehouses for direct query and analysis
Data Warehouse vs. Operational DBMS
OLTP (on-line transaction processing)
Major task of traditional relational DBMS
Day-to-day operations: purchasing, inventory, banking, manufacturing, payroll, registration, accounting, etc.
OLAP (on-line analytical processing)
Major task of data warehouse system
Data analysis and decision making
Distinct features (OLTP vs. OLAP):
User and system orientation: customer vs. market
Data contents: current, detailed vs. historical, consolidated
Database design: ER + application vs. star + subject
View: current, local vs. evolutionary, integrated
Access patterns: update vs. read-only but complex queries
Why Separate Data Warehouse?
High performance for both systems
DBMS— tuned for OLTP: access methods, indexing, concurrency control, recovery
Warehouse—tuned for OLAP: complex OLAP queries, multidimensional view, consolidation.
Different functions and different data:
Decision support requires historical data which operational DBs do not typically maintain
Decision Support requires consolidation (aggregation, summarization) of data from heterogeneous sources
Different sources typically use inconsistent data representations, codes and formats which have to be reconciled
Chapter 2: Data Warehousing and OLAP Technology for Data Mining
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
Further development of data cube technology
From data warehousing to data mining
A Multi-Dimensional Data Model
A data warehouse is based on a multidimensional data model which views data in the form of a data cube
A data cube allows data to be modeled and viewed in multiple dimensions
Dimension tables, such as item (item_name, brand, type), or time(day, week, month, quarter, year)
Fact table contains measures (such as dollars_sold) and keys to each of the related dimension tables
In data warehousing literature, an n-D base cube is called a base cuboid. The top most 0-D cuboid, which holds the highest-level of summarization, is called the apex cuboid. The lattice of cuboids forms a data cube.
A Sample Data Cube
4-D Data Cube
Cube: A Lattice of Cuboids
Conceptual Modeling of Data Warehouses
Modeling data warehouses: dimensions & measures
Star schema: A fact table in the middle connected to a set of dimension tables
Snowflake schema: A refinement of star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to snowflake
Fact constellations: Multiple fact tables share dimension tables, viewed as a collection of stars, therefore called galaxy schema or fact constellation
Example of Star Schema
Example of Snowflake Schema
Example of Fact Constellation
A Data Mining Query Language, DMQL: Language Primitives
Cube Definition (Fact Table)
define cube <cube_name> [<dimension_list>]: <measure_list>
Dimension Definition (Dimension Table)
define dimension <dimension_name> as (<attribute_or_subdimension_list>)
Special Case (Shared Dimension Tables)
First time as “cube definition”
define dimension <dimension_name> as <dimension_name_first_time> in cube <cube_name_first_time>
Defining a Star Schema in DMQL
define cube sales_star [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)
Defining a Snowflake Schema in DMQL
define cube sales_snowflake [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city(city_key, province_or_state, country))
Defining a Fact Constellation in DMQL
define cube sales [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)
define cube shipping [time, item, shipper, from_location, to_location]:
dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
Measures: Three Categories
Measure: a function evaluated on aggregated data corresponding to given dimension-value pairs.
Measures can be:
distributive: if the measure can be calculated in a distributive manner.
E.g., count(), sum(), min(), max().
algebraic: if it can be computed from arguments obtained by applying distributive aggregate functions.
E.g., avg()=sum()/count(), min_N(), standard_deviation().
holistic: if it is not algebraic.
E.g., median(), mode(), rank().
Measures: Three Categories
Distributive and algebraic measures are ideal for data cubes.
Calculated measures at lower levels can be used directly at higher levels.
Holistic measures can be difficult to calculate efficiently.
Holistic measures could often be efficiently approximated.
Browsing a Data Cube
Visualization
OLAP capabilities
Interactive manipulation
A Concept Hierarchy
Concept hierarchies allow data to be handled at varying levels of abstraction
Typical OLAP Operations (Fig 2.10)
Roll up (drill-up): summarize data
by climbing up concept hierarchy or by dimension reduction
Drill down (roll down): reverse of roll-up
from higher level summary to lower level summary or detailed data, or introducing new dimensions
Slice and dice:
project and select
Pivot (rotate):
reorient the cube, visualization, 3D to series of 2D planes.
Other operations
drill across: involving (across) more than one fact table
drill through: through the bottom level of the cube to its back-end relational tables (using SQL)
Querying Using a Star-Net Model
Chapter 2: Data Warehousing and OLAP Technology for Data Mining
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
Further development of data cube technology
From data warehousing to data mining
Data Warehouse Design Process
Top-down, bottom-up approaches or a combination of both
Top-down: Starts with overall design and planning (mature)
Bottom-up: Starts with experiments and prototypes (rapid)
From software engineering point of view
Waterfall: structured and systematic analysis at each step before proceeding to the next
Spiral: rapid generation of increasingly functional systems, quick modifications, timely adaptation of new designs and technologies
Typical data warehouse design process
Choose a business process to model, e.g., orders, invoices, etc.
Choose the grain (atomic level of data) of the business process
Choose the dimensions that will apply to each fact table record
Choose the measure that will populate each fact table record
Three Data Warehouse Models
Enterprise warehouse
collects all of the information about subjects spanning the entire organization
Data Mart
a subset of corporate-wide data that is of value to a specific group of users. Its scope is confined to specific, selected groups, such as a marketing data mart
Independent vs. dependent (directly from warehouse) data mart
Virtual warehouse
A set of views over operational databases
Only some of the possible summary views may be materialized
OLAP Server Architectures
Relational OLAP (ROLAP)
Use relational or extended-relational DBMS to store and manage warehouse data
Include optimization of DBMS backend and additional tools and services
greater scalability
Multidimensional OLAP (MOLAP)
Array-based multidimensional storage engine (sparse matrix techniques)
fast indexing to pre-computed summarized data
Hybrid OLAP (HOLAP)
User flexibility (low level: relational, high-level: array)
Specialized SQL servers
specialized support for SQL queries over star/snowflake schemas
Chapter 2: Data Warehousing and OLAP Technology for Data Mining
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
Further development of data cube technology
From data warehousing to data mining
Efficient Data Cube Computation
Data cube can be viewed as a lattice of cuboids
The bottom-most cuboid is the base cuboid
The top-most cuboid (apex) contains only one cell
How many cuboids are there in an n-dimensional cube with L levels? T = (L1 + 1) × (L2 + 1) × … × (Ln + 1), where Li is the number of hierarchy levels of dimension i.
Materialization of data cube
Materialize every cuboid (full materialization), none (no materialization), or some (partial materialization)
Selection of which cuboids to materialize
Based on size, sharing, access frequency, etc.
Cube Operation
Cube definition and computation in DMQL
define cube sales[item, city, year]: sum(sales_in_dollars)
compute cube sales
Transform it into a SQL-like language (with a new operator cube by, introduced by Gray et al.’96)
SELECT item, city, year, SUM (amount)
FROM SALES
CUBE BY item, city, year
Need to compute the following group-bys:
(item, city, year),
(item, city), (item, year), (city, year),
(item), (city), (year),
()
Cube Computation: ROLAP vs. MOLAP
ROLAP-based cubing algorithms
Key-based addressing
Sorting, hashing, and grouping operations are applied to the dimension attributes to reorder and cluster related tuples
Aggregates may be computed from previously computed aggregates, rather than from the base fact table
MOLAP-based cubing algorithms
Direct array addressing
Partition the array into chunks that fit the memory
Compute aggregates by visiting cube chunks
Possible to exploit ordering of chunks for faster calculation
Multiway Array Aggregation for MOLAP
Partition arrays into chunks (a small subcube which fits in memory).
Compressed sparse array addressing: (chunk_id, offset)
Compute aggregates in “multiway” by visiting cube cells in the order which minimizes the # of times to visit each cell, and reduces memory access and storage cost.
Method: the planes should be sorted and computed according to their size in ascending order.
The proposed scan is optimal if |C|>|B|>|A|
See the details of Example 2.12 (pp. 75-78)
MOLAP cube computation is faster than ROLAP.
Limitation of MOLAP: it computes well only for a small number of dimensions.
For a large number of dimensions, use iceberg cube computation: process only “dense” chunks.
Indexing OLAP Data: Bitmap Index
Suitable for low cardinality domains
Index on a particular column
Each value in the column has a bit vector: bit-op is fast
The length of the bit vector: # of records in the base table
The i-th bit is set if the i-th row of the base table has the value for the indexed column
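A sketch of the idea over a made-up 8-row table with a 3-value column (one byte per bit, for clarity):
/* Bitmap index sketch: one bit vector per distinct column value */
#include <stdio.h>
#include <string.h>
#define NROWS 8
#define NVALUES 3 /* cardinality of the indexed column */
int main(void)
{
    /* hypothetical low-cardinality column: region codes 0..2 */
    int region[NROWS] = {0, 2, 1, 0, 2, 2, 1, 0};
    unsigned char bitmap[NVALUES][NROWS];
    memset(bitmap, 0, sizeof(bitmap));
    /* set the i-th bit of vector v when row i holds value v */
    for (int i = 0; i < NROWS; ++i)
        bitmap[region[i]][i] = 1;
    /* bit-ops answer queries fast: rows with region 0 OR region 1 */
    for (int i = 0; i < NROWS; ++i)
        if (bitmap[0][i] | bitmap[1][i])
            printf("row %d matches\n", i);
    return 0;
}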
Indexing OLAP Data: Join Indices
A join index materializes a relational join, speeding up what is otherwise a rather costly operation.
In data warehouses, a join index relates the values of the dimensions of a star schema to rows in the fact table.
E.g. fact table: Sales and two dimensions location and item
A join index on location is a list of pairs sorted by location
A join index on location-and-item is a list of triples sorted by location and item names
Search of a join index can still be slow
Bitmapped join index allows speed-up by using bit vectors instead of dimension attribute names
Online Aggregation
Consider an aggregate query:
“finding the average sales by state”
Can we provide the user with some information before the exact average is computed for all states?
Solution: show the current “running average” for each state as the computation proceeds.
Even better, if we use statistical techniques and sample tuples to aggregate instead of simply scanning the whole table, we can provide bounds such as “the average for Wisconsin is 2000±102 with 95% probability.”
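A sketch of the running-average idea over a made-up stream of values for one state:
/* Online aggregation sketch: report a running average mid-scan */
#include <stdio.h>
int main(void)
{
    /* hypothetical stream of sales tuples for one state */
    double sales[] = {120.0, 95.5, 210.0, 180.25, 99.0};
    int n = sizeof(sales) / sizeof(sales[0]);
    double sum = 0.0;
    /* show the current average after each tuple, before the scan ends */
    for (int i = 0; i < n; ++i) {
        sum += sales[i];
        printf("after %d tuples: running average = %.2f\n", i + 1, sum / (i + 1));
    }
    return 0;
}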
Efficient Processing of OLAP Queries
Determine which operations should be performed on the available cuboids:
transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice = selection + projection
Determine to which materialized cuboid(s) the relevant operations should be applied.
Exploring indexing structures and compressed vs. dense array structures in MOLAP (trade-off between indexing and storage performance)
Metadata Repository
Metadata is the data that defines warehouse objects. It includes the following kinds:
Description of the structure of the warehouse
schema, view, dimensions, hierarchies, derived data definitions, data mart locations and contents
Operational metadata
data lineage (history of migrated data and transformation path), currency of data (active, archived, or purged), monitoring information (warehouse usage statistics, error reports, audit trails)
The algorithms used for summarization
The mapping from operational environment to the data warehouse
Data related to system performance
warehouse schema, view and derived data definitions
Business data
business terms and definitions, ownership of data, charging policies
Data Warehouse Back-End Tools and Utilities
Data extraction:
get data from multiple, heterogeneous, and external sources
Data cleaning:
detect errors in the data and rectify them when possible
Data transformation:
convert data from legacy or host format to warehouse format
Load:
sort, summarize, consolidate, compute views, check integrity, and build indices and partitions
Refresh
propagate the updates from the data sources to the warehouse
Chapter 2: Data Warehousing and OLAP Technology for Data Mining
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
Further development of data cube technology
From data warehousing to data mining
Discovery-Driven Exploration of Data Cubes
Hypothesis-driven: exploration by user, huge search space
Discovery-driven (Sarawagi et al.’98)
pre-compute measures indicating exceptions, guide user in the data analysis, at all levels of aggregation
Exception: significantly different from the value anticipated, based on a statistical model
Visual cues such as background color are used to reflect the degree of exception of each cell
Computation of exception indicator can be overlapped with cube construction
Examples: Discovery-Driven Data Cubes
Chapter 2: Data Warehousing and OLAP Technology for Data Mining
What is a data warehouse?
A multi-dimensional data model
Data warehouse architecture
Data warehouse implementation
Further development of data cube technology
From data warehousing to data mining
Data Warehouse Usage
Three kinds of data warehouse applications
Information processing
supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs
Analytical processing
multidimensional analysis of data warehouse data
supports basic OLAP operations, slice-dice, drilling, pivoting
Data mining
knowledge discovery from hidden patterns
supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools.
Differences among the three tasks
From On-Line Analytical Processing to On-Line Analytical Mining (OLAM)
Why online analytical mining?
High quality of data in data warehouses
DW contains integrated, consistent, cleaned data
Available information processing structure surrounding data warehouses
ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP tools
OLAP-based exploratory data analysis
mining with drilling, dicing, pivoting, etc.
On-line selection of data mining functions
integration and swapping of multiple mining functions, algorithms, and tasks.
Architecture of OLAM
Summary
Data warehouse
A subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management’s decision-making process
A multi-dimensional model of a data warehouse
Star schema, snowflake schema, fact constellations
A data cube consists of dimensions & measures
OLAP operations: drilling, rolling, slicing, dicing and pivoting
OLAP servers: ROLAP, MOLAP, HOLAP
Efficient computation of data cubes
Partial vs. full vs. no materialization
Multiway array aggregation
Bitmap index and join index implementations
Further development of data cube technology
Discovery-driven and multi-feature cubes
From OLAP to OLAM (on-line analytical mining)
Data Mining and Warehousing 20-12-2007
Audience: III Year II sem engg. students + MCA II year Student
Jayaprakash narayan College of Engg. Mahboobnagar
Jpnce, 20-12-2007
CS05158: Data Warehousing and Mining
Lecture 1
• Course syllabus
• Overview of data warehousing and mining
Lecture slides modified from:
– Jiawei Han (http://www-sal.cs.uiuc.edu/~hanj/DM_Book.html)
– Vipin Kumar (http://www-users.cs.umn.edu/~kumar/csci5980/index.html)
– Ad Feelders (http://www.cs.uu.nl/docs/vakken/adm/)
– Zdravko Markov (http://www.cs.ccsu.edu/~markov/ccsu_courses/DataMining-1.html)
Rajesh Kulkarni
rrkpv2002@gmail.com
http://rkstechnofusion.blogspot.com
http://children-off-lesser-gods.blogspot.com
Course Syllabus
Textbook:
(required) J. Han, M. Kamber, Data Mining: Concepts and Techniques, 2001.
Data Mining Techniques by Arun K. Pujari.
Data Warehousing in the Real World by S. Anahory and D. Murray.
Topics: Unit 1
– Overview of data warehousing and mining
– Data Mining Functionalities
– Classification of Data Mining Systems
– Major Issues in Data Mining
– Data warehouse and OLAP technology for data mining
Motivation:
“Necessity is the Mother of Invention”
• Data explosion problem
– Automated data collection tools and mature database technology lead to tremendous amounts of data stored in databases, data warehouses and other information repositories
• We are drowning in data, but starving for knowledge!
• Solution: Data warehousing and data mining
– Data warehousing and on-line analytical processing
– Extraction of interesting knowledge (rules, regularities, patterns, constraints) from data in large databases
Why Mine Data? Commercial Viewpoint
• Lots of data is being collected and warehoused
– Web data, e-commerce
– purchases at department/grocery stores
– Bank/Credit Card transactions
• Computers have become cheaper and more powerful
• Competitive Pressure is Strong
– Provide better, customized services for an edge (e.g. in Customer Relationship Management)
Why Mine Data? Scientific Viewpoint
• Data collected and stored at enormous speeds (GB/hour)
– remote sensors on a satellite
– telescopes scanning the skies
– microarrays generating gene expression data
– scientific simulations generating terabytes of data
• Traditional techniques infeasible for raw data
• Data mining may help scientists
– in classifying and segmenting data
– in Hypothesis Formation
What Is Data Mining?
• Data mining (knowledge discovery in databases):
– Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from data in large databases
• Alternative names and their “inside stories”:
– Data mining: a misnomer?
– Knowledge discovery(mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, business intelligence, etc.
Examples: What is (not) Data Mining?
Data Mining: Classification Schemes
• Decisions in data mining
– Kinds of databases to be mined
– Kinds of knowledge to be discovered
– Kinds of techniques utilized
– Kinds of applications adapted
• Data mining tasks
– Descriptive data mining
– Predictive data mining
Decisions in Data Mining
• Databases to be mined
– Relational, transactional, object-oriented, object-relational, active, spatial, time-series, text, multi-media, heterogeneous, legacy, WWW, etc.
• Knowledge to be mined
– Characterization, discrimination, association, classification, clustering, trend, deviation and outlier analysis, etc.
– Multiple/integrated functions and mining at multiple levels
• Techniques utilized
– Database-oriented, data warehouse (OLAP), machine learning, statistics, visualization, neural network, etc.
• Applications adapted
– Retail, telecommunication, banking, fraud analysis, DNA mining, stock market analysis, Web mining, Weblog analysis, etc.
Data Mining Tasks
• Prediction Tasks
– Use some variables to predict unknown or future values of other variables
• Description Tasks
– Find human-interpretable patterns that describe the data.
Common data mining tasks
– Classification [Predictive]
– Clustering [Descriptive]
– Association Rule Discovery [Descriptive]
– Sequential Pattern Discovery [Descriptive]
– Regression [Predictive]
– Deviation Detection [Predictive]
Classification: Definition
• Given a collection of records (training set )
– Each record contains a set of attributes, one of the attributes is the class.
• Find a model for class attribute as a function of the values of other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with training set used to build the model and test set used to validate it.
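A sketch of the train/test discipline, with 1-nearest-neighbour standing in for the model (records and labels are made up; this is not a method prescribed by the slides):
/* Build a model on a training set, measure accuracy on a test set */
#include <stdio.h>
#define NTRAIN 4
#define NTEST 2
#define NDIM 2
static double train_x[NTRAIN][NDIM] = {{1, 1}, {1, 2}, {8, 8}, {9, 7}};
static int train_y[NTRAIN] = {0, 0, 1, 1};
static double test_x[NTEST][NDIM] = {{2, 1}, {8, 9}};
static int test_y[NTEST] = {0, 1};
/* 1-NN model: predict the class of the closest training record */
static int classify(const double *x)
{
    int best = 0;
    double bestd = 1e30;
    for (int i = 0; i < NTRAIN; ++i) {
        double d = 0.0;
        for (int j = 0; j < NDIM; ++j)
            d += (x[j] - train_x[i][j]) * (x[j] - train_x[i][j]);
        if (d < bestd) { bestd = d; best = i; }
    }
    return train_y[best];
}
int main(void)
{
    int correct = 0;
    for (int i = 0; i < NTEST; ++i) /* held-out test set measures accuracy */
        if (classify(test_x[i]) == test_y[i])
            ++correct;
    printf("test accuracy: %d/%d\n", correct, NTEST);
    return 0;
}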
Classification Example
Classification: Application 1
• Direct Marketing
– Goal: Reduce cost of mailing by targeting a set of consumers likely to buy a new cell-phone product.
– Approach:
• Use the data for a similar product introduced before.
• We know which customers decided to buy and which decided otherwise. This {buy, don’t buy} decision forms the class attribute.
• Collect various demographic, lifestyle, and company-interaction related information about all such customers.
– Type of business, where they stay, how much they earn, etc.
• Use this information as input attributes to learn a classifier model.
Classification: Application 2
• Fraud Detection
– Goal: Predict fraudulent cases in credit card transactions.
– Approach:
• Use credit card transactions and the information on its account-holder as attributes.
– When does a customer buy, what does he buy, how often does he pay on time, etc.
• Label past transactions as fraud or fair transactions. This forms the class attribute.
• Learn a model for the class of the transactions.
• Use this model to detect fraud by observing credit card transactions on an account.
Classification: Application 3
• Customer Attrition/Churn:
– Goal: To predict whether a customer is likely to be lost to a competitor.
– Approach:
• Use detailed record of transactions with each of the past and present customers, to find attributes.
– How often the customer calls, where he calls, what time-of-the day he calls most, his financial status, marital status, etc.
• Label the customers as loyal or disloyal.
• Find a model for loyalty.
Classification: Application 4
• Sky Survey Cataloging
– Goal: To predict class (star or galaxy) of sky objects, especially visually faint ones, based on the telescopic survey images (from Palomar Observatory).
– 3000 images with 23,040 x 23,040 pixels per image.
– Approach:
• Segment the image.
• Measure image attributes (features) - 40 of them per object.
• Model the class based on these features.
• Success Story: Could find 16 new high red-shift quasars, some of the farthest objects that are difficult to find!
Classifying Galaxies
Clustering Definition
• Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that
– Data points in one cluster are more similar to one another.
– Data points in separate clusters are less similar to one another.
• Similarity Measures:
– Euclidean Distance if attributes are continuous.
– Other Problem-specific Measures.
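A sketch of the continuous-attribute case: Euclidean distance used to assign made-up points to the nearer of two assumed cluster centres (compile with -lm):
/* Euclidean distance and nearest-centre assignment */
#include <stdio.h>
#include <math.h>
#define NDIM 2
static double dist(const double *a, const double *b)
{
    double s = 0.0;
    for (int j = 0; j < NDIM; ++j)
        s += (a[j] - b[j]) * (a[j] - b[j]);
    return sqrt(s);
}
int main(void)
{
    double centre[2][NDIM] = {{1.0, 1.0}, {8.0, 8.0}}; /* assumed centres */
    double pts[4][NDIM] = {{0, 2}, {2, 1}, {7, 9}, {9, 8}};
    /* points in one cluster end up closer to their own centre
       than to the other cluster's centre */
    for (int i = 0; i < 4; ++i) {
        int c = dist(pts[i], centre[0]) <= dist(pts[i], centre[1]) ? 0 : 1;
        printf("point %d -> cluster %d\n", i, c);
    }
    return 0;
}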
Illustrating Clustering
Clustering: Application 1
• Market Segmentation:
– Goal: subdivide a market into distinct subsets of customers where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix.
– Approach:
• Collect different attributes of customers based on their geographical and lifestyle related information.
• Find clusters of similar customers.
• Measure the clustering quality by observing buying patterns of customers in same cluster vs. those from different clusters.
Clustering: Application 2
• Document Clustering:
– Goal: To find groups of documents that are similar to each other based on the important terms appearing in them.
– Approach: To identify frequently occurring terms in each document. Form a similarity measure based on the frequencies of different terms. Use it to cluster.
– Gain: Information Retrieval can utilize the clusters to relate a new document or search term to clustered documents.
Association Rule Discovery: Definition
• Given a set of records each of which contain some number of items from a given collection;
– Produce dependency rules which will predict occurrence of an item based on occurrences of other items.
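A sketch of the underlying counting over made-up transactions: how often the consequent item occurs among records that contain the antecedent item:
/* Count how strongly item 0 predicts item 1 in toy baskets */
#include <stdio.h>
#define NTRANS 5
#define NITEMS 4 /* item ids 0..3; say 0 = bagels, 1 = potato chips */
int main(void)
{
    /* hypothetical records: basket[t][i] = 1 if transaction t holds item i */
    int basket[NTRANS][NITEMS] = {
        {1, 1, 0, 0},
        {1, 1, 1, 0},
        {0, 1, 0, 1},
        {1, 0, 0, 1},
        {1, 1, 0, 1},
    };
    int with_a = 0, with_both = 0;
    for (int t = 0; t < NTRANS; ++t) {
        if (basket[t][0]) {   /* antecedent present */
            ++with_a;
            if (basket[t][1]) /* consequent present too */
                ++with_both;
        }
    }
    /* fraction of antecedent transactions that also hold the consequent */
    printf("rule holds in %d of %d antecedent transactions\n", with_both, with_a);
    return 0;
}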
Association Rule Discovery: Application 1
• Marketing and Sales Promotion:
– Let the rule discovered be
{Bagels, … } --> {Potato Chips}
– Potato Chips as consequent => Can be used to determine what should be done to boost its sales.
– Bagels in the antecedent => Can be used to see which products would be affected if the store discontinues selling bagels.
– Bagels in antecedent and Potato chips in consequent => Can be used to see what products should be sold with Bagels to promote sale of Potato chips!
Association Rule Discovery: Application 2
• Supermarket shelf management.
– Goal: To identify items that are bought together by sufficiently many customers.
– Approach: Process the point-of-sale data collected with barcode scanners to find dependencies among items.
– A classic rule --
• If a customer buys diapers and milk, then he is very likely to buy beer.
The Sad Truth About Diapers and Beer
• So, don’t be surprised if you find six-packs stacked next to diapers!
Sequential Pattern Discovery: Definition
Given a set of objects, each associated with its own timeline of events, find rules that predict strong sequential dependencies among different events:
– In telecommunications alarm logs,
• (Inverter_Problem Excessive_Line_Current) (Rectifier_Alarm) --> (Fire_Alarm)
– In point-of-sale transaction sequences,
• Computer Bookstore:
(Intro_To_Visual_C) (C++_Primer) --> (Perl_for_dummies,Tcl_Tk)
• Athletic Apparel Store:
(Shoes) (Racket, Racketball) --> (Sports_Jacket)
Regression
• Predict a value of a given continuous valued variable based on the values of other variables, assuming a linear or nonlinear model of dependency.
• Greatly studied in statistics, neural network fields.
• Examples:
– Predicting sales amounts of a new product based on advertising expenditure.
– Predicting wind velocities as a function of temperature, humidity, air pressure, etc.
– Time series prediction of stock market indices.
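A sketch of the linear case: closed-form least squares for y = a + b*x over made-up advertising/sales pairs:
/* Simple linear regression by least squares */
#include <stdio.h>
int main(void)
{
    /* hypothetical pairs: advertising expenditure vs. sales */
    double x[] = {1, 2, 3, 4, 5};
    double y[] = {2.1, 3.9, 6.2, 7.8, 10.1};
    int n = 5;
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    /* normal equations for the linear model y = a + b*x */
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double a = (sy - b * sx) / n;
    printf("fitted model: y = %.3f + %.3f x\n", a, b);
    return 0;
}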
Deviation/Anomaly Detection
• Detect significant deviations from normal behavior
• Applications:
– Credit Card Fraud Detection
– Network Intrusion Detection
Data Mining and Induction Principle
Induction vs Deduction
• Deductive reasoning is truth-preserving:
– All horses are mammals
– All mammals have lungs
– Therefore, all horses have lungs
• Inductive reasoning adds information:
– All horses observed so far have lungs.
– Therefore, all horses have lungs.
The Problems with Induction
From true facts, we may induce false models.
Prototypical example:
– European swans are all white.
– Induce: ”Swans are white” as a general rule.
– Discover Australia and black swans...
– Problem: the set of examples is not random and representative
Another example: distinguish US tanks from Iraqi tanks
– Method: Database of pictures split in train set and test set; Classification model built on train set
– Result: Good predictive accuracy on test set; bad score on independent pictures
– Why did it go wrong: other distinguishing features in the pictures (hangar versus desert)
Hypothesis-Based vs. Exploratory-Based
• The hypothesis-based method:
– Formulate a hypothesis of interest.
– Design an experiment that will yield data to test this hypothesis.
– Accept or reject hypothesis depending on the outcome.
• Exploratory-based method:
– Try to make sense of a bunch of data without an a priori hypothesis!
– The only prevention against false results is significance:
• ensure statistical significance (using train and test etc.)
• ensure domain significance (i.e., make sure that the results make sense to a domain expert)
Hypothesis-Based vs. Exploratory-Based
• Experimental Scientist:
– Assign level of fertilizer randomly to plot of land.
– Control for: quality of soil, amount of sunlight,...
– Compare mean yield of fertilized and unfertilized plots.
• Data Miner:
– Notices that the yield is somewhat higher under trees where birds roost.
– Conclusion: droppings increase yield.
– Alternative conclusion: moderate amount of shade increases yield. (“Identification Problem”)
Data Mining: A KDD Process
– Data mining: the core of the knowledge discovery process.
Steps of a KDD Process
• Learning the application domain:
– relevant prior knowledge and goals of application
• Creating a target data set: data selection
• Data cleaning and preprocessing: (may take 60% of effort!)
• Data reduction and transformation:
– Find useful features, dimensionality/variable reduction, invariant representation.
• Choosing functions of data mining
– summarization, classification, regression, association, clustering.
• Choosing the mining algorithm(s)
• Data mining: search for patterns of interest
• Pattern evaluation and knowledge presentation
– visualization, transformation, removing redundant patterns, etc.
• Use of discovered knowledge
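Read end to end, the steps above form a pipeline. A toy sketch of that pipeline; the records and the trivial frequency-count "mining" step are invented for illustration:

# Toy KDD pipeline: select, clean, reduce, mine, evaluate.
from collections import Counter

raw = [("sunny", 85), ("rainy", None), ("sunny", 80), ("rainy", 68)]  # target data set

clean = [r for r in raw if None not in r]          # cleaning: drop missing values
outlooks = [outlook for outlook, _temp in clean]   # reduction: keep one feature

patterns = Counter(outlooks)                       # "mining": summarization by counts

for outlook, count in patterns.items():            # evaluation / presentation
    if count >= 2:                                 # illustrative support threshold
        print(f"frequent value: {outlook} (count = {count})")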
Data Mining and Business Intelligence
Data Mining: On What Kind of Data?
• Relational databases
• Data warehouses
• Transactional databases
• Advanced DB and information repositories
– Object-oriented and object-relational databases
– Spatial databases
– Time-series data and temporal data
– Text databases and multimedia databases
– Heterogeneous and legacy databases
– WWW
Data Mining: Confluence of Multiple Disciplines
Data Mining vs. Statistical Analysis
Statistical Analysis:
• Ill-suited for Nominal and Structured Data Types
• Completely data driven - incorporation of domain knowledge not possible
• Interpretation of results is difficult and daunting
• Requires expert user guidance
Data Mining:
• Large Data sets
• Efficiency of Algorithms is important
• Scalability of Algorithms is important
• Real World Data
• Lots of Missing Values
• Pre-existing data - not user generated
• Data not static - prone to updates
• Efficient methods for data retrieval available for use
Data Mining vs. DBMS
• Example DBMS Reports
– Last month's sales for each service type
– Sales per service grouped by customer sex or age bracket
– List of customers who lapsed their policy
• Questions answered using Data Mining
– What characteristics do customers that lapse their policy have in common and how do they differ from customers who renew their policy?
– Which motor insurance policy holders would be potential customers for my House Content Insurance policy?
Data Mining and Data Warehousing
• Data Warehouse: a centralized data repository which can be queried for business benefit.
• Data Warehousing makes it possible to
– extract archived operational data
– overcome inconsistencies between different legacy data formats
– integrate data throughout an enterprise, regardless of location, format, or communication requirements
– incorporate additional or expert information
• OLAP: On-line Analytical Processing
• Multi-Dimensional Data Model (Data Cube)
• Operations:
– Roll-up
– Drill-down
– Slice and dice
– Rotate
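As a rough analogy (not a real OLAP engine), roll-up and slice can be imitated with pandas group-bys; the fact table and figures below are invented for illustration:

# Imitating roll-up and slice on a tiny fact table (invented figures).
import pandas as pd

sales = pd.DataFrame({
    "year":    [2006, 2006, 2006, 2007, 2007, 2007],
    "quarter": ["Q1", "Q1", "Q2", "Q1", "Q2", "Q2"],
    "region":  ["North", "South", "North", "North", "South", "North"],
    "amount":  [100, 80, 120, 110, 90, 130],
})

# Roll-up: aggregate the quarter dimension away (quarter -> year)
print(sales.groupby(["year", "region"])["amount"].sum())

# Slice: fix one dimension (year == 2007)
print(sales[sales["year"] == 2007])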
DBMS, OLAP, and Data Mining
Example of DBMS, OLAP and Data Mining: Weather Data
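The table itself was lost in the slide conversion; what follows is the standard 14-day "weather" data set, which is consistent with every answer set quoted below:

Day  Outlook   Temperature  Humidity  Windy  Play
1    sunny     85           85        false  no
2    sunny     80           90        true   no
3    overcast  83           86        false  yes
4    rainy     70           96        false  yes
5    rainy     68           80        false  yes
6    rainy     65           70        true   no
7    overcast  64           65        true   yes
8    sunny     72           95        false  no
9    sunny     69           70        false  yes
10   rainy     75           80        false  yes
11   sunny     75           70        true   yes
12   overcast  72           90        true   yes
13   overcast  81           75        false  yes
14   rainy     71           91        true   no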
• By querying a DBMS containing the above table we may answer questions like:
• What was the temperature on the sunny days? {85, 80, 72, 69, 75}
• On which days was the humidity less than 75? {6, 7, 9, 11}
• On which days was the temperature greater than 70? {1, 2, 3, 8, 10, 11, 12, 13, 14}
• On which days was the temperature greater than 70 and the humidity less than 75? The intersection of the above two: {11}
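The same four queries, sketched in pandas against the table above (day numbers are 1-based, as on the slide):

# The four DBMS-style queries over the weather table.
import pandas as pd

weather = pd.DataFrame({
    "day":      list(range(1, 15)),
    "outlook":  ["sunny", "sunny", "overcast", "rainy", "rainy", "rainy", "overcast",
                 "sunny", "sunny", "rainy", "sunny", "overcast", "overcast", "rainy"],
    "temp":     [85, 80, 83, 70, 68, 65, 64, 72, 69, 75, 75, 72, 81, 71],
    "humidity": [85, 90, 86, 96, 80, 70, 65, 95, 70, 80, 70, 90, 75, 91],
})

print(weather.loc[weather.outlook == "sunny", "temp"].tolist())   # [85, 80, 72, 69, 75]
print(weather.loc[weather.humidity < 75, "day"].tolist())         # [6, 7, 9, 11]
print(weather.loc[weather.temp > 70, "day"].tolist())             # [1, 2, 3, 8, 10, 11, 12, 13, 14]
print(weather.loc[(weather.temp > 70) & (weather.humidity < 75), "day"].tolist())  # [11]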
Example of DBMS, OLAP and Data Mining: Weather Data
OLAP:
• Using OLAP we can create a Multidimensional Model of our data (Data Cube).
• For example, using the dimensions time, outlook, and play, we can create the following model.
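A sketch of one face of that cube, taking just the outlook and play dimensions and counting days per cell (the play labels come from the standard weather data set shown earlier):

# Counting days in each (outlook, play) cell of the cube.
import pandas as pd

outlook = ["sunny", "sunny", "overcast", "rainy", "rainy", "rainy", "overcast",
           "sunny", "sunny", "rainy", "sunny", "overcast", "overcast", "rainy"]
play = ["no", "no", "yes", "yes", "yes", "no", "yes",
        "no", "yes", "yes", "yes", "yes", "yes", "no"]

print(pd.crosstab(pd.Series(outlook, name="outlook"),
                  pd.Series(play, name="play")))
# play      no  yes
# outlook
# overcast   0    4
# rainy      2    3
# sunny      3    2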
Major Issues in Data Warehousing and Mining
• Mining methodology and user interaction
– Mining different kinds of knowledge in databases
– Interactive mining of knowledge at multiple levels of abstraction
– Incorporation of background knowledge
– Data mining query languages and ad-hoc data mining
– Expression and visualization of data mining results
– Handling noise and incomplete data
– Pattern evaluation: the interestingness problem
• Performance and scalability
– Efficiency and scalability of data mining algorithms
– Parallel, distributed and incremental mining methods
Major Issues in Data Warehousing and Mining
• Issues relating to the diversity of data types
– Handling relational and complex types of data
– Mining information from heterogeneous databases and global information systems (WWW)
• Issues related to applications and social impacts
– Application of discovered knowledge
• Domain-specific data mining tools
• Intelligent query answering
• Process control and decision making
– Integration of the discovered knowledge with existing knowledge: A knowledge fusion problem
– Protection of data security, integrity, and privacy
Jayaprakash Narayan College of Engg., Mahboobnagar
JPNCE, 20-12-2007
CS05158: Data Warehousing and Mining
Lecture 1
• Course syllabus
• Overview of data warehousing and mining
Lecture slides modified from:
– Jiawei Han (http://www-sal.cs.uiuc.edu/~hanj/DM_Book.html)
– Vipin Kumar (http://www-users.cs.umn.edu/~kumar/csci5980/index.html)
– Ad Feelders (http://www.cs.uu.nl/docs/vakken/adm/)
– Zdravko Markov (http://www.cs.ccsu.edu/~markov/ccsu_courses/DataMining-1.html)
Rajesh Kulkarni
rrkpv2002@gmail.com
http://rkstechnofusion.blogspot.com
http://children-off-lesser-gods.blogspot.com
Course Syllabus
Textbook:
(required) J. Han, M. Kamber, Data Mining: Concepts and Techniques, 2001.
(reference) A. K. Pujari, Data Mining Techniques.
(reference) S. Anahory, D. Murray, Data Warehousing in the Real World.
Topics: Unit 1
– Overview of data warehousing and mining
– Data Mining Functionalities
– Classification of Data Mining Systems
– Major Issues in Data Mining
– Data warehouse and OLAP technology for data mining
Motivation:
“Necessity is the Mother of Invention”
• Data explosion problem
– Automated data collection tools and mature database technology lead to tremendous amounts of data stored in databases, data warehouses and other information repositories
• We are drowning in data, but starving for knowledge!
• Solution: Data warehousing and data mining
– Data warehousing and on-line analytical processing
– Extraction of interesting knowledge (rules, regularities, patterns, constraints) from data in large databases
Why Mine Data? Commercial Viewpoint
• Lots of data is being collected and warehoused
– Web data, e-commerce
– purchases at department/grocery stores
– bank/credit card transactions
• Computers have become cheaper and more powerful
• Competitive Pressure is Strong
– Provide better, customized services for an edge (e.g. in Customer Relationship Management)
Why Mine Data? Scientific Viewpoint
• Data collected and stored at enormous speeds (GB/hour)
– remote sensors on a satellite
– telescopes scanning the skies
– microarrays generating gene expression data
– scientific simulations generating terabytes of data
• Traditional techniques infeasible for raw data
• Data mining may help scientists
– in classifying and segmenting data
– in Hypothesis Formation
What Is Data Mining?
• Data mining (knowledge discovery in databases):
– Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from data in large databases
• Alternative names and their “inside stories”:
– Data mining: a misnomer?
– Knowledge discovery(mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, business intelligence, etc.
Examples: What is (not) Data Mining?
Data Mining: Classification Schemes
• Decisions in data mining
– Kinds of databases to be mined
– Kinds of knowledge to be discovered
– Kinds of techniques utilized
– Kinds of applications adapted
• Data mining tasks
– Descriptive data mining
– Predictive data mining
Decisions in Data Mining
• Databases to be mined
– Relational, transactional, object-oriented, object-relational, active, spatial, time-series, text, multi-media, heterogeneous, legacy, WWW, etc.
• Knowledge to be mined
– Characterization, discrimination, association, classification, clustering, trend, deviation and outlier analysis, etc.
– Multiple/integrated functions and mining at multiple levels
• Techniques utilized
– Database-oriented, data warehouse (OLAP), machine learning, statistics, visualization, neural network, etc.
• Applications adapted
– Retail, telecommunication, banking, fraud analysis, DNA mining, stock market analysis, Web mining, Weblog analysis, etc.
Data Mining Tasks
• Prediction Tasks
– Use some variables to predict unknown or future values of other variables
• Description Tasks
– Find human-interpretable patterns that describe the data.
Common data mining tasks
– Classification [Predictive]
– Clustering [Descriptive]
– Association Rule Discovery [Descriptive]
– Sequential Pattern Discovery [Descriptive]
– Regression [Predictive]
– Deviation Detection [Predictive]
Classification: Definition
• Given a collection of records (training set)
– Each record contains a set of attributes, one of the attributes is the class.
• Find a model for class attribute as a function of the values of other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with training set used to build the model and test set used to validate it.
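A minimal sketch of this train/test discipline with a 1-nearest-neighbour model; the records, attributes, and class labels are invented for illustration:

# Train/test evaluation of a 1-nearest-neighbour classifier (invented data).
def predict(train, record):
    """Class of the training record closest to `record` (squared Euclidean)."""
    def dist(row):
        return sum((a - b) ** 2 for a, b in zip(row[0], record))
    return min(train, key=dist)[1]

training_set = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
                ((5.0, 5.0), "B"), ((5.5, 4.5), "B")]
test_set = [((0.9, 1.1), "A"), ((5.2, 5.1), "B"), ((1.1, 0.9), "A")]

correct = sum(predict(training_set, x) == y for x, y in test_set)
print(f"test accuracy: {correct}/{len(test_set)}")  # 3/3 here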
Classification Example
Classification: Application 1
• Direct Marketing
– Goal: Reduce cost of mailing by targeting a set of consumers likely to buy a new cell-phone product.
– Approach:
• Use the data for a similar product introduced before.
• We know which customers decided to buy and which decided otherwise. This {buy, don’t buy} decision forms the class attribute.
• Collect various demographic, lifestyle, and company-interaction related information about all such customers.
– Type of business, where they stay, how much they earn, etc.
• Use this information as input attributes to learn a classifier model.
Classification: Application 2
• Fraud Detection
– Goal: Predict fraudulent cases in credit card transactions.
– Approach:
• Use credit card transactions and the information on its account-holder as attributes.
– When does a customer buy, what does he buy, how often does he pay on time, etc.
• Label past transactions as fraud or fair transactions. This forms the class attribute.
• Learn a model for the class of the transactions.
• Use this model to detect fraud by observing credit card transactions on an account.
Classification: Application 3
• Customer Attrition/Churn:
– Goal: To predict whether a customer is likely to be lost to a competitor.
– Approach:
• Use detailed record of transactions with each of the past and present customers, to find attributes.
– How often the customer calls, where he calls, what time of the day he calls most, his financial status, marital status, etc.
• Label the customers as loyal or disloyal.
• Find a model for loyalty.
Classification: Application 4
• Sky Survey Cataloging
– Goal: To predict class (star or galaxy) of sky objects, especially visually faint ones, based on the telescopic survey images (from Palomar Observatory).
– 3000 images with 23,040 x 23,040 pixels per image.
– Approach:
• Segment the image.
• Measure image attributes (features) - 40 of them per object.
• Model the class based on these features.
• Success Story: Could find 16 new high red-shift quasars, some of the farthest objects that are difficult to find!
Classifying Galaxies
Clustering Definition
• Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that
– Data points in one cluster are more similar to one another.
– Data points in separate clusters are less similar to one another.
• Similarity Measures:
– Euclidean Distance if attributes are continuous.
– Other Problem-specific Measures.
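A minimal k-means sketch using Euclidean distance; the points, the choice of k = 2, and the crude initialisation are all illustrative:

# k-means with k = 2 on invented 2-D points.
import math

points = [(1.0, 1.0), (1.5, 2.0), (1.2, 0.8), (8.0, 8.0), (8.5, 7.5), (9.0, 8.2)]
centers = [points[0], points[3]]  # crude initialisation: one point per cluster

for _ in range(10):  # a few refinement passes
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda c: math.dist(p, centers[c]))
        clusters[nearest].append(p)
    centers = [tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
               for cluster in clusters]

print(centers)   # one centre per cluster
print(clusters)  # points grouped by nearest centre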
Illustrating Clustering
Clustering: Application 1
• Market Segmentation:
– Goal: subdivide a market into distinct subsets of customers where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix.
– Approach:
• Collect different attributes of customers based on their geographical and lifestyle related information.
• Find clusters of similar customers.
• Measure the clustering quality by observing buying patterns of customers in same cluster vs. those from different clusters.
Clustering: Application 2
• Document Clustering:
– Goal: To find groups of documents that are similar to each other based on the important terms appearing in them.
– Approach: To identify frequently occurring terms in each document. Form a similarity measure based on the frequencies of different terms. Use it to cluster.
– Gain: Information Retrieval can utilize the clusters to relate a new document or search term to clustered documents.
Association Rule Discovery: Definition
• Given a set of records, each of which contains some number of items from a given collection,
– Produce dependency rules which will predict occurrence of an item based on occurrences of other items.
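The standard strength measures for such a rule are support and confidence. A minimal sketch evaluating {Bagels} --> {Potato Chips} over invented market baskets:

# Support and confidence of {Bagels} --> {Potato Chips} (invented baskets).
baskets = [
    {"Bagels", "Potato Chips", "Milk"},
    {"Bagels", "Potato Chips"},
    {"Bagels", "Butter"},
    {"Milk", "Potato Chips"},
    {"Bagels", "Potato Chips", "Beer"},
]

antecedent, consequent = {"Bagels"}, {"Potato Chips"}
n_both = sum(1 for b in baskets if antecedent <= b and consequent <= b)
n_ante = sum(1 for b in baskets if antecedent <= b)

print("support    =", n_both / len(baskets))  # 3/5 = 0.6
print("confidence =", n_both / n_ante)        # 3/4 = 0.75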
Association Rule Discovery: Application 1
• Marketing and Sales Promotion:
– Let the rule discovered be
{Bagels, … } --> {Potato Chips}
– Potato Chips as consequent => Can be used to determine what should be done to boost its sales.
– Bagels in the antecedent => Can be used to see which products would be affected if the store discontinues selling bagels.
– Bagels in antecedent and Potato chips in consequent => Can be used to see what products should be sold with Bagels to promote sale of Potato chips!
Association Rule Discovery: Application 2
• Supermarket shelf management.
– Goal: To identify items that are bought together by sufficiently many customers.
– Approach: Process the point-of-sale data collected with barcode scanners to find dependencies among items.
– A classic rule:
• If a customer buys diapers and milk, then he is very likely to buy beer. So, don't be surprised if you find six-packs stacked next to diapers!