Domain-Sensitive Recommendation with User-Item Subgroup Analysis
Recent decades have witnessed an overwhelming supply of online information with the evolution of the Internet. Recommender systems have therefore become indispensable: they support users with possibly different judgments and opinions in their quest for information, by taking into account the diversity of preferences and the relativity of information value. Collaborative Filtering (CF) is an effective and widely adopted recommendation approach. Different from content-based recommender systems, which rely on the profiles of users and items for prediction, CF approaches make predictions by utilizing only the user-item interaction information, such as transaction history or item satisfaction expressed in ratings. As more attention is paid to personal privacy, CF systems become increasingly popular, since they do not require users to explicitly state their personal information. However, in real scenarios there still exist some problems which limit the performance of typical CF methods. On one hand, a user's interests always center on some specific domains rather than all domains, yet typical CF approaches do not treat these domains distinctively. On the other hand, the fundamental assumption of typical CF is that users who rate similarly on some items will rate similarly on all other items. It is observed that this assumption is not always tenable: the collaborative effect among users usually varies across different domains. In other words, the fact that two users have similar tastes in one domain does not imply that they have similar tastes in another domain. As an intuitive example, two users who both love romantic movies may have totally different preferences in action movies. Thus, it is more reasonable, and necessary, to automatically mine different domains and perform domain-sensitive CF for recommender systems. Numerous efforts have been devoted to this direction, and they can be divided into two types.
The first type discovers domains with the help of external information such as a social trust network [2] or product category information [3]. In this paper we focus on the second type, called clustering CF, which exploits only the user-item interaction information and detects domains via clustering methods. Among algorithms of this type, some are one-side clustering, in the sense that they cluster only items or only users [4], [5], [6], [7], [8]; others are two-side clustering, which exploit the duality between users and items to partition both dimensions simultaneously [9], [10], [11], [12], [13]. In most clustering CF approaches, each user or item is assigned to a single cluster (domain). In reality, however, user interests and item attributes are not always exclusive: that a user likes romantic movies does not mean the user dislikes movies of other genres, and a romantic movie could also be a war movie. Thus, it is more natural to assume that a user or an item can join multiple domains. Besides, most of these clustering CF approaches are performed in a two-stage sequential process: domain detection by clustering, followed by rating prediction with typical CF within the clusters. One advantage of this approach is that it overcomes the scalability problem of many memory-based CF techniques, whose heavy computational burden comes from the similarity calculations. However, such a divide-and-conquer style brings a new problem: the algorithm cannot take full advantage of the observed rating data, which is limited and precious.
To address the above problems, in this paper we propose a novel Domain-sensitive Recommendation (DsRec) algorithm assisted with user-item subgroup analysis, which integrates rating prediction and domain detection into a unified framework. We call the proposed algorithm DsRec for short, and illustrate its basic architecture in Fig. 1. There are three components in the unified framework.
First, we apply a matrix factorization model to best reconstruct the observed rating data with the learned latent factor representations of both users and items, with which the unobserved ratings can be predicted directly. Second, a bi-clustering model is used to learn the confidence distribution of each user and item belonging to different domains. Specifically, a domain is a user-item subgroup, which consists of a subset of items with similar attributes and a subset of users interested in that subset of items. In the bi-clustering formulation, we assume that a high rating score given by a user to an item encourages the user and the item to be assigned to the same subgroup. Additionally, two regression regularization terms are introduced to build a bridge between the confidence distributions of users (items) and the corresponding latent factor representations. That is, the confidence distribution over different subgroups (domains) in DsRec can be considered as a set of soft pseudo domain labels that guide the exploration of the latent space. Thus, coupled with the regression regularizations, DsRec can learn discriminative and domain-sensitive latent spaces of users and items to perform the tasks of rating prediction and domain identification. To the best of our knowledge, our work is the first to jointly consider both tasks using only user-item interaction information. An alternating optimization scheme is developed to solve the unified objective function, and experimental analysis on three real-world datasets demonstrates the effectiveness of our method.
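The first component, matrix factorization, can be illustrated with a minimal sketch. The Java snippet below is a generic latent-factor model trained by stochastic gradient descent on a toy rating matrix; it deliberately omits DsRec's bi-clustering and regression terms, and the matrix, learning rate, and factor count are invented for illustration only.

```java
import java.util.Random;

public class MatrixFactorizationSketch {

    public static double dot(double[] a, double[] b) {
        double s = 0;
        for (int f = 0; f < a.length; f++) s += a[f] * b[f];
        return s;
    }

    // Learn user factors P and item factors Q so that P[u].Q[i] approximates
    // R[u][i] on the observed entries (0 marks an unobserved rating).
    public static double[][][] factorize(double[][] R, int k, int epochs) {
        int users = R.length, items = R[0].length;
        double lr = 0.01, reg = 0.02;                 // step size and L2 weight
        Random rnd = new Random(42);
        double[][] P = new double[users][k], Q = new double[items][k];
        for (double[] row : P) for (int f = 0; f < k; f++) row[f] = 0.1 * rnd.nextDouble();
        for (double[] row : Q) for (int f = 0; f < k; f++) row[f] = 0.1 * rnd.nextDouble();

        for (int e = 0; e < epochs; e++) {
            for (int u = 0; u < users; u++) {
                for (int i = 0; i < items; i++) {
                    if (R[u][i] == 0) continue;       // skip unobserved entries
                    double err = R[u][i] - dot(P[u], Q[i]);
                    for (int f = 0; f < k; f++) {     // gradient step with regularization
                        double pu = P[u][f], qi = Q[i][f];
                        P[u][f] += lr * (err * qi - reg * pu);
                        Q[i][f] += lr * (err * pu - reg * qi);
                    }
                }
            }
        }
        return new double[][][] {P, Q};
    }

    public static void main(String[] args) {
        double[][] R = {
            {5, 3, 0, 1},
            {4, 0, 0, 1},
            {1, 1, 0, 5},
            {0, 1, 5, 4},
        };
        double[][][] pq = factorize(R, 2, 5000);
        double[][] P = pq[0], Q = pq[1];
        // Observed entries are reconstructed closely...
        System.out.printf("R[0][0] = %.2f (observed 5)%n", dot(P[0], Q[0]));
        // ...and unobserved entries receive direct predictions.
        System.out.printf("R[2][2] = %.2f (unobserved)%n", dot(P[2], Q[2]));
    }
}
```

Once trained, any unobserved rating is predicted by a single dot product of the corresponding user and item factor vectors, which is what makes the latent representations directly usable for recommendation.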
Collaborative Filtering (CF) is one of the most successful recommendation approaches for coping with information overload in the real world. However, typical CF methods treat every user and item equally, and cannot distinguish the variation of a user's interests across different domains. This violates the reality that a user's interests always center on some specific domains, and that users having similar tastes in one domain may have totally different tastes in another domain. Motivated by this observation, in this paper we propose a novel Domain-sensitive Recommendation (DsRec) algorithm, which makes rating predictions while simultaneously performing user-item subgroup analysis, where a user-item subgroup is deemed a domain consisting of a subset of items with similar attributes and a subset of users who have interests in these items. The proposed framework of DsRec includes three components: a matrix factorization model for the observed rating reconstruction, a bi-clustering model for the user-item subgroup analysis, and two regularization terms that connect the above two components into a unified formulation. Extensive experiments on MovieLens-100K and two real-world product review datasets show that our method achieves better prediction accuracy than state-of-the-art methods.
• Existing recommender systems have become indispensable nowadays; they support users with possibly different judgments and opinions in their quest for information by taking into account the diversity of preferences and the relativity of information value.
• Collaborative Filtering (CF) is an effective and widely adopted recommendation approach. Different from content-based recommender systems, which rely on the profiles of users and items for prediction, CF approaches make predictions by utilizing only the user-item interaction information, such as transaction history or item satisfaction expressed in ratings. As more attention is paid to personal privacy, CF systems become increasingly popular, since they do not require users to explicitly state their personal information.
• Besides, most of these clustering CF approaches are performed in a two-stage sequential process: domain detection by clustering and rating prediction by typical CF within the clusters.
DISADVANTAGES OF EXISTING SYSTEM:
• The existing system has some problems which might limit the performance of typical CF methods.
• However, it is observed that this assumption is not always so tenable. Usually, the collaborative effect among users varies across different domains.
• However, such a divide-and-conquer style brings a new problem, i.e., the algorithm cannot take full advantage of the observed rating data, which is limited and precious.
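The domain-varying collaborative effect noted above can be made concrete with a small numerical sketch. In the following illustrative Java snippet (the users, movies, and ratings are invented for the example, not taken from any dataset), the Pearson correlation between the same pair of users is computed separately per domain: they agree strongly on romance movies yet disagree completely on action movies.

```java
public class DomainSimilarity {

    // Pearson correlation between two equally sized rating vectors.
    public static double pearson(double[] a, double[] b) {
        double ma = 0, mb = 0;
        for (int i = 0; i < a.length; i++) { ma += a[i]; mb += b[i]; }
        ma /= a.length; mb /= b.length;
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += (a[i] - ma) * (b[i] - mb);
            na  += (a[i] - ma) * (a[i] - ma);
            nb  += (b[i] - mb) * (b[i] - mb);
        }
        return dot / Math.sqrt(na * nb);
    }

    public static void main(String[] args) {
        // Ratings on three romance movies: the two users agree closely.
        double[] aliceRomance = {5, 3, 4}, bobRomance = {5, 2, 4};
        // Ratings on three action movies: the same two users disagree.
        double[] aliceAction  = {1, 2, 1}, bobAction  = {5, 4, 5};

        System.out.printf("romance similarity: %.2f%n", pearson(aliceRomance, bobRomance));
        System.out.printf("action similarity:  %.2f%n", pearson(aliceAction, bobAction));
    }
}
```

A single global similarity computed over all six movies would blur these two opposite signals together, which is exactly the failure mode a domain-sensitive method tries to avoid.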
• We propose a novel Domain-sensitive Recommendation (DsRec) algorithm, which makes rating predictions while simultaneously performing user-item subgroup analysis, where a user-item subgroup is deemed a domain consisting of a subset of items with similar attributes and a subset of users who have interests in these items.
• The proposed framework of DsRec includes three components: a matrix factorization model for the observed rating reconstruction, a bi-clustering model for the user-item subgroup analysis, and two regularization terms to connect the above two components into a unified formulation.
• Extensive experiments on MovieLens-100K and two real-world product review datasets show that our method achieves better prediction accuracy than state-of-the-art methods.
• There are three components in the unified framework.
• First, we apply a matrix factorization model to best reconstruct the observed rating data with the learned latent factor representations of both users and items, with which unobserved ratings can be predicted directly.
• Second, a bi-clustering model is used to learn the confidence distribution of each user and item belonging to different domains. Specifically, a domain is a user-item subgroup, which consists of a subset of items with similar attributes and a subset of users interested in that subset of items. In the bi-clustering formulation, we assume that a high rating score given by a user to an item encourages the user and the item to be assigned to the same subgroup.
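A heavily simplified stand-in for this bi-clustering idea can be sketched in Java. The real DsRec model learns user and item confidence distributions jointly by optimization; the snippet below only illustrates the intuition that highly rated items pull a user toward their subgroups, using invented ratings and assumed item confidences.

```java
public class SubgroupConfidenceSketch {

    // Rating-weighted average of item confidences, renormalized: items the
    // user rates highly pull the user toward the subgroups those items join.
    public static double[] userConfidence(double[] ratings, double[][] itemConf) {
        int k = itemConf[0].length;
        double[] conf = new double[k];
        double total = 0;
        for (int i = 0; i < ratings.length; i++) {
            for (int c = 0; c < k; c++) conf[c] += ratings[i] * itemConf[i][c];
            total += ratings[i];
        }
        for (int c = 0; c < k; c++) conf[c] /= total;   // normalize to sum to 1
        return conf;
    }

    public static void main(String[] args) {
        // One user's ratings on four items (hypothetical 1-5 scale).
        double[] ratings = {5, 4, 1, 2};
        // Assumed item confidences over two subgroups (each row sums to 1).
        double[][] itemConf = {
            {0.9, 0.1},   // item 0: mostly subgroup 0
            {0.8, 0.2},
            {0.1, 0.9},   // item 2: mostly subgroup 1
            {0.2, 0.8},
        };
        double[] userConf = userConfidence(ratings, itemConf);
        System.out.printf("user confidence = [%.2f, %.2f]%n", userConf[0], userConf[1]);
    }
}
```

Because the user rates the subgroup-0 items highly, the resulting distribution leans toward subgroup 0 while still keeping nonzero mass on subgroup 1, matching the soft, non-exclusive domain membership the paper argues for.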
ADVANTAGES OF PROPOSED SYSTEM:
• Develop a novel Domain-sensitive Recommendation algorithm, which makes rating predictions assisted by user-item subgroup analysis.
• DsRec is a unified formulation integrating a matrix factorization model for rating prediction and a bi-clustering model for domain detection.
2. SYSTEM STUDY
2.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY
SYSTEM DESIGN AND DEVELOPMENT
Input design plays a vital role in the life cycle of software development; it requires very careful attention from developers. The goal of input design is to feed data to the application as accurately as possible. Inputs are therefore designed effectively so that errors occurring while feeding data are minimized. According to software engineering concepts, the input forms or screens are designed with validation controls over the input limit and range.
This system has input screens in all the modules. Error messages are developed to alert the user whenever he commits some mistakes and guides him in the right way so that invalid entries are not made. Let us see about this under module design.
Validations are required for the data entered. Whenever a user enters erroneous data, an error message is displayed, and the user can move on to the subsequent pages only after completing all the entries in the current page.
The output from the computer is mainly required to create an efficient method of communication within the company, primarily between the project leader and his team members, in other words the administrator and the clients. After completion of a project, a new project may be assigned to the client. User authentication procedures are maintained at the initial stages itself. A new user may be created by the administrator himself, or a user can register himself as a new user, but the task of assigning projects and validating a new user rests with the administrator only.
The application starts running when it is executed for the first time. The server has to be started, and then Internet Explorer is used as the browser. The project runs on a local area network, so the server machine serves as the administrator while the other connected systems act as clients. The developed system is user-friendly and can be easily understood by anyone using it, even for the first time.
ECONOMICAL FEASIBILITY :
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY :
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.
SOCIAL FEASIBILITY :
The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system; instead, he must accept it as a necessity. The user's level of confidence must be raised so that he is also able to make constructive criticism, which is welcomed, as he is the final user of the system.
In this module, the admin has to log in by using a valid user name and password. After successful login, the admin can perform operations such as: viewing and authorizing users, adding categories as domains, viewing all friend requests and responses, adding posts by selecting domains, viewing all posts with ratings based on ranks, viewing user query keywords and analyzing the query subgroup, viewing all products recommended via the collaborative filtering method, categorizing users based on product consumption with user images, and viewing product rank results.
Viewing and Authorizing Users:
In this module, the admin views all user details and authorizes the users for login permission. User details include user name, address, email id, and mobile number.
Add and View Category as Domain:
In this module, the admin adds categories such as Movies, Products, Sports, etc.
Add Posts as Products:
In this module, the admin can add posts by selecting a domain and providing post details such as post name, description, images, and uses.
View all Posts with Rating based on Ranks:
In this module, the admin can see all of his added posts with details (post name, description, uses, and images) along with rating and rank. The rating is calculated based on ranks.
View User Query Keyword and Analyze the Query Subgroup:
In this module, the admin can see all the query keywords used by the users to search for posts, the exactly matched posts, and the query subgroup (posts which come under the matched posts' categories).
View all Recommended Products by Collaborative Filtering Method:
In this module, the admin can see all the posts which are recommended by users to their friends. Recommended posts can be seen by selecting a particular category.
Categorize Users Based on Product Consumption with User Images:
In this module, the admin can view all the users who liked a particular post and all the users who recommended a particular post. The result can be seen in a graph by selecting a particular post name.
View Product Rank Results:
In this module, the admin can view product ranks in a graph. The rank is calculated based on the number of likes made on a particular post.
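The like-based ranking described above amounts to a simple descending sort. The following Java sketch (post names and like counts are hypothetical) shows one way such a ranking could be computed before it is rendered as a graph:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PostRanking {

    // Sort post names by like count, descending: rank 1 = most liked post.
    public static List<String> rankByLikes(Map<String, Integer> likes) {
        List<String> names = new ArrayList<>(likes.keySet());
        names.sort((a, b) -> likes.get(b) - likes.get(a));
        return names;
    }

    public static void main(String[] args) {
        // Hypothetical posts with their like counts.
        Map<String, Integer> likes = new LinkedHashMap<>();
        likes.put("Movie Post", 12);
        likes.put("Sports Post", 30);
        likes.put("Product Post", 7);

        System.out.println(rankByLikes(likes)); // most liked post first
    }
}
```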
In this module, n number of users are present. A user should register before performing any operations. Once a user registers, his details are stored in the database. After successful registration, he has to log in using the authorized user name and password. Once login is successful, the user can perform operations such as: viewing his profile details, searching friends, viewing all friends, searching posts by query keyword and recommending them to friends, viewing and deleting friends, viewing all friends' recommendations, and viewing friends' product consumption details with their images.
Viewing Profile Details:
In this module, the user can see his own profile details, such as address, email, mobile number, and profile image.
Search Friends, Request, and View Friend Requests, View all Friend Details:
In this module, the user searches for other users by their names, sends requests, and views friend requests from other users. The user can see all his friends' details with their images and personal details.
Search Query by keyword and Display Exact and Subgroup Results:
In this module, the user can search for posts by query keyword, and the results are displayed as two groups: one of exactly matched posts, and the other of posts which belong to the matched posts' categories.
The user can like or dislike the found posts and can recommend them to his friends by giving his opinion on each post.
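The two-group result display above can be sketched as follows. In this illustrative Java snippet (post names and categories are invented), a keyword search first collects the exactly matched posts, then forms the subgroup from the remaining posts that share a category with any exact match:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class QuerySubgroupSearch {

    // Returns two lists: index 0 = exact matches, index 1 = subgroup posts
    // (other posts belonging to the matched posts' categories).
    public static List<List<String>> search(Map<String, String> postCategory, String keyword) {
        String kw = keyword.toLowerCase();
        List<String> exact = new ArrayList<>();
        Set<String> matchedCategories = new LinkedHashSet<>();
        for (Map.Entry<String, String> e : postCategory.entrySet()) {
            if (e.getKey().toLowerCase().contains(kw)) {
                exact.add(e.getKey());
                matchedCategories.add(e.getValue());
            }
        }
        List<String> subgroup = new ArrayList<>();
        for (Map.Entry<String, String> e : postCategory.entrySet()) {
            if (!exact.contains(e.getKey()) && matchedCategories.contains(e.getValue())) {
                subgroup.add(e.getKey());
            }
        }
        return Arrays.asList(exact, subgroup);
    }

    public static void main(String[] args) {
        // Hypothetical posts mapped to their category (domain).
        Map<String, String> postCategory = new LinkedHashMap<>();
        postCategory.put("Titanic", "Movie");
        postCategory.put("Titan Watch", "Product");
        postCategory.put("Avatar", "Movie");
        postCategory.put("Football", "Sports");

        List<List<String>> result = search(postCategory, "titanic");
        System.out.println("Exact: " + result.get(0));    // [Titanic]
        System.out.println("Subgroup: " + result.get(1)); // [Avatar]
    }
}
```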
View all Your Friends and Delete Those You Don’t Want:
In this module, the user can view all his friends and delete those he doesn’t want, giving a reason for each deletion. These details can be seen by the admin.
View all Your Friends’ Recommended Posts to You:
In this module, the user can view all the posts his friends recommended to him. The user can view each recommended post’s details along with the friend’s opinion on that post.
View Your Friends’ Product Consumption Details with Their Images:
In this module, the user can view all his friends’ product consumption details; that is, if a friend liked or recommended any post, those details are shown in a graph along with the friend’s details.
With the varied topics in existence in the field of computers, client server is one which has generated more heat than light, and also more hype than reality. This technology has acquired a certain critical mass of attention, with its dedicated conferences and magazines. Major computer vendors such as IBM and DEC have declared that client server is their main future market. A survey by DBMS magazine revealed that 76% of its readers were actively looking at client server solutions. The client server development tools market grew from $200 million in 1992 to more than $1.2 billion in 1996.
Client server implementations are complex, but the underlying concept is simple and powerful. A client is an application running with local resources but able to request database and related services from a separate remote server. The software mediating this client server interaction is often referred to as middleware.
The typical client is either a PC or a workstation connected through a network to a more powerful PC, workstation, midrange, or mainframe server, usually capable of handling requests from more than one client. However, in some configurations a server may also act as a client, since a server may need to access other servers in order to process the original client request.
The key client server idea is that the client, as user, is essentially insulated from the physical location and formats of the data needed for its application. With the proper middleware, a client input form or report can transparently access and manipulate both local databases on the client machine and remote databases on one or more servers. An added bonus is that client server opens the door to multi-vendor database access, including heterogeneous table joins.
What is a Client Server?
Two prominent systems in existence are client server and file server systems, and it is essential to distinguish between them. Both provide shared network access to data, but the comparison ends there. The file server simply provides a remote disk drive that can be accessed by LAN applications on a file-by-file basis. The client server offers full relational database services such as SQL access, record modification, insert, and delete with full relational integrity, backup/restore, and performance for high volumes of transactions. The client server middleware provides a flexible interface between client and server: who does what, when, and to whom.
Why Client Server?
Client server has evolved to solve a problem that has been around since the earliest days of computing: how best to distribute your computing, data generation, and data storage resources in order to obtain efficient, cost-effective departmental and enterprise-wide data processing. During the mainframe era, choices were quite limited. A central machine housed both the CPU and the data (cards, tapes, drums and later disks). Access to these resources was initially confined to batched runs that produced departmental reports at the appropriate intervals. A strong central information service department ruled the corporation. The role of the rest of the corporation was limited to requesting new or more frequent reports and to providing handwritten forms from which the central data banks were created and updated. The earliest client server solutions could therefore best be characterized as “slave-master”.
Time-sharing changed the picture. Remote terminals could view and even change the central data, subject to access permissions. And, as the central data banks evolved into sophisticated relational databases with non-programmer query languages, online users could formulate ad hoc queries and produce local reports without adding to the MIS applications software backlog. However, remote access was still through dumb terminals, and the relationship remained a subordinate slave-master one.
Front end or User Interface Design:
The entire user interface is planned to be developed in browser specific environment with a touch of Intranet-Based Architecture for achieving the Distributed Concept.
The browser-specific components are designed by using HTML standards, and the dynamism of the design is achieved by concentrating on the constructs of Java Server Pages.
Communication or Database Connectivity Tier:
The communication architecture is designed by concentrating on the standards of Servlets and Enterprise Java Beans. The database connectivity is established by using Java Database Connectivity (JDBC).
The standards of three-tier architecture are given major concentration to keep the standards of higher cohesion and limited coupling for effectiveness of the operations.
Features of The Language Used
In my project, I have chosen the Java language for developing the code.
Initially the language was called “Oak”, but it was renamed “Java” in 1995. The primary motivation for this language was the need for a platform-independent (i.e., architecture-neutral) language that could be used to create software to be embedded in various consumer electronic devices.
Java is a programmer’s language. It is cohesive and consistent, and, except for the constraints imposed by the Internet environment, Java gives the programmer full control.
Finally, Java is to Internet programming what C was to system programming.
Importance of Java to the Internet:
Java has had a profound effect on the Internet because Java expands the universe of objects that can move about freely in cyberspace. In a network, two categories of objects are transmitted between the server and the personal computer: passive information and dynamic, active programs. Dynamic, self-executing programs cause serious problems in the areas of security and portability, but Java addresses those concerns and, by doing so, has opened the door to an exciting new form of program called the applet.
Java can be used to create two types of programs:
Applications and Applets: An application is a program that runs on our computer under that computer’s operating system, more or less like one created using C or C++. Java’s ability to create applets makes it important. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. An applet is actually a tiny Java program, dynamically downloaded across the network, just like an image. But the difference is that it is an intelligent program, not just a media file: it can react to user input and change dynamically.
Features Of Java
Every time you download a “normal” program, you are risking a viral infection. Prior to Java, most users did not download executable programs frequently, and those who did scanned them for viruses prior to execution. Even so, most users still worried about the possibility of infecting their systems with a virus. In addition, another type of malicious program exists that must be guarded against: one that can gather private information, like credit card numbers, bank account balances, and passwords. Java answers both these concerns by providing a “firewall” between a network application and your computer.
When you use a Java-compatible Web browser, you can safely download Java applets without fear of virus infection or malicious intent.
The Byte code:
The key that allows Java to solve the security and portability problems is that the output of the Java compiler is byte code. Byte code is a highly optimized set of instructions designed to be executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter for byte code.
Translating a Java program into byte code makes it much easier to run the program in a wide variety of environments: once the run-time package exists for a given system, any Java program can run on it.
Although Java was designed for interpretation, there is technically nothing about Java that prevents on-the-fly compilation of byte code into native code. Sun has completed its Just In Time (JIT) compiler for byte code. The JIT compiler is part of the JVM; it compiles byte code into executable code in real time. It is not possible to compile an entire Java program into executable code in advance, because Java performs various checks that can be done only at run time. Instead, the JIT compiles code as it is needed, during execution.
Java Virtual Machine (JVM):
Beyond the language, there is the Java Virtual Machine. The JVM is an important element of Java technology, and it can be embedded within a web browser or an operating system. Once a piece of Java code is loaded onto a machine, it is verified. As part of the loading process, a class loader is invoked and performs byte code verification, which makes sure that the code generated by the compiler will not corrupt the machine it is loaded on. Byte code verification takes place at the end of the compilation process to ensure that the code is accurate and correct, so byte code verification is integral to the compiling and executing of Java code.
Picture showing the development process of JAVA Program
A Java program goes through the following steps to produce byte codes and execute them. The Java source code is located in a .java file that is processed with a Java compiler called javac. The Java compiler produces a file called a .class file, which contains the byte code. The .class file is then loaded across the network or locally on your machine into the execution environment, the Java Virtual Machine, which interprets and executes the byte code.
Java architecture provides a portable, robust, high performing environment for development. Java provides portability by compiling the byte codes for the Java Virtual Machine, which is then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet.
Compilation of code:
When you compile the code, the Java compiler creates machine code (called byte code) for a hypothetical machine called the Java Virtual Machine (JVM), which then executes the byte code. The JVM was created to overcome the issue of portability: the code is written and compiled for one machine, the JVM, and interpreted on all machines.
Compiling and interpreting Java Source Code
At run time the Java interpreter makes the byte code file behave as if it were running on a Java Virtual Machine. In reality, this could be an Intel Pentium running Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh, and all could receive code from any computer through the Internet and run the applets.
Java was designed to be easy for the professional programmer to learn and use effectively. If you are an experienced C++ programmer, learning Java will be even easier, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Some of the confusing concepts from C++ are either left out of Java or implemented in a cleaner, more approachable manner. In Java there are a small number of clearly defined ways to accomplish a given task.
Java was not designed to be source-code compatible with any other language. The object model in Java is simple and easy to extend, while primitive types, such as integers, are kept as high-performance non-objects.
The multi-platform environment of the Web places extra demands on a program, because the program must execute reliably in a variety of systems. Thus, the ability to create robust programs was given a high priority in the design of Java. Java is a strictly typed language; it checks your code at compile time, and it also performs checks at run time.
Java virtually eliminates the problems of memory management and deallocation, which are completely automatic. In a well-written Java program, all run-time errors can, and should, be managed by your program.
• Validate the contents of a form and make calculations.
• Add scrolling or changing messages to the browser’s status line.
• Animate images or rotate images that change when we move the mouse over them.
• Detect the browser in use and display different content for different browsers.
• Detect installed plug-ins and notify the user if a plug-in is required.
• It is more flexible than VBScript.
Hyper Text Markup Language:
Hypertext Markup Language (HTML), the language of the World Wide Web (WWW), allows users to produce Web pages that include text, graphics, and pointers to other Web pages (hyperlinks).
HTML is not a programming language but an application of the ISO standard SGML (Standard Generalized Markup Language), specialized to hypertext and adapted to the Web. The idea behind hypertext is that instead of reading text in a rigid linear structure, we can easily jump from one point to another and navigate through the information based on our interest and preference. A markup language is simply a series of elements, each delimited with special characters, that define how the text or other items enclosed within the elements should be displayed. Hyperlinks are underlined or emphasized words that link to other documents or to some portion of the same document.
HTML can be used to display any type of document on the host computer, which can be geographically at a different location. It is a versatile language and can be used on any platform or desktop.
HTML provides tags (special codes) to make the document look attractive. HTML tags are not case-sensitive. Using graphics, fonts, different sizes, color, etc., can enhance the presentation of the document. Anything that is not a tag is part of the document itself.
Basic HTML Tags :
<A>……….</A> Creates hypertext links
<B>……….</B> Formats text as bold
<BIG>……….</BIG> Formats text in large font
<BODY>…</BODY> Contains all tags and text in the HTML document
<CENTER>…</CENTER> Creates centered text
<DD>…</DD> Definition of a term
<DL>…</DL> Creates definition list
<FONT>…</FONT> Formats text with a particular font
<FORM>…</FORM> Encloses a fill-out form
<FRAME>…</FRAME> Defines a particular frame in a set of frames
<H#>…</H#> Creates headings of different levels
<HEAD>…</HEAD> Contains tags that specify information about a document
<HR> Creates a horizontal rule
<HTML>…</HTML> Contains all other HTML tags
<META> Provides meta-information about a document
<SCRIPT>…</SCRIPT> Contains client-side or server-side script
<TABLE>…</TABLE> Creates a table
<TD>…</TD> Indicates table data in a table
<TR>…</TR> Designates a table row
<TH>…</TH> Creates a heading in a table
• An HTML document is small and hence easy to send over the net. It is small because it does not include formatting information.
• HTML is platform independent.
• HTML tags are not case-sensitive.
Java Database Connectivity
What Is JDBC?
JDBC is a Java API for executing SQL statements. (As a point of interest, JDBC is a trademarked name and is not an acronym; nevertheless, JDBC is often thought of as standing for Java Database Connectivity.) It consists of a set of classes and interfaces written in the Java programming language. JDBC provides a standard API for tool/database developers and makes it possible to write database applications using a pure Java API.
Using JDBC, it is easy to send SQL statements to virtually any relational database. One can write a single program using the JDBC API, and the program will be able to send SQL statements to the appropriate database. The combination of Java and JDBC lets a programmer write an application once and run it anywhere.
What Does JDBC Do?
Simply put, JDBC makes it possible to do three things:
• Establish a connection with a database
• Send SQL statements
• Process the results
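As a minimal sketch, the three steps can be written as follows. The connection URL, credentials, and the users table are hypothetical placeholders, not part of any real deployment:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcBasics {
    // Illustrates the three JDBC steps: connect, send SQL, process results.
    // The URL, credentials, and table/column names are hypothetical.
    public static void printUserNames(String url, String user, String password)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, password); // 1. connect
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM users")) {      // 2. send SQL
            while (rs.next()) {                                                  // 3. process results
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```

Note that the try-with-resources block closes the ResultSet, Statement, and Connection automatically, in reverse order of creation.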
JDBC versus ODBC and other APIs
At this point, Microsoft’s ODBC (Open Database Connectivity) API is probably the most widely used programming interface for accessing relational databases. It offers the ability to connect to almost all databases on almost all platforms.
So why not just use ODBC from Java? The answer is that you can use ODBC from Java, but this is best done with the help of JDBC in the form of the JDBC-ODBC Bridge, which we will cover shortly. The question now becomes “Why do you need JDBC?” There are several answers to this question:
1. ODBC is not appropriate for direct use from Java because it uses a C interface. Calls from Java to native C code have a number of drawbacks in the security, implementation, robustness, and automatic portability of applications.
2. A literal translation of the ODBC C API into a Java API would not be desirable. For example, Java has no pointers, and ODBC makes copious use of them, including the notoriously error-prone generic pointer “void *”. You can think of JDBC as ODBC translated into an object-oriented interface that is natural for Java programmers.
3. ODBC is hard to learn. It mixes simple and advanced features together, and it has complex options even for simple queries. JDBC, on the other hand, was designed to keep simple things simple while allowing more advanced capabilities where required.
4. A Java API like JDBC is needed in order to enable a “pure Java” solution. When ODBC is used, the ODBC driver manager and drivers must be manually installed on every client machine. When the JDBC driver is written completely in Java, however, JDBC code is automatically installable, portable, and secure on all Java platforms from network computers to mainframes.
Two-tier and Three-tier Models
The JDBC API supports both two-tier and three-tier models for database access.
In the two-tier model, a Java applet or application talks directly to the database. This requires a JDBC driver that can communicate with the particular database management system being accessed. A user’s SQL statements are delivered to the database, and the results of those statements are sent back to the user. The database may be located on another machine to which the user is connected via a network. This is referred to as a client/server configuration, with the user’s machine as the client, and the machine housing the database as the server. The network can be an Intranet, which, for example, connects employees within a corporation, or it can be the Internet.
In the three-tier model, commands are sent to a “middle tier” of services, which then send SQL statements to the database. The database processes the SQL statements and sends the results back to the middle tier, which then sends them to the user. MIS directors find the three-tier model very attractive because the middle tier makes it possible to maintain control over access and the kinds of updates that can be made to corporate data. Another advantage is that when there is a middle tier, the user can employ an easy-to-use higher-level API which is translated by the middle tier into the appropriate low-level calls. Finally, in many cases the three-tier architecture can provide performance advantages.
Until now the middle tier has typically been written in languages such as C or C++, which offer fast performance. However, with the introduction of optimizing compilers that translate Java byte code into efficient machine-specific code, it is becoming practical to implement the middle tier in Java. This is a big plus, making it possible to take advantage of Java’s robustness, multithreading, and security features. JDBC is important to allow database access from a Java middle tier.
JDBC Driver Types:
The JDBC drivers that we are aware of at this time fit into one of four categories:
• JDBC-ODBC bridge plus ODBC driver
• Native-API partly-Java driver
• JDBC-Net pure Java driver
• Native-protocol pure Java driver
If possible, use a Pure Java JDBC driver instead of the Bridge and an ODBC driver. This completely eliminates the client configuration required by ODBC. It also eliminates the potential that the Java VM could be corrupted by an error in the native code brought in by the Bridge (that is, the Bridge native library, the ODBC driver manager library, the ODBC driver library, and the database client library).
What Is the JDBC-ODBC Bridge?
The JDBC-ODBC Bridge is a JDBC driver, which implements JDBC operations by translating them into ODBC operations. To ODBC it appears as a normal application program. The Bridge implements JDBC for any database for which an ODBC driver is available. The Bridge is implemented as the
sun.jdbc.odbc Java package and contains a native library used to access ODBC. The Bridge is a joint development of Intersolv and JavaSoft.
Java Server Pages (JSP)
Java Server Pages is a simple, yet powerful technology for creating and maintaining dynamic-content web pages. Based on the Java programming language, Java Server Pages offers proven portability, open standards, and a mature re-usable component model. The Java Server Pages architecture enables the separation of content generation from content presentation. This separation not only eases maintenance headaches, it also allows web team members to focus on their areas of expertise. Now, web page designers can concentrate on layout, and web application designers on programming, with minimal concern about impacting each other’s work.
Features of JSP
Java Server Pages files can be run on any web server or web-enabled application server that provides support for them. Dubbed the JSP engine, this support involves recognition, translation, and management of the Java Server Pages lifecycle and its interaction with associated components.
It was mentioned earlier that the Java Server Pages architecture can include reusable Java components. The architecture also allows for the embedding of a scripting language directly into the Java Server Pages file. The components currently supported include JavaBeans and Servlets.
A Java Server Pages file is essentially an HTML document with JSP scripting or tags. The Java Server Pages file has a .jsp extension, which identifies it to the server as a Java Server Pages file. Before the page is served, the Java Server Pages syntax is parsed and processed into a Servlet on the server side. The Servlet that is generated outputs real content in straight HTML for responding to the client.
A Java Server Pages file may be accessed in at least two different ways. In the first, a client’s request comes directly into a Java Server Page. In this scenario, suppose the page accesses reusable JavaBean components that perform particular well-defined computations like accessing a database. The result of the Bean’s computations, called result sets, is stored within the Bean as properties. The page uses such Beans to generate dynamic content and present it back to the client. In the second, the request first goes to a Servlet, which generates the dynamic content and then forwards it to a Java Server Page for presentation.
In both of the above cases, the page could also contain any valid Java code. Java Server Pages architecture encourages separation of content from presentation.
Steps in the execution of a JSP Application:
1. The client sends a request to the web server for a JSP file by giving the name of the JSP file within the form tag of an HTML page.
2. This request is transferred to the JavaWebServer. At the server side, the JavaWebServer receives the request, and if it is a request for a JSP file, the server gives the request to the JSP engine.
3. The JSP engine is a program that understands the tags of the JSP and converts them into a Servlet program, which is stored at the server side. This Servlet is loaded into memory and executed, and the result is given back to the JavaWebServer and then transferred back to the client.
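To make step 3 concrete, here is a rough Java sketch of the kind of code a JSP engine generates. A real generated Servlet extends HttpServlet and writes to the response stream; the core idea, template text turned into print statements, can be shown with plain streams. The page content and the name value are made up for illustration:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class GeneratedServletSketch {
    // Roughly what the JSP engine would generate for the hypothetical page:
    //   <html><body>Hello, <%= name %>!</body></html>
    public static String service(String name) {
        StringWriter buffer = new StringWriter();
        PrintWriter out = new PrintWriter(buffer);
        out.print("<html><body>Hello, "); // static template text
        out.print(name);                  // the <%= name %> expression
        out.print("!</body></html>");     // static template text
        out.flush();
        return buffer.toString();
    }
}
```

The static HTML becomes literal print calls, while each JSP expression becomes a Java expression evaluated at request time.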
The JDBC provides database-independent connectivity between the J2EE platform and a wide range of tabular data sources. JDBC technology allows an Application Component Provider to:
• Perform connection and authentication to a database server
• Manage transactions
• Move SQL statements to a database engine for preprocessing and execution
• Execute stored procedures
• Inspect and modify the results from Select statements.
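Transaction management and stored-procedure execution from the list above can be sketched as follows. The SQL, the table, and the procedure name are hypothetical, chosen only to illustrate the call sequence:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JdbcCapabilities {
    // Sketches transaction management plus a stored-procedure call.
    // The table, columns, and procedure name are hypothetical.
    public static void bumpRank(Connection con, int productId, int delta)
            throws SQLException {
        con.setAutoCommit(false); // take over transaction management
        try (PreparedStatement ps = con.prepareStatement(
                 "UPDATE products SET rank = rank + ? WHERE id = ?")) {
            ps.setInt(1, delta);
            ps.setInt(2, productId);
            ps.executeUpdate();
            try (CallableStatement cs = con.prepareCall("{call recompute_top_list()}")) {
                cs.execute(); // stored procedure runs in the same transaction
            }
            con.commit();     // make both changes permanent together
        } catch (SQLException e) {
            con.rollback();   // undo both statements on failure
            throw e;
        }
    }
}
```

Disabling auto-commit lets the update and the stored-procedure call succeed or fail as a unit, which is exactly the control a middle tier needs over corporate data.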
Tomcat 6.0 web server
Tomcat is an open source web server developed by the Apache Group. Apache Tomcat is the servlet container that is used in the official Reference Implementation for the Java Servlet and Java Server Pages technologies. The Java Servlet and Java Server Pages specifications are developed by Sun under the Java Community Process. Web servers like Apache Tomcat support only web components, while an application server supports web components as well as business components (BEA’s WebLogic is one of the popular application servers). To develop a web application with JSP/Servlets, install any web server like JRun or Tomcat to run your application.
References for the Project Development were taken from the following Books and Web Sites.
PL/SQL Programming by Scott Urman
SQL Complete Reference by Livion
Java Complete Reference
JavaScript Programming by Yehuda Shiran
Mastering Java Security
Java 2 Networking by Pistoria
Java Security by Scott Oaks
Head First EJB by Sierra and Bates
J2EE Professional by Shadab Siddiqui
Java Server Pages by Larne Pekowsky
Java Server Pages by Nick Todd
HTML Black Book by Holzner
Java Database Programming with JDBC by Patel and Moss
Software Engineering by Roger Pressman
Flow Chart: User
Flow Chart: Admin (store and retrieval operations; a Like increments the rank of the corresponding product)
Data Flow Diagram
The class diagram is the main building block of object-oriented modeling. It is used both for general conceptual modeling of the systematics of the application and for detailed modeling, translating the models into programming code. Class diagrams can also be used for data modeling. The classes in a class diagram represent both the main elements and interactions in the application and the classes to be programmed.
In the diagram, classes are represented with boxes which contain three parts:
• The upper part holds the name of the class
• The middle part contains the attributes of the class
• The bottom part gives the methods or operations the class can take or undertake
In the design of a system, a number of classes are identified and grouped together in a class diagram which helps to determine the static relations between those objects. With detailed modeling, the classes of the conceptual design are often split into a number of subclasses.
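The three compartments map directly onto a Java class. The Product class below is a hypothetical example drawn from this system’s domain (it is not the project’s actual code): the class name is the upper part, the attributes the middle part, and the operations the bottom part.

```java
public class Product {                  // upper part: the class name
    // middle part: attributes
    private String name;
    private int rank;

    public Product(String name) {
        this.name = name;
        this.rank = 0;
    }

    // bottom part: operations the class can undertake
    public void like() { rank++; }      // a Like increments the product's rank
    public String getName() { return name; }
    public int getRank() { return rank; }
}
```

In detailed modeling, such a conceptual class might later be split into subclasses, e.g. different product categories, without changing the diagram’s three-part layout.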
Add Category:
View Authorized Users:
View User Query Keyword:
All Friend Requests and Responses:
View Product Rank Results:
View All Products with Rating:
Products Consumed by Users:
Recommended Products by Collaborative Filtering:
View Friend Requests:
View User Friends:
View Post Recommendations:
Friends’ Consumed Products:
Search Product and Recommend:
The following are the Testing Methodologies:
o Unit Testing.
o Integration Testing.
o User Acceptance Testing.
o Output Testing.
o Validation Testing.
Unit testing focuses verification effort on the smallest unit of software design, that is, the module. Unit testing exercises specific paths in a module’s control structure to ensure complete coverage and maximum error detection. This test focuses on each module individually, ensuring that it functions properly as a unit. Hence the name Unit Testing.
During this testing, each module is tested individually and the module interfaces are verified for consistency with the design specification. All important processing paths are tested for the expected results. All error handling paths are also tested.
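A unit test along these lines can be sketched with Java’s built-in assert facility. The averageRating module below is hypothetical, standing in for any single module under test; the point is that both the expected-result path and the error-handling path are exercised:

```java
public class RatingModule {
    // The module under test: a hypothetical average-rating computation.
    public static double averageRating(int[] ratings) {
        if (ratings.length == 0) return 0.0; // error-handling path: no ratings yet
        int sum = 0;
        for (int r : ratings) sum += r;
        return (double) sum / ratings.length;
    }

    // Unit test: exercise the normal path and the error-handling path.
    public static void main(String[] args) {
        assert averageRating(new int[] {4, 5, 3}) == 4.0; // expected-result path
        assert averageRating(new int[] {}) == 0.0;        // error-handling path
        System.out.println("RatingModule unit tests passed");
    }
}
```

Because the module is tested in isolation, a failure here points directly at this unit’s internal logic rather than at its interfaces with other modules.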
Integration testing addresses the issues associated with the dual problems of verification and program construction. After the software has been integrated, a set of high-order tests is conducted. The main objective in this testing process is to take unit-tested modules and build a program structure that has been dictated by design.
The following are the types of Integration Testing:
1. Top Down Integration
This method is an incremental approach to the construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main program module. The module subordinates to the main program module are incorporated into the structure in either a depth first or breadth first manner.
In this method, the software is tested from main module and individual stubs are replaced when the test proceeds downwards.
2. Bottom-up Integration
This method begins the construction and testing with the modules at the lowest level in the program structure. Since the modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated. The bottom up integration strategy may be implemented with the following steps:
• The low-level modules are combined into clusters that perform a specific software sub-function.
• A driver (i.e., a control program for testing) is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving upward in the program structure.
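The driver idea in the steps above can be sketched as a small Java program. The two sub-functions and their cluster are hypothetical; the main method plays the role of the driver, feeding test-case input and checking output, so no stubs for higher-level modules are needed:

```java
public class BottomUpDriver {
    // A low-level cluster: two hypothetical sub-functions.
    static int likeCount(boolean[] likes) {
        int n = 0;
        for (boolean b : likes) if (b) n++;
        return n;
    }

    static int rankFromLikes(boolean[] likes) {
        return likeCount(likes) * 10; // the cluster combines the sub-functions
    }

    // The driver: coordinates test case input and output for the cluster.
    public static void main(String[] args) {
        boolean[] input = {true, false, true};
        assert likeCount(input) == 2;
        assert rankFromLikes(input) == 20;
        System.out.println("cluster tests passed");
    }
}
```

Once the cluster is verified, the driver is discarded and the cluster is wired into the next level up, exactly as the last step describes.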
The bottom-up approach tests each module individually; then each module is integrated with a main module and tested for functionality.
7.1.3 User Acceptance Testing
User Acceptance of a system is the key factor for the success of any system. The system under consideration is tested for user acceptance by constantly keeping in touch with the prospective system users at the time of developing and making changes wherever required. The system developed provides a friendly user interface that can easily be understood even by a person who is new to the system.
7.1.4 Output Testing
After performing the validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output in the specified format. The outputs generated or displayed by the system under consideration are tested by asking the users about the format they require. Hence the output format is considered in two ways: one on screen and the other in printed format.
7.1.5 Validation Checking
Validation checks are performed on the following fields.
The text field can contain only a number of characters less than or equal to its size. The text fields are alphanumeric in some tables and alphabetic in other tables. An incorrect entry always flashes an error message.
The numeric field can contain only the numbers 0 to 9. An entry of any other character flashes an error message. The individual modules are checked for accuracy and for what they have to perform. Each module is subjected to a test run along with sample data. The individually tested modules are integrated into a single system. Testing involves executing the program with real data; the existence of any program defect is inferred from the output. The testing should be planned so that all the requirements are individually tested.
A successful test is one that brings out the defects for inappropriate data and produces output revealing the errors in the system.
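The field checks described above can be sketched as plain Java helpers. The method names and the idea of passing the field size as a parameter are illustrative, not the project’s actual code:

```java
public class FieldValidator {
    // Text field: at most `size` characters.
    public static boolean fitsSize(String value, int size) {
        return value.length() <= size;
    }

    // Numeric field: only the digits 0 to 9.
    public static boolean isNumeric(String value) {
        if (value.isEmpty()) return false;
        for (char c : value.toCharArray()) {
            if (c < '0' || c > '9') return false; // any other character is rejected
        }
        return true;
    }

    // Alphabetic field: letters only.
    public static boolean isAlphabetic(String value) {
        if (value.isEmpty()) return false;
        for (char c : value.toCharArray()) {
            if (!Character.isLetter(c)) return false;
        }
        return true;
    }
}
```

A form handler would call the appropriate check before accepting a field and flash the error message whenever a check returns false.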
Preparation of Test Data
The above testing is done by taking various kinds of test data. Preparation of test data plays a vital role in system testing. After preparing the test data, the system under study is tested using that test data. While testing the system using test data, errors are again uncovered and corrected by using the above testing steps, and the corrections are also noted for future use.
Using Live Test Data:
Live test data are those that are actually extracted from organization files. When the system is partially constructed, programmers or analysts often ask users to key in a set of data from their normal activities. It is difficult to obtain live data in sufficient amounts to conduct extensive testing. And, although it is realistic data that will show how the system will perform for the typical processing requirement, assuming that the live data entered are in fact typical, such data generally will not test all the combinations or formats that can enter the system. This bias toward typical values then does not provide a true systems test and in fact ignores the cases most likely to cause system failure.
Using Artificial Test Data:
Artificial test data are created solely for test purposes, since they can be generated to test all combinations of formats and values. The most effective test programs use artificial test data generated by persons other than those who wrote the programs. Often, an independent team of testers formulates a testing plan, using the systems specifications.
The developed package has satisfied all the requirements specified in the software requirement specification and was accepted.
7.2 USER TRAINING
Whenever a new system is developed, user training is required to educate them about the working of the system so that it can be put to efficient use by those for whom the system has been primarily designed. For this purpose the normal working of the project was demonstrated to the prospective users. Its working is easily understandable and since the expected users are people who have good knowledge of computers, the use of this system is very easy.
This covers a wide range of activities including correcting code and design errors. To reduce the need for maintenance in the long run, we have more accurately defined the user’s requirements during the process of system development. Depending on the requirements, this system has been developed to satisfy the needs to the largest possible extent. With development in technology, it may be possible to add many more features based on the requirements in future. The coding and designing is simple and easy to understand which will make maintenance easier.
A strategy for system testing integrates system test cases and design techniques into a well-planned series of steps that results in the successful construction of software. The testing strategy must incorporate test planning, test case design, test execution, and the resultant data collection and evaluation. A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against user requirements.
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. Testing presents an interesting anomaly for software. Thus, a series of tests is performed on the proposed system before the system is ready for user acceptance testing.
Software once validated must be combined with other system elements (e.g., hardware, people, databases). System testing verifies that all the elements mesh properly and that overall system function and performance are achieved. Tests are conducted to find discrepancies between the system and its original objective, current specification, and system documentation.
In unit testing, different modules are tested against the specifications produced during the design of the modules. Unit testing is essential for verification of the code produced during the coding phase, and hence the goal is to test the internal logic of the modules. Using the detailed design description as a guide, important control paths are tested to uncover errors within the boundary of the modules. This testing is carried out during the programming stage itself. In this type of testing step, each module was found to be working satisfactorily as regards the expected output from the module.
In due course, the latest technology advancements will be taken into consideration. As part of the technical build-up, many components of the networking system will be generic in nature so that future projects can either use or interact with them. The future holds a lot to offer to the development and refinement of this project.
In this paper, we develop a novel Domain-sensitive Recommendation (DsRec) algorithm, which makes rating prediction assisted with user-item subgroup analysis. DsRec is a unified formulation integrating a matrix factorization model for rating prediction and a bi-clustering model for domain detection. Additionally, information between these two components is exchanged through two regression regularization terms, so that the domain information guides the exploration of the latent space. Systematic experiments conducted on three real-world datasets demonstrate the effectiveness of our methods. It is worth noting that our method is based solely on the user-item rating matrix. In the future, we will attempt to explore both user-item interaction information and external information simultaneously for domain detection.