Data

Two-tier vs. Three-tier Systems

by Jasmine C
The article I read discusses client/server systems and the choice between a two-tier and a three-tier architecture. Ways to choose between the two include considering the scope and difficulty of a particular project, the number of users the system will serve, the transaction needs of the system, and so on. Both architectures contain the components of presentation, processing, and data; however, each separates these three components differently. The article then goes on to discuss the differences between the two. In the two-tier architecture, the three components are “divided into two software entities: client application code and data base server” (Gallaugher). According to Gallaugher, the client handles the presentation aspect, both the client and the server handle processing, and the server stores and accesses the data. This architecture also offers high application development speed and works well in homogeneous environments. The downsides are that if you change an aspect of your business, you must also change your client logic, and that system security is weaker because of the need for different passwords. A three-tier architecture attempts to overcome some of the limitations of a two-tier system. Here, presentation, processing, and data are separated into different tiers. This architecture also utilizes the remote procedure call (RPC), a tool that calls from the client to the server and “provides greater overall system flexibility than the SQL calls made by clients in two tier architecture” (Gallaugher). Another advantage is that because the three components each have their own tier, they can be developed in parallel and resources can be allocated more freely. Even though a three-tier architecture requires more planning than a two-tier one, the advantages it provides are well worth it.
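The presentation/processing/data separation the article describes can be sketched in a few lines of Python. This is a minimal toy illustration, not anything from the article: the class names, the inventory schema, and the 8% tax rule are all invented for the example.

```python
import sqlite3

# Data tier: owns storage and access (hypothetical toy schema)
class DataTier:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE item (name TEXT, price REAL)")
        self.db.execute("INSERT INTO item VALUES ('widget', 2.50)")

    def fetch_price(self, name):
        row = self.db.execute(
            "SELECT price FROM item WHERE name = ?", (name,)).fetchone()
        return row[0] if row else None

# Processing tier: business rules only -- no SQL and no UI code here,
# so a rule change does not force a client rewrite
class ProcessingTier:
    def __init__(self, data):
        self.data = data

    def quote(self, name, qty):
        price = self.data.fetch_price(name)
        return None if price is None else round(price * qty * 1.08, 2)

# Presentation tier: formatting only; swapping it out touches no business logic
def present(quote):
    return "out of stock" if quote is None else f"total: ${quote:.2f}"

app = ProcessingTier(DataTier())
print(present(app.quote("widget", 4)))  # total: $10.80
```

In a two-tier system the `ProcessingTier` logic would be split between the client and the database server; pulling it into its own layer is what lets the three parts be developed in parallel.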

Using XQuery to Retrieve Data

by Penny P
XML is widely used on the web because it can store all types of data. XQuery has become a language for performing XML queries because of its ability to process XML data. Before queries can happen, an XQuery implementation needs to be applied. The XQuery implementation must meet the following requirements: 1) be based on the XQuery standard, 2) be capable of accessing web requests, and 3) be able to write XML documents as webpages. The result of the XQuery implementation is XHTML documents. Before it can be used to perform web searches, two more steps are needed: 1) build an application environment in a web server so it can execute the XQuery scripts, and 2) process and answer the requests being made.
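Python’s standard library has no XQuery engine, so as a rough analogue the sketch below uses `xml.etree.ElementTree` with XPath-style selection to mirror the pipeline described above: query XML data, then write the result back out as an XHTML fragment. The sample documents and function names are invented for the example; real XQuery would express the query as a FLWOR expression instead.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML data store (stands in for the documents an XQuery
# implementation would query)
XML = """<books>
  <book year="2010"><title>Databases</title></book>
  <book year="1999"><title>Old SQL</title></book>
</books>"""

def books_after(root, year):
    # Select matching elements; XQuery layers FLWOR expressions on top
    # of this kind of path-based selection
    return [b.findtext("title") for b in root.iter("book")
            if int(b.get("year")) > year]

def to_xhtml(titles):
    # Write the query result out as an XHTML fragment, matching the
    # article's third requirement
    ul = ET.Element("ul")
    for t in titles:
        ET.SubElement(ul, "li").text = t
    return ET.tostring(ul, encoding="unicode")

root = ET.fromstring(XML)
print(to_xhtml(books_after(root, 2000)))  # <ul><li>Databases</li></ul>
```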

Basics of Database Design

by Jamal A
The article I read talks about the basics of database design, including normalization, relationships, and cardinality. It is important to create an entity relationship diagram (ERD), but this is sometimes ignored or overlooked when developing an application. Developers can use an accurate data model as an important reference tool. Effective database design helps developers build applications that perform well from the beginning, and it can reduce the overall time it takes to complete the project. According to the article, effective database designers should use the principles of normalization when designing a database. Normalization is a database design approach with four objectives: minimization of data redundancy, minimization of data restructuring, minimization of I/O by reduction of transaction sizes, and enforcement of referential integrity. The concepts and techniques important for designing an effective database are discussed in this summary. The article describes the physical counterpart of an entity as a database table. Entities should be named in singular form and in all caps. For example, “an entity that contains data about your company’s employees would be named EMPLOYEE”. The second counterpart is that an attribute corresponds to a database column. Attributes should be named in singular form, with either capital letters or all lower case. “For example, some attribute names for your EMPLOYEE entity might be: EmployeeId (or employee_id) and BirthDate (or birthdate)”. The third counterpart described in this article is the primary key. Primary keys uniquely identify a row in a table and relate records to additional data stored in other tables; a primary key simply points between interrelated records in different tables.
Next, different types of relationships are described in the article, for example one-to-many and many-to-many relationships. These are some of the basics the article covers.
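The naming conventions and the primary-key role described above can be sketched with Python’s built-in `sqlite3`. The DEPARTMENT/EMPLOYEE schema is a hypothetical illustration; only the EMPLOYEE name and the EmployeeId/BirthDate attributes come from the article’s examples.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Entity -> table, named in singular form and all caps; attributes in
# singular form; EmployeeId is the primary key
db.executescript("""
CREATE TABLE DEPARTMENT (
    DepartmentId INTEGER PRIMARY KEY,
    Name         TEXT NOT NULL
);
CREATE TABLE EMPLOYEE (
    EmployeeId   INTEGER PRIMARY KEY,
    BirthDate    TEXT,
    DepartmentId INTEGER REFERENCES DEPARTMENT(DepartmentId)  -- one-to-many link
);
""")
db.execute("INSERT INTO DEPARTMENT VALUES (1, 'Engineering')")
db.execute("INSERT INTO EMPLOYEE VALUES (10, '1990-05-01', 1)")

# The primary/foreign key pair is what relates records across tables
row = db.execute("""
    SELECT D.Name FROM EMPLOYEE E
    JOIN DEPARTMENT D ON D.DepartmentId = E.DepartmentId
    WHERE E.EmployeeId = 10
""").fetchone()
print(row[0])  # Engineering
```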

SQL Server 2012, a push towards Big Data

by Hassib K
I read a recent article by Alan R. Earls about the upcoming SQL Server 2012 and its new features.  The explosion of big data use throughout organizations will be the driving force for updates in the new SQL Server.  Cloud computing will also have a tremendous impact on the future of SQL Server.  The new version is supposed to have a virtual launch on March 7, and it will include a new feature that allows connecting to Apache Hadoop, an open-source distributed computing framework used for big data storage and processing.  Microsoft is pushing a self-service BI strategy where you don’t have to be an IT pro in order to use database tools.  This trend has brought about the introduction of Power View, SQL Server 2012’s data visualization tool.  This tool will be able to scan millions of rows of data from a variety of sources, making analytics easier and more presentable.  PowerPivot will also be tweaked and updated for SQL Server 2012, allowing users to use key performance indicators in reports or choose department-specific perspectives of data.  All of these updates and new features are meant to make SQL Server easier to use for non-IT professionals.  The goal is to make database access, report generation, and the like something a sales manager can do, eliminating the need for the IT department to build reports.

Merit Award given to ERwin Data Modeler

by Ermie C
This article is about how CA ERwin Data Modeler was given the Reader’s Choice Merit Award from Visual Studio Magazine. It was chosen in the Databases, Data Development and Modeling category, one of the 29 categories in which Visual Studio Magazine recognizes products. Many readers have come to appreciate the “proven ability” of ERwin Data Modeler, from streamlining the data model to designing databases. This recognition is highly regarded because VSM’s readers matter most to the magazine: if its readers recognize something, it clearly deserves the award. The Data Modeler is popular because it is used by many professionals and gives them the ability to solve real-world database problems, so the award is well deserved by a tool that makes everyone’s life easier.

Important Steps to Convert the Data Successfully

by Jamal A
The article I read talks about the procedure for successfully converting a logical data model into a physical data model, particularly in a warehouse setting. The article describes some of the necessary steps that can help convert the data successfully, and the rules that are necessary to follow when creating logical and physical data models. For instance: “The business authorization to proceed is received, business requirements are gathered and represented in a logical data model, which will completely represent the business data requirements and will be non-redundant, the logical model is transformed into a first-cut physical model by applying several simple modifications, such as splitting a large table or combining entities in a 1:1 relationship, the logical model is then transformed into a second-cut physical model by iteratively applying three levels of optimizations or compromises. The outcome from this is a physical database design, apply safe compromises to the model, such as splitting a table or combining two tables, apply aggressive compromises to the model, such as adding redundant data, the physical database design is then converted to a physical structure by generating or writing the DDL and installing the database”. The information gathering task is also described in this article; I think gathering information is the most critical task when developing a data model. The article describes three ways to approach data model development: “Top down, inside out, and bottom up”.
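One of the “safe compromises” quoted above, combining entities in a 1:1 relationship, can be sketched with `sqlite3`. The EMPLOYEE/PARKING_SPACE pair is a hypothetical example invented here, not one from the article.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Logical model (hypothetical): EMPLOYEE and PARKING_SPACE are separate
# entities in a 1:1 relationship. The first-cut physical compromise is
# to fold such a pair into a single table:
db.execute("""
CREATE TABLE EMPLOYEE (
    EmployeeId   INTEGER PRIMARY KEY,
    Name         TEXT NOT NULL,
    SpaceNumber  INTEGER UNIQUE   -- absorbed from PARKING_SPACE (1:1)
)""")
db.execute("INSERT INTO EMPLOYEE VALUES (1, 'Ada', 42)")

# One read now answers what previously required a join across two tables
name, space = db.execute(
    "SELECT Name, SpaceNumber FROM EMPLOYEE WHERE EmployeeId = 1").fetchone()
print(name, space)  # Ada 42
```

The UNIQUE constraint preserves the 1:1 cardinality that the separate PARKING_SPACE entity used to enforce.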

Data Mining and Predictive Analytics

by Hassib K
The article talked about data and how it is currently being used to provide real-time answers.  Businesses use their available data to make decisions every day.  One example is how credit bureaus use your credit history to assign you a current credit score: the score applies today and, depending on the circumstances of your credit history, could change tomorrow or next week.  Aside from business, predictive analytics is also used in healthcare, where providers try to predict the length of hospital stays, which helps them better manage the facility and patient health.  The article goes on to outline the process involved: acquiring data, organizing it, and using it for predictive analysis.  A business’s success in the present and future depends heavily on its ability to predict and forecast using the data it has available.
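The acquire/organize/predict flow the article outlines can be sketched with a toy least-squares forecast in plain Python. The sales figures below are invented for the example; real predictive analytics would use far richer models and data.

```python
from statistics import mean

# "Acquire" and "organize": invented (month, sales) history
history = [(1, 100), (2, 110), (3, 121), (4, 133)]
xs, ys = zip(*history)

# "Predict": ordinary least-squares line through the history
xbar, ybar = mean(xs), mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in history)
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

forecast = slope * 5 + intercept  # forecast for month 5
print(forecast)  # 143.5
```

Like the credit-score example, the forecast is only as current as the data: append next month’s actual figure and the fitted line, and the prediction, shift with it.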

Extension of ER Model

by David H
This article talks about how to manage data quality and context semantics using an extension of the Entity Relationship Model. In context semantics there is a problem called the “inter-attribute relationship,” which the Entity Relationship approach can solve. Dr. Steven and Dr. Richard Wang also describe an alternative way to solve the inter-attribute relationship problem: they believed that rather than simply applying the existing Entity Relationship Model, it is necessary to extend it to make it more efficient. The extension of the ER Model is named the Attribute Relationship. Basically, what the Attribute Relationship does is gather more detail and organize the identity of the relationship down to the attribute level. There are two categories of attribute relationships: strong attributes (SA) and weak attributes (WA). The main purpose of using strong and weak attributes is that they embed dependence on each other and are represented by an identifying relationship.

Using ER Models for RFID

by Penny P
In this article, the author talks about how RFID (Radio Frequency Identification) systems use Entity-Relationship models to manage their data. RFID allows objects to automatically collect information, which makes things easier and more efficient for those who use it. RFID data changes frequently and requires a data model so the system can perform functions such as tracking and monitoring information. ER models allow RFID systems to collect and obtain data they can use for tracking and for identifying both history and real-time information. There are several ways to use ER models for RFID. One way is to model the data as events. The article mentions that the RFID system used by Sun Java System models the static entity types into three different tables: one for the entity type, a second for specific attributes of the entity, and a third for the parent/child relationships. The dynamic data is stored in a separate log where entries are always associated with timestamps. Another use of the ER model is through DRER (Dynamic Relationship ER Model), an extension of the regular ER model that adds a dynamic relationship. The addition of dynamic relationships can generate events and state history.
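The three-static-tables-plus-timestamped-log layout described above can be sketched with `sqlite3`. The article only names the three table roles, so all table and column names below are assumptions made for the illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Static entity data split across three tables, as described for the
# Sun Java System RFID model (names are assumptions):
db.executescript("""
CREATE TABLE ENTITY_TYPE (TypeId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE ENTITY_ATTRIBUTE (
    TypeId INTEGER REFERENCES ENTITY_TYPE(TypeId),
    AttrName TEXT, AttrValue TEXT);
CREATE TABLE ENTITY_HIERARCHY (     -- parent/child relationships
    ParentId INTEGER REFERENCES ENTITY_TYPE(TypeId),
    ChildId  INTEGER REFERENCES ENTITY_TYPE(TypeId));
-- Dynamic tag reads go to a separate log, always timestamped
CREATE TABLE READ_LOG (TagId TEXT, Location TEXT, ReadAt TEXT);
""")
db.execute("INSERT INTO ENTITY_TYPE VALUES (1, 'pallet')")
db.execute("INSERT INTO READ_LOG VALUES "
           "('tag-001', 'dock A', '2012-03-07T09:00:00')")

# Tracking query: where has this tag been, in time order?
history = db.execute(
    "SELECT Location FROM READ_LOG WHERE TagId = ? ORDER BY ReadAt",
    ("tag-001",)).fetchall()
print([loc for (loc,) in history])  # ['dock A']
```

Keeping the fast-changing reads in their own log is what lets the static entity tables stay stable while the tracking history grows.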

A Model for Models: Data Modeling Basics

by Brian T
Data modeling is a vital aspect of database creation and management. ER diagrams lay the logical framework for an entire system, upon which many people and other systems will rely. Fundamentals, as with every other academic subject, are a complete necessity. Luckily, articles such as this one exist to aid understanding. It outlines the basics of modeling, which we have also touched on in class, and covers a variety of styles, patterns, and classifications of ER diagrams that may be used in the modeling process.