SQL

The Future of MySQL {Comments Off on The Future of MySQL}

by Miguel V
The article I chose to talk about this week was “eWEEK Labs Ponders MySQL’s Fate Under Oracle.” In this article the author, Jeff Cogswell, gives his ideas on the future of MySQL by factoring in sound business sense as well as history, now that Oracle has acquired it through its purchase of Sun Microsystems. With 11 million installations, MySQL has demonstrated just how valuable the software can be, proving that it is not just some “small, run-of-the-mill Open-source Package.” MySQL has become a serious competitor to Oracle’s database products, especially now “in the down economy, with cash-strapped businesses increasingly looking to free and open-source software.” The article then looks at the history of Oracle’s past acquisition deals; for example, “in early 2007, Oracle acquired Hyperion, a maker of business intelligence products.” Cogswell notes that Hyperion’s software was integrated into Oracle’s overall product set while remaining largely intact. In general, Oracle’s “acquired companies’ products did not go away but were integrated into Oracle’s existing product lines.” He wraps up the article by saying that “It’s very possible that the name will change, but, based on previous Oracle acquisitions, it’s doubtful that MySQL will disappear.” read more...

Tips in using MySQL in AWS {Comments Off on Tips in using MySQL in AWS}

by Jim J
The next generation of data storage is cloud computing, and with Amazon’s cloud services a few things can be done to overcome most of the performance issues. Amazon uses Elastic Block Store (EBS), which provisions network storage to its users and whose performance varies depending on availability; this is its greatest weakness. One way to mitigate this is to take advantage of caching in browsers, objects, queries, and data. Many MySQL settings can be tuned for performance benefits, such as increasing buffer sizes, and even things like using a high-performance file system like XFS can help. In addition to performance, DBAs need to ensure data redundancy, making use of multiple data centers spread across the globe so as to have options when a server goes down; recovery is equally important, and DBAs need to automate getting their servers back up and running. Some companies like Netflix even run programs that randomly take out servers during the day to test their ability to stay online. read more...
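The buffer-size tuning mentioned above usually lives in MySQL’s configuration file. A minimal my.cnf sketch of that idea, with sizes that are purely illustrative assumptions (the right values depend on instance memory and workload, not these numbers):

```ini
[mysqld]
# A larger InnoDB buffer pool keeps hot pages in RAM,
# cutting round-trips to the slower, variable-latency EBS volumes
innodb_buffer_pool_size = 4G

# Larger redo logs absorb write bursts before they reach EBS
innodb_log_file_size = 512M
```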

Vertical to Horizontal is that the question? {1}

by CyberChic
According to Chen and Ordonez in their article “Horizontal Aggregations in SQL to Prepare Data Sets for Data Mining Analysis,” there has been difficulty in the past preparing database information for data mining. Data mining usually uses aggregate information to start the mining process, and with SQL the data derived from an aggregate query is usually in a vertical layout. They give detailed information in their paper explaining how to evaluate and optimize horizontal aggregations of data. Some of the benefits of doing this process in the database itself are reduced manual work in data preparation and better data security. read more...
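The vertical-to-horizontal idea can be approximated in plain SQL with conditional aggregation. A minimal sketch (the table and column names are invented for illustration, and SQLite stands in for a real server; the paper’s own operators are more general):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (store TEXT, quarter TEXT, amount INT);
INSERT INTO sales VALUES
  ('A', 'Q1', 10), ('A', 'Q2', 20),
  ('B', 'Q1', 5),  ('B', 'Q2', 15);
""")

# Vertical aggregation: one row per (store, quarter) group
vertical = conn.execute(
    "SELECT store, quarter, SUM(amount) FROM sales GROUP BY store, quarter"
).fetchall()

# Horizontal aggregation: quarters pivoted into columns, one row per
# store -- the flat shape data-mining tools usually expect
horizontal = conn.execute("""
    SELECT store,
           SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2
    FROM sales
    GROUP BY store
    ORDER BY store
""").fetchall()

print(horizontal)  # [('A', 10, 20), ('B', 5, 15)]
```

Doing the pivot inside the database, as the authors advocate, avoids exporting the vertical result and reshaping it by hand in an external tool.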

Easing Into SQL {Comments Off on Easing Into SQL}

by Jonathan N
This article talks about how difficult a task it can be to master SQL. In it, the author, Susan Harkins, shows us how to use the query design grid to create SQL statements and then how to run the statements in the Immediate window to debug them. First, Harkins speaks about building a SQL statement in the query design grid. As users we are very likely to create errors and typos, so she recommends letting Access do most of the work for us: Access produces an equivalent SQL statement for every query. To begin, you first build a query like any normal query by clicking Create Query in the toolbar. You then drag fields from the field list to the grid, build relationships, and add criteria. Once you have completed the query, you can click the View button and choose SQL View, and Access will display the query’s equivalent SQL statement. Then you can highlight the statement and copy it to a module. Next, Harkins writes about how we can debug a SQL statement in the query design grid. Chances are statements won’t run correctly the first time after being moved from the design grid to a module; there is a short list available in the link below that will help pinpoint the problem. After completing these steps, you can run the statement by clicking the Run button on the toolbar. read more...
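The statement Access shows in SQL View for a simple grid-built query is a plain SELECT with a join, a criterion, and a sort. A sketch of that kind of statement, run here against SQLite for illustration (table and field names are invented, and Access’s own dialect adds brackets and a trailing semicolon):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT, City TEXT);
CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, Total REAL);
INSERT INTO Customers VALUES (1, 'Acme', 'Pomona'), (2, 'Globex', 'Chino');
INSERT INTO Orders VALUES (10, 1, 99.5), (11, 2, 12.0), (12, 1, 40.0);
""")

# The sort of SQL the design grid generates after you drag two fields,
# draw the relationship, and add a City criterion and a sort
sql = """
SELECT Customers.Name, Orders.Total
FROM Customers
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
WHERE Customers.City = 'Pomona'
ORDER BY Orders.Total DESC
"""
rows = conn.execute(sql).fetchall()
print(rows)  # [('Acme', 99.5), ('Acme', 40.0)]
```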

Will Microsoft’s New SQL Server Be a Big Hit? {1}

by Renee L
In his article “Microsoft’s SQL Server 2012 Has Muscle, but Database Battle is Fierce,” Mark Fontecchio examines the new features of Microsoft’s new version of SQL Server and what it has to offer. Some of the great features added to SQL Server 2012 are AlwaysOn, which allows a set of databases to fail over as a single unit; xVelocity, a feature that speeds up querying; and Power View and PowerPivot, which provide self-service business intelligence. However, Fontecchio argues that, even though SQL Server 2012 is probably one of the best releases Microsoft has made, it may not be enough to overcome leading database systems such as Oracle’s database and IBM’s DB2. Experts say that there are limitations to the new SQL Server: it must be run on x86 servers with a Windows operating system, and when using xVelocity, indexes cannot be updated, so you must rebuild them whenever you want to add more data. In addition, companies do not just change database systems no matter how good the price is. It is hard to change databases, and companies will not do so unless the system is really worth it. All in all, Microsoft customers will be happy with the new SQL Server and its added features; however, it may not make a big impact on enterprises. read more...

Mining Frequent Pattern From Spatial Databases {Comments Off on Mining Frequent Pattern From Spatial Databases}

by Ronny W
Every database has some sort of pattern: some tables and relationships are requested more often than others, and the frequency of those requests can help people understand more about how a database is used. The traditional method of mining frequent patterns has been the FP-growth algorithm. The authors of the conference paper propose the FPAR/FRAR (Frequent Positive Association Rule/Frequent Negative Association Rule) algorithm, an improvement on FP-growth. The other proposed method is “an enhancement of the improved algorithm by a numerical method based on SQL for generating frequent patterns known as Transaction Frequent Pattern (TFP) Tree is proposed to reduces the storage space of the spatial dataset and overcomes some limitations of the previous method” (Tripathy, 2012). They go over the steps of the different methods. Association rule mining implemented with SQL is a logical choice for mining frequent patterns in spatial databases.
We have been overwhelmed with SQL information in class, but here is more. The frequency function was discussed in class to show IT professionals how to partition servers for quicker searches. This knowledge can help improve the time it takes to retrieve data. It can also help identify which data is used the most, which should live on a solid-state drive while less frequently retrieved data sits on a regular hard drive, improving database query time.
This knowledge of data mining and efficiency really makes a big difference when it comes to big databases. A small company’s database would probably not have much of a problem no matter which method it uses, but as the company grows, it needs better querying and data-retrieval methods. This matters most for really large databases like Google’s search engine. It is always good to try to improve database performance in different ways. read more...
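The support counting at the heart of all these algorithms can be sketched in a few lines. This is only the naive pair-counting that FP-growth and the paper’s TFP tree are designed to improve upon (the transactions and threshold are invented for illustration):

```python
from collections import Counter
from itertools import combinations

transactions = [
    {'bread', 'milk'},
    {'bread', 'diapers', 'beer'},
    {'milk', 'diapers', 'beer'},
    {'bread', 'milk', 'diapers'},
]
min_support = 2  # an itemset must appear in at least 2 transactions

# Naive frequent-pair mining: count every 2-item combination in every
# transaction; FP-growth avoids this full enumeration via a prefix tree
counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        counts[pair] += 1

frequent = {pair: n for pair, n in counts.items() if n >= min_support}
print(frequent)
```

Even on this toy data the combinatorial blow-up is visible: six candidate pairs for four tiny transactions, which is exactly why tree-based methods win on large spatial datasets.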

High Capacity Databases {2}

by Anthony T
The article is about Fusion-io and the capabilities it has achieved through integration with a 64-core server. The company boasts over one million transactions per second, made possible by a Microsoft SQL Server database. The article mentions the importance of this milestone with regard to the shopping season. It also gives details on how this immense number of transactions was achieved: the test they ran consisted of 150 billion rows placed in one table in Microsoft SQL Server, and the outcome was over one million singleton inserts per second. The article went on to give a very brief overview of the company itself. Fusion-io focuses on data decentralization, which significantly increases the processing capabilities of data centers. Data decentralization is basically the process of moving “active” data from a central location to the server. read more...

PASS is at it again! {Comments Off on PASS is at it again!}

by Evin C
In the beginning of the year I presented an article about free SQL training via the organization PASS (Professional Association for SQL Server). Now I am bringing you more information on the organization and the training it has provided for the SQL community! PASS partnered with Dell and MaximumASP to bring multiple virtual lab environments for training in Microsoft SQL Server 2008 R2, SharePoint 2010, and Office 2010 right from the user’s desktop! These virtual labs were provided through a website released when the training occurred (in this case it was PreviewSQLServer.com). These training sessions were brought to the SQL community in 2010 and are sadly no longer available for today’s users. BUT! Given the consistent track record of PASS and its partner organizations, I believe it would be safe to be on the lookout for possible future trainings in SQL Server, with the new release of Microsoft SQL Server 2012 on its way and all the exciting new features they are going to need to address. read more...

Coming Soon to a Business Near You: Microsoft SQL Server 2012! {4}

by Jongwoo Y

“The biggest improvement is that it’s been infused with a set of new features designed to handle larger workloads — whether on premises or in the cloud.”(Preimesberger, 2012)

This is definitely exciting news for the IT sector, as database programs are constantly evolving with the release of Microsoft SQL Server 2012. This new release will help businesses become more efficient with better technology and exciting new features. Businesses will be able to utilize new features that help with cloud technology and with handling “big data.” New features of SQL Server 2012 include new storage options that help with the analysis of large workloads (Preimesberger, 2012). For example, the new version includes an in-memory column-oriented database to improve analytics performance (Preimesberger, 2012). Microsoft actually reached out to Hortonworks, a company built around “Hadoop,” the popular distributed data-processing framework, in order to implement an effective big data analytics feature. This will definitely help sales of the application, as big businesses are always in desperate need of better ways to analyze their big data. This new Hortonworks service will enable consumers to gain relevant insights from complex data stores hosted on cloud servers by merging data from SQL Server 2012, Excel, and other Office applications (Preimesberger, 2012). April 1st cannot come soon enough for many companies and their IT departments, as database applications are becoming more efficient and filling the gaps created by the rising popularity of cloud computing. read more...

The Art of SQL Injection {3}

by Ermie C
In this peer-reviewed article, the authors explain the many ways SQL databases can be infiltrated, with techniques such as changing the attributes of the entities in the database. They explain how SQL injections can be carried out against any web application that is connected to a database. Impressively, they also present a technique for detecting SQL injections. It relies on creating backups of the original database queries and attributes: they have created a SQL algorithm that compares the dynamic (present) database to the static (past) database and, in doing so, detects the differences in queries and attributes. However, this is a double-edged sword: just as SQL injection is possible against any web application with a database, these techniques to detect and prevent injections can also be implemented anywhere. read more...
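The classic injection, and the parameterized-query defense, can be shown in a few lines. This sketch uses SQLite and an invented users table purely for illustration; it does not reproduce the article’s own detection algorithm:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: user input concatenated straight into the query text --
# the injected OR '1'='1' makes the WHERE clause always true
unsafe = conn.execute(
    "SELECT name FROM users WHERE password = '" + malicious + "'"
).fetchall()

# Safe: a parameterized query treats the input as a plain value,
# so the quote characters never reach the SQL parser
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice',)] -- authentication bypassed
print(safe)    # [] -- no row has that literal password
```

The attack works precisely because the concatenated string changes the query’s structure, which is also why structure-comparison approaches like the one described above can detect it.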