DBMS Assignment
1. Find the id and title of all courses which do not require any prerequisites.
2. Find the names of students who have not taken any Biology department courses.
3. Write SQL update queries to perform the following (queries 2 and 4 are pretty
meaningless, but still fun to write):
2. Increase the tot_creds of all students who have taken the course titled
"Genetics" by the number of credits associated with that course.
3. For all instructors who are advisors of at least 2 students, increase their
salary by 50000.
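As a sketch of the "Genetics" update, assuming the textbook university schema (student, takes, course; the credit-total column is usually spelled tot_cred), the query could look like:

```sql
-- Sketch only: table/column names follow the standard university schema
-- and may differ from your local copy.
update student
set tot_cred = tot_cred +
    (select credits from course where title = 'Genetics')
where id in (select t.id
             from takes t, course c
             where t.course_id = c.course_id
               and c.title = 'Genetics');
```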
Assignment 8: SQL DDL and updates
1. Each offering of a course (i.e. a section) can have many teaching assistants; each
teaching assistant is a student. Extend the existing schema (add/alter tables) to
accommodate this requirement.
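One possible way to extend the schema, assuming the standard university tables section and student (all names and types here are illustrative, not the required answer):

```sql
-- Hypothetical new table: one row per (section, teaching assistant) pair.
create table teaching_assistant (
  course_id  varchar(8),
  sec_id     varchar(8),
  semester   varchar(6),
  year       numeric(4,0),
  student_id varchar(5),
  primary key (course_id, sec_id, semester, year, student_id),
  foreign key (course_id, sec_id, semester, year) references section,
  foreign key (student_id) references student
);
```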
2. According to the existing schema, one student can have only one advisor.
1. Alter the schema to allow a student to have multiple advisors and make sure that
you are able to insert multiple advisors for a student.
2. Write SQL queries on the modified schema. You will need to insert data to ensure
the query results are not empty.
2. Find all students who are co-advised by Prof. Srinivas and Prof. Ashok.
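A sketch of the co-advisor query, assuming the modified advisor(s_ID, i_ID) table now allows several rows per student and that the professors' names appear in instructor.name:

```sql
-- Sketch: students advised by both professors (the names are sample data).
select s.id, s.name
from student s
where exists (select 1 from advisor a, instructor i
              where a.s_id = s.id and a.i_id = i.id and i.name = 'Srinivas')
  and exists (select 1 from advisor a, instructor i
              where a.s_id = s.id and a.i_id = i.id and i.name = 'Ashok');
```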
1. Delete all information in the database which is more than 10 years old. Add data
as necessary to verify your query.
2. Delete the course CS 101. Any course which has CS 101 as a prereq should
remove CS 101 from its prereq set. Create a cascade constraint to enforce the
above rule, and verify that it is working.
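One way to set this up, assuming prereq(course_id, prereq_id) with a foreign key on prereq_id referencing course (the existing constraint name below is a guess; check the real one with \d prereq):

```sql
-- Replace the foreign key on prereq.prereq_id with a cascading one,
-- so deleting a course removes it from other courses' prereq sets.
alter table prereq
  drop constraint prereq_prereq_id_fkey,   -- actual name may differ
  add constraint prereq_prereq_id_fkey
      foreign key (prereq_id) references course
      on delete cascade;

delete from course where course_id = 'CS-101';  -- id spelling may differ locally
```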
Assignment 9: Schema creation and constraints
1. Modify the trains schema which we saw earlier (available here), to create constraints to
check the following:
2. When a train is removed from service, all its halts should be deleted.
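The halt-deletion rule can be enforced with a cascading foreign key; a sketch, assuming the trains schema has tables train and halt with halt.train_id referencing train:

```sql
-- Hypothetical: deleting a train deletes all of its halts.
alter table halt
  add constraint halt_train_fkey
      foreign key (train_id) references train
      on delete cascade;
```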
3. Write SQL Create table statements to create the following schema. Include all
appropriate primary and foreign key declarations. Choose appropriate types for each
attribute.
1. Stations
2. Tracks, connecting stations. You can assume for simplicity that only one track
exists between any two stations. All the tracks put together form a graph.
4. Train schedules recording what time a train passes through each station on its
route. You can assume for simplicity that each train reaches its destination on the
same day, and that every train runs every day. Also for simplicity, assume that for
each train, for each station on its route, you store (a) time in, (b) time out (same as
time in if it does not stop), and (c) a sequence number so the stations in the route of
a train can be ordered by sequence number.
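One possible set of CREATE TABLE statements for the schema described above (all names and types are illustrative choices, not the required answer):

```sql
create table station (
  station_id varchar(10) primary key,
  name       varchar(50)
);

-- One track per pair of stations; together the tracks form a graph.
create table track (
  station1 varchar(10) references station,
  station2 varchar(10) references station,
  primary key (station1, station2)
);

create table train (
  train_id varchar(10) primary key,
  name     varchar(50)
);

-- Every train runs daily and finishes within the day, so times suffice.
create table schedule (
  train_id   varchar(10) references train,
  station_id varchar(10) references station,
  seq_no     integer not null,  -- orders the stations on the route
  time_in    time,
  time_out   time,              -- same as time_in if the train does not stop
  primary key (train_id, station_id),
  unique (train_id, seq_no)
);
```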
ER models can be drawn using any of several tools. Among these, Dia is a very convenient open-source tool which runs on multiple platforms including Linux, Windows and MacOS.
Dia has a number of "sheets" each of which includes diagram objects for different modeling
tools, such as UML, ER diagrams, flowcharts, etc.
The ER tool has objects for entities, relationships, attributes (using the oval notation), edges,
and so on. The properties box for each of these elements allows you to specify cardinality
constraints, total participation, identifying relationships, etc.
To create the ER notation used in the Database System Concepts 6th Edition book, we use the
UML class objects instead of ER entity objects in Dia. Open the properties of the class
object, give it a name (the entity set name), and add attributes. Select Visibility
"Implementation" to remove the + before the attribute name, and select Class Scope to
underline an attribute.
Diagrams drawn using Dia can be embedded in other documents by exporting to other
formats such as .eps (for LaTeX) or .jpg (for Word or OpenOffice/LibreOffice), and including the
exported file in the document.
ASSIGNMENT 11
This assignment has several parts. In the first part we work on basic manipulation of functional
dependencies. Next, we use functional dependencies to normalize some toy relations into
BCNF/3NF. Finally, we take a real-life example, figure out the functional dependencies
involved, and develop a normalized database design.
3. AB --> C, C --> D
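For instance, for FD set 3, computing the attribute closure of AB shows it is a key (assuming the relation schema is R(A, B, C, D)):

```
(AB)+ = AB
      = ABC    (apply AB -> C)
      = ABCD   (apply C -> D)
Since (AB)+ contains all attributes, AB is a candidate key of R.
C -> D then violates BCNF, since C is not a superkey.
```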
Indexing
1. Use the script comments-ddl.sql provided to create a new table called comments.
2. Use the script comments-insert.sql provided to populate a large dataset (100,000 rows) into the
comments table (this may take a minute or two to complete). NOTE: this is a large
(11MB) file, so we suggest coordinators download a copy and make it available locally
instead of downloading it hundreds of times from the IITB server. Some people found
pgadmin hanging when they used it to load such a large data file. To avoid this problem, use
the following steps to insert the data:
4. You will see "INSERT 0 1" being printed for every insert. It will take some time
but will terminate successfully. You can hit enter once in a while to see if it is still
in progress.
NOTE: PostgreSQL may give error messages about not being administrator; these are for
tables other than the ones you created, so you can ignore these messages.
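The loading steps amount to running the insert script through psql rather than pgAdmin3; for example, inside a psql session connected to your database (the file path is a placeholder):

```sql
\i comments-insert.sql
```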
4. To record the time taken by a query, run it from pgAdmin3, and then select the Messages
tab in the Output pane at the bottom. The execution time will be shown here.
Now run the following queries and record the time taken; report the times in your
submission. (Goal: to show a query whose plan uses an index, and another that cannot
use any index and must do an expensive scan on the same relation, and show the
difference in run times. Both queries retrieve at most a single row (by using a selection on
primary key for the first query, and a selection on two columns, for the second query).
Later we will see the actual query plans.)
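Hypothetical versions of the two queries, based on the description (assuming id is the primary key of comments):

```sql
-- Query 4.1: selection on the primary key; can use the primary key index.
select * from comments where id = 99982;

-- Query 4.2: selection on two non-key columns; no index is available initially.
select * from comments where rating = 5 and item_id = 99982;
```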
5. Find the query plan for each of the above queries, by prefixing the query with the explain
keyword. For example:
explain select * from comments where rating = 5 and item_id = 99982;
Submit the query plans as part of your submission.
o The estimated cost is shown as c1..c2, where c1 is the cost for the first tuple, and
c2 the cost for the last tuple in the result. Can you think of a reason why c1 may
be useful?
2 Now create an index on the unindexed attribute rating of the comments relation by
executing
create index comments_rating on comments(rating);
3 Rerun the preceding queries from step 4 and record the time as well as the query plan.
Report these in your submission.
4 Now similarly create an index on item_id attribute of the comments relation. Rerun the
preceding queries from step 3 and record the time as well as the query plan. Report these
in your submission. Compare the time taken for queries 4.1 and 4.2.
5 Find the plans for the following queries, and submit them as part of your assignment
submission. (The first one uses a condition only on an indexed attribute; the second uses
one conjunct on an indexed attribute and one conjunct on an unindexed one.)
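Hypothetical versions of these two queries (item_id is indexed by this point; the unindexed column name is purely illustrative):

```sql
-- Condition only on an indexed attribute.
select * from comments where item_id = 99982;

-- One conjunct on an indexed attribute, one on an unindexed attribute
-- ("author" is a made-up column name; substitute a real unindexed column).
select * from comments where item_id = 99982 and author = 'alice';
```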
1. Reload the university schema with a larger dataset. First drop the tables using this script,
and recreate them using the DDL. Then load the larger dataset available here. Again, this file
is quite large (2.2 MB); coordinators, please download a copy and make it available to
everyone instead of everyone downloading it separately. Follow steps similar to the earlier
ones to upload this data to PostgreSQL using psql, instead of pgAdmin3.
2. Run each of the following queries to find the time taken, and use the explain feature to
find the plan used for each of the queries. By studying the plans, explain why each query
either ran fast or ran slowly. Submit the time, execution plan and a brief explanation for
each query.
Transactions
1. In this exercise, you will see how to roll back or commit transactions. By default
PostgreSQL commits each SQL statement as soon as it is submitted. To prevent the transaction
from committing immediately, you have to issue the command begin; to tell PostgreSQL not to
commit immediately. You can issue any number of SQL statements after this, and then
either commit; to commit the transaction, or rollback; to roll it back. To see
the effect, execute the following commands one at a time:
o begin;
o rollback;
Note: please do not set transaction isolation level as serializable right now, you will do it in
later exercises.
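A minimal sequence showing the effect of rollback (the student id is sample data; use any id present in your table):

```sql
begin;
update student set tot_cred = tot_cred + 1 where id = '00128';
select tot_cred from student where id = '00128';  -- updated value visible here
rollback;
select tot_cred from student where id = '00128';  -- original value restored
```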
In the read committed isolation level, each statement sees the effects of all preceding
transactions that have committed, but does not see the effects of concurrently running
transactions (i.e. updates that have not yet been committed). This low level of consistency
can cause problems with transactions, and it is safer to use the serializable level if
concurrent updates occur with multiple-statement transactions.
In snapshot isolation, a transaction gets a conceptual snapshot of the data as of the time it
started, and all values it reads are as per this snapshot. If two transactions
concurrently update the same data item, one of them will be rolled back. Snapshot isolation does
NOT guarantee serializability of transactions. For example, it is possible that transaction T1
reads A and performs an update B =A, while transaction T2 reads B and performs an update
A=B. In this case, there is no conflict on the update, since different tuples are updated by the two
transactions, but the execution may not be serializable: in any serial schedule, A and B will
become the same value, but with snapshot isolation, they may exchange values.
Oracle uses snapshot isolation for concurrency control when asked to set the isolation level to
serializable, even though it does not really guarantee serializability. Microsoft SQL Server
supports snapshot isolation, but uses two-phase locking for the serializable isolation level.
PostgreSQL versions prior to 9.1 used snapshot isolation when the isolation level was set to
serializable.
However, since version 9.1, PostgreSQL uses an improved version of snapshot isolation, called
serializable snapshot isolation, when asked to set the isolation level to serializable. This
mechanism in fact offers true serializability, unlike plain snapshot isolation.
2. In this exercise you will run transactions concurrently from two different pgAdmin3
windows, to see how updates by one transaction affect another.
o Open two pgAdmin3 connections to the same database. Execute the following
commands in sequence in the first window
1. begin ;
1. begin;
Look at the value of tot_cred. Can you figure out why you got the result
that you saw? What does this tell you about concurrency control in
PostgreSQL?
1. commit;
1. commit;
3 Now, let us try to update the same tuple concurrently from two windows. In one window
execute
1. begin;
1. begin;
See what happens at this point. The query appears to hang: PostgreSQL is
waiting for the other transaction that updates student to complete.
Now in the first window, execute
commit;
and see what happens in the second window.
Then execute commit; in the second window and see what happens.
(b) Next, do the same for the second query (both transactions update).
4 Open two connections (two new query windows) and type the following:
1. Run the query: select id, salary from instructor where id in('22222', '15151') and note the results
2. Begin a transaction
4. Run this query in window 1: update instructor set salary = (select salary from
instructor where id = '22222') where id = '15151';
5. Run this query in window 2: update instructor set salary = (select salary from
instructor where id = '15151') where id = '22222';
6. commit window 1
7. commit window 2
8. What happened above? Check the state of the system by running the query
select id, salary from instructor where id in('22222', '15151')
9. (If you have access to a version of PostgreSQL which is 9.0 or older, do the following
on that system: first execute steps 1 to 7 on that system. Then run the query:
select id, salary from instructor where id in ('22222', '15151')
and compare the results. Is this equivalent to any serializable schedule?)
ASSIGNMENT 14 : How to connect to MySQL database using PHP
Before you can get content out of your MySQL database, you must know how to establish a connection to MySQL
from inside a PHP script. Performing basic queries from within MySQL is very easy. This example will show you how
to get up and running.
The first thing to do is connect to the database. The function to connect to MySQL is called mysql_connect. This
function returns a resource which is a pointer to the database connection. It's also called a database handle, and we'll
use it in later functions. Don't forget to replace your connection details.
<?php
$username = "your_name";
$password = "your_password";
$hostname = "localhost";
//connect to the database
$dbhandle = mysql_connect($hostname, $username, $password)
    or die("Unable to connect to MySQL");
?>
<?php
//select a database to work with
$selected = mysql_select_db("examples",$dbhandle)
or die("Could not select examples");
?>
Now that you're connected, let's try to run some queries. The function used to perform queries is named mysql_query.
The function returns a resource that contains the results of the query, called the result set. To examine the result set
you need to use the mysql_fetch_array function, which returns the results row by row. In the case of a query that
doesn't return results, the function simply returns a value true or false.
A convenient way to access all the rows is with a while loop. Let's add the code to our script:
<?php
//execute the SQL query and return records
$result = mysql_query("SELECT id, model, year FROM cars");
//fetch the data from the database
while ($row = mysql_fetch_array($result)) {
    echo "ID: ".$row['id']." Name: ".$row['model']." Year: ".$row['year']."<br>";
}
?>
Finally, we close the connection. Although this isn't strictly necessary, since PHP will automatically close the connection
when the script ends, you should get into the habit of closing what you open.
<?php
//close the connection
mysql_close($dbhandle);
?>
Here is the code in full:
<?php
$username = "your_name";
$password = "your_password";
$hostname = "localhost";