The SQL pillars: DDL, DML, DCL, TCL, and why

(diagram: the database hierarchy)

Considering the database hierarchy in the diagram, SQL statements are classified into four pillars:

  • DDL (Data Definition Language)

Comprising instructions that work on the hierarchy from the table level upwards:

create, alter, drop, rename…

  • DML (Data Manipulation Language)

Working on fields and records with:

select, insert, update, delete, explain…

  • DCL (Data Control Language)

For administrative purposes, working with users and permissions with:

grant, revoke…

  • TCL (Transaction Control Language)

With magic power to undo or save changes:

commit, rollback, savepoint, set transaction…

NOTE: There are some differences from one database engine to another; for example, Oracle's hierarchy encloses groups of tables in schemas, and SQL statements differ a little because of this. No matter what, though, the core, standard SQL has never failed me.

WHY THE CLASSIFICATION? TCL usage

If you are like me, you have always asked yourself why we bother knowing which instruction belongs to which group. Well, the answer is that interesting stuff happens when theory meets reality. Let's first talk about TCL…

Transaction Control Language gives you the magic power to treat several statements as a single unit of work: you can mark savepoints along the way, undo part or all of the work with ROLLBACK, or make everything permanent with COMMIT.
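To make the magic concrete, here is a minimal sketch of a transaction at work. The accounts table and the amounts are made up for the example, and the statement that opens the transaction varies a little from engine to engine:

START TRANSACTION;                                         -- some engines use BEGIN or SET TRANSACTION

UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- DML inside the transaction
SAVEPOINT after_withdrawal;                                -- TCL: a point we can come back to

UPDATE accounts SET balance = balance + 100 WHERE id = 9;  -- oops, wrong account
ROLLBACK TO SAVEPOINT after_withdrawal;                    -- TCL: undo only the last update

UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- the correct deposit
COMMIT;                                                    -- TCL: make everything permanent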

New Improved Version of Raspberry Pi released

A new model of the famous credit-card sized computer has been released. This new model, called the Raspberry Pi B+, is already available for order. Here you'll see its most important new features.

The computer is designed and manufactured in the UK by the non-profit Raspberry Pi Foundation. This redesign of the credit-card sized computer is meant to promote the development of bigger projects on it; that's why the new model (called B+) contains extra connectors.


The Model B+ uses the same BCM2835 application processor (700MHz) as the Model B. It has 512 MB RAM.

  • More GPIO. The GPIO header has grown to 40 pins, while retaining the same pinout for the first 26 pins as the Model B.
  • More USB. It has 4 USB 2.0 ports, compared to 2 on the Model B, and better hotplug and overcurrent behaviour.
  • Micro SD. The old friction-fit SD card socket has been replaced with a much nicer push-push micro SD version.
  • Better audio. The audio circuit incorporates a dedicated low-noise power supply.

‘We’ve been blown away by the projects that have been made possible through the original B boards and, with its new features, the B+ has massive potential to push the boundaries and drive further innovation,’ said Eben Upton, CEO of Raspberry Pi Trading.

(diagram: Model B+ mechanical specification)

It is powered by micro USB with AV connections through either HDMI or a new four-pole connector replacing the existing analogue audio and composite video ports. The SD card slot has been replaced with a micro-SD. The B+ board also now uses less power (600mA) than the Model B Board (750mA) when running.

It is still the same price as the B model.

You can order your own Raspberry Pi B+ at:

 

Images thanks to:

 

Maven: Apache POMs to standardize project building

Maven, a Yiddish word meaning accumulator of knowledge, is an Apache project that standardizes the way we build projects. It mainly works with Java, but can also be used with other languages. Maven competes with traditional build tools; think of it as an improved Ant. This article aims to show you what, why and how.


Maven works on top of an XML file named the POM (Project Object Model) that describes the project's dependencies on external modules and components, the build order, directories and required plug-ins. It dynamically downloads the required external dependencies from a central repository the moment they are specified, though you can declare different sources. Follow me through a practical example to see how it works!

INSTALLATION

There are two ways: you can work directly from a development tool with Maven integration, like Eclipse IDE for Java EE, or use the command prompt. Since the first way depends on each tool's own dynamics, we will use the "universal" command prompt.

STEP 1. To guarantee a successful installation we must be sure we have the Java JDK, so go and type:

java -version

If you find a problem here, Windows users can find help in this video and Linux/Unix users here.

STEP 2.

Windows users: download the program from its official source here. Unzip it, move the unzipped folder to Program Files and open bin/mvn.bat. If you have problems, it may be that you have the Java JRE and not the JDK, so go back to Step 1 and install it properly.

Linux and Unix: run the command

sudo apt-get install maven2

 

HELLO WORLD

Now that everybody has installed Maven, we can successfully type the command:

mvn -version

Using the command line, go into the folder you want to be your workspace. (New to the command line? Watch this video.) Right in your workspace, let's initialize a Maven project:

mvn archetype:create   -DgroupId=com.dps.maven2example -DartifactId=maven2example_logic

cd maven2example_logic

mvn test

mvn package

java -cp target/maven2example_logic-1.0-SNAPSHOT.jar com.dps.maven2example.App

If it succeeds, you will get a "Hello World!" message in return. If you are inside a private network, the very first command may fail. The solution is to configure your proxy in the Maven configuration: look for your Maven installation folder and open the file conf/settings.xml. There is a section for proxy configuration; uncomment it and fill it in, as in the sketch below.
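For reference, once uncommented, the proxy section of conf/settings.xml looks roughly like this; every value below (host, port, credentials) is a placeholder you must replace with your own network's data:

<settings>
  <proxies>
    <proxy>
      <id>my-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
      <username>proxyuser</username>
      <password>somepassword</password>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
</settings>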

ADDING NEW DEPENDENCIES

Open your pom.xml file. This is my view:

(screenshot: the generated pom.xml open in a text editor)

I strongly recommend using a good text editor to view your files; in case you feel curious, I'm using the Atom text editor. Atom is free and multi-platform.

See, we already have a dependency in our file. You may have noticed that some files were downloaded during the execution of your commands: this is what I referred to when I told you Maven dynamically downloads the required external dependencies the moment they are specified. Let's add another dependency, a Spring library, and see what happens.

STEP 1. Go to the main repository at http://mvnrepository.com/ (save this link) and search for "spring".

(screenshot: search results for "spring" at mvnrepository.com)

 

From the list of results, select the one for the Spring library. There you will find the:

  1. groupId
  2. artifactId
  3. version

STEP 2. Okay, let's add it to our POM, run "mvn package" and see how the whole thing gets downloaded. Now we can use the library's classes and methods (there is a sketch right after the snippet).

<dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring</artifactId>
      <version>2.5.6.SEC03</version>
</dependency>
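As a quick sanity check that the new dependency really reaches your classpath, you could call any class from the library inside the generated App.java. This is only a sketch, assuming the all-in-one spring artifact includes org.springframework.util.StringUtils; if "mvn package" still compiles after this change, the dependency was downloaded and resolved correctly (to actually run it you would also need the Spring jar on the runtime classpath):

// src/main/java/com/dps/maven2example/App.java (sketch)
package com.dps.maven2example;

import org.springframework.util.StringUtils;

public class App {
    public static void main(String[] args) {
        // Uses a Spring utility class just to prove the dependency resolves
        System.out.println(StringUtils.capitalize("hello world from spring!"));
    }
}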

MOVING TO ECLIPSE

Finally, we all know it is easier to have an IDE that reads our project's dependencies and auto-completes our code in real time. In my case, I like to use the Eclipse IDE. When you are running Eclipse (or another IDE), you may import your project via File > Import… > Maven > Existing Maven Projects > Browse, choosing the root directory of your project (in this case maven2example_logic).

Your project will then load, and everything under the "src" folder will show up as Java sources. Note you have a section named "Maven Dependencies" with all your libraries loaded inside, and now you are able to use the imported content.

(screenshot: the Maven project imported in Eclipse)

 

We are done for today. We had a very practical tour of Maven's world. Keep following ProgSchedule because it's getting better!

RESOURCES

Maven’s Apache site – What is Maven?

IBM – 5 things you didn't know about… Apache Maven

Wikipedia – Apache Maven

 

Analytical Engine: WORLD’S FIRST COMPUTER.

This post is historically important although technically irrelevant. It is about a mechanical machine called the "Analytical Engine", which is (perhaps) the very first design for a general-purpose computer known in the world. It was designed (but never fully built) in the first half of the 19th century by an English mathematician and engineer called Charles Babbage (December 1791 – October 1871).

The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician Charles Babbage.

It was first described in 1837 as the successor to Babbage's Difference Engine. It was a design for a mechanical computer, thus arguably the world's very first computer, although it was never completed (only partial models have been built in the decades since, for historical reasons).

(photo: Babbage's Analytical Engine, London)

This (huge) engine incorporated an arithmetic logic unit, control flow (conditional branching and loops) and integrated memory, making it the first design for a Turing-complete general-purpose computer on record.

Sadly, Babbage was never able to complete his inventions because of disagreements with his chief engineer and insufficient funding. The world had to wait until the 1940s for the first general-purpose computers to actually be built (a complete century later).

It is said that Babbage based his inventions on the Jacquard loom (a mechanical loom designed to simplify the manufacturing of complex patterns on textiles), invented by Joseph Marie Jacquard in 1801.

 

The input method to this engine was (or could be) via punched cards, the same as in Jacquard's loom. For output, the machine would have a printer, a curve plotter and a bell. The engine would also be able to punch numbers onto cards. There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit.

(photo: punched cards for the Analytical Engine)

An arithmetical unit (the “mill”) would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. The programming language to be employed by users was akin to modern day assembly languages.
In 1842, the Italian mathematician Luigi Menabrea, whom Babbage had met while travelling in Italy, wrote a description of the engine in French. In 1843, the description was translated into English and extensively annotated by Ada Byron, Countess of Lovelace, who had become interested in the engine eight years earlier.
In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine, she has been described as the first computer programmer. The modern computer programming language Ada is named in her honor.

It is necessary to remember that ENIAC (the world's first electronic general-purpose computer) was finished and announced in 1946, that is, 109 years after Babbage's Analytical Engine was first described. And this is why Babbage's engine was such a big deal, even though it was never finished :(

 

Images thanks to:

  • Karoly Lorentey: Punched cards.
  • Bruno Barral: Analytical engine.

Semantic Web: Opinion Mining and Sentiment Analysis (Part 2)

Good afternoon from Mexico. This post is the second part of a series I'm glad to present, since it is a highly trending topic in today's Computer Science world. In our first article, we had an introduction to semantics and how studying language requires separating it into different fields of study. We talked about some applications for today's world; also, do not miss the brief review of the Web Layers. Today we will dive into more philosophical and mathematical issues; just try not to fall into existential voids!

The definition of information is derived from statistical considerations. A message informing us of an event that has probability p conveys:

amount of information = -log₂(p) bits

Imagine you are programming a search engine like Google, and somebody inputs a very common word in your text field (the word "the", for instance). The probability of this very common word appearing in a particular document is close to 100%, so if you apply the formula you find out the word carries almost no information (-log₂(1) = 0 bits).

On the other hand, suppose somebody is looking for a very rare word like "homosapiens"; there will be a smaller, more selective set of documents containing this word. So we apply the formula with a very low value of p and we get a pretty high estimate of the information the word carries.

The amount of information grows as p approaches, but never reaches, zero.

Once we know how much information we have, we need to know where this information can be found: so we assign a score to each document and order them starting with the highest value. This score is the product of our previous formula times the frequency of the word inside the document we are analyzing.
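Here is a minimal sketch of that scoring idea in JavaScript; the function names and the toy document set are mine, not part of any real search engine:

// Information carried by a word: -log2(p), where p is the fraction of documents containing it
function information(word, documents) {
    var containing = documents.filter(function (doc) {
        return doc.indexOf(word) !== -1;
    }).length;
    var p = containing / documents.length;
    return containing === 0 ? 0 : -Math.log2(p);
}

// Score of one document for a word: information times the word's frequency in that document
function score(word, doc, documents) {
    var frequency = doc.split(" ").filter(function (w) { return w === word; }).length;
    return information(word, documents) * frequency;
}

var docs = ["the cat sat on the mat", "the dog barked", "homosapiens evolved"];
console.log(score("the", docs[0], docs));          // common word: low information, low score
console.log(score("homosapiens", docs[2], docs));  // rare word: higher information, higher score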

Is meaning/semantics just statistics?

In terms of Information Theory, language is a channel through which we try to convey a meaning, using spoken or written words. Since our language is highly redundant, these words just express an approximation of our meaning.

Context, history and experience allow us to predict a word missing in a common sentence. We need fewer words to explain a simple, common subject than a harder, stranger one. As Dr. Gautam puts it, language tries to maintain "uniform information density".

Similar documents have similar keywords. Similar keywords point to similar documents.

Techniques that exploit this relation between language and statistics try to uncover latent semantics. They try to discover the topics a collection of documents is talking about, to figure out which objects are similar, which objects represent the same kind of activity, and a collection of other deductions that we intuitively make in our everyday lives.

Remember when in kindergarten you were given a set of words and had to select which word did not belong to the group? The question of the day: how would you make a computer arrive at such deductions?

Have a nice week, see you soon at ProgSchedule ;)

Semantic Web: Opinion Mining and Sentiment Analysis (Part 1)

Welcome everybody to ProgSchedule. Today we will talk about a very cutting-edge topic, the Semantic Web, and one of its branches: Opinion Mining and Sentiment Analysis. Let me first give you a recap of our post The Web Layers: from Design to Programming to Architecture, where it is explained how the Web naturally evolved from just displaying and linking content (the Web 1.0, the Static Web), to interacting with the clients/users through JavaScript, cookies and server-side magic languages (the Web 2.0, the Dynamic Web), to the super structured and complex Web 3.0, better known as the Semantic Web. This phase of the Internet has the intention of not just documenting facts, but of building a base of knowledge, of comprehended content able to produce higher thoughts and output valuable information. It's a revolution and it's happening now; wanna be part of it?


Open your favorite web browser, go to Google's portal and type in something, an unknown. A series of results will then come out, all of them ordered by a convention evaluating their relevance and popularity. Now press the "pause" button. Maybe you didn't type the exact words for what you were really looking for, but somehow Google got to the right answer. How did Google know what I wanted?

Our language is minimally composed of words with spelling/orthographic rules; when we study at the level of words we are studying lexical matters. Thereafter, we have sentences built of words, all of them governed by syntax rules. On top of it all, we study the meaning of things through semantics.

Computer languages and math notation are examples of formal languages: they do not deviate from what they are designed to express, in contrast to ours, natural language, which tends to have double meanings and lots, tons, of exceptions to its rules. Google's magic comes to the front when a computer gets to understand a human, when the natural (the savage…) is tamed by its rigorous scientific rival.

Having all this background of studies chasing the meaning of things, let's focus on the matter of this title:

Opinion mining is a type of natural language processing for tracking the mood of the public about a particular product [REF].

What people think is a subject that's gaining more and more relevance every day. What would you do with people's opinions? Opinion Mining and Sentiment Analysis, by Bo Pang and Lillian Lee, has good examples:

Consider, for instance, the following scenario (…) A major computer manufacturer, disappointed with unexpectedly low sales, finds itself confronted with the question: “Why aren’t consumers buying our laptop?” While concrete data such as the laptop’s weight or the price of a competitor’s model are obviously relevant, answering this question requires focusing more on people’s personal views of such objective characteristics. Moreover, subjective judgments regarding intangible qualities — e.g., “the design is tacky” or “customer service was condescending” — or even misperceptions — e.g., “updated device drivers are not available” when such device drivers do in fact exist — must be taken into account as well.

There is a lot more in this field; I hope you enjoyed your visit at ProgSchedule. See you in the second part of this series!

 

Is there a bridge between Java and C/C++ in the oven?

Java, one of the world's most used high-level languages, may soon be able to interoperate with other major languages and paradigms (such as C, the world's most used language for programming embedded systems). All of this might become possible with the development of new interfaces in OpenJDK under an initiative called Project Panama.

Project Panama has been gaining a lot of traction since its original proposer, Oracle engineer John Rose, submitted the initiative to the OpenJDK community. The main goal is to develop interfaces that both Java and non-Java programmers will find useful and easy to use.

The true aim behind this proposal is an open-source way to let Java developers use non-Java programming interfaces, including popular ones such as those written in C and C++.

(photo: the Golden Gate Bridge)

“[An] effort to let Java programmers access non-Java APIs, including many interfaces commonly used by C programmers (written in C or similar languages), without requiring them to write anything but plain Java code,”

said Charles Nutter, who co-led the development of the JRuby language (a version of Ruby that runs atop the Java Virtual Machine) and who has emerged as an early and prominent champion of the proposal.

Of course you have to declare the main information normally required in Java to use such a library (the function you are going to call, its parameters and its return value). It is going to be transparent for the programmer, and there shouldn't be any problems using the new library.

The real challenge is making this new API flexible enough to handle libraries of different sorts, from different languages and platforms (C, C++, Windows DLLs, Linux and Solaris), without making it difficult to use.

“Project Panama provides an alternative to JNI (Java Native Interface) for interfacing Java code to code written in unmanaged languages. The proposal describes the technical mechanisms and conventions required for this to work,” said Forrester analyst John Rymer.

JNI serves as a Java intermediary that other languages can use to expose an API to Java code, but still, it is not quite the same; a rough sketch of the JNI route is shown below.
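For context, this is roughly what the JNI route looks like from the Java side today. The library name and the native method are invented for illustration, and the class will only run once a matching C implementation has been compiled and placed on the library path:

public class NativeMath {
    static {
        // Loads libnativemath.so (or nativemath.dll) from java.library.path
        System.loadLibrary("nativemath");
    }

    // The body lives in C; it is bound through a JNI header generated for this class
    public static native int add(int a, int b);

    public static void main(String[] args) {
        System.out.println(NativeMath.add(2, 3));
    }
}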

 

Main source:

Web Recipes: HTTP Cookie?

I guess they were hungry when they named it "cookie"; if I had had the power to establish the first protocols for the WWW, I would have said: let's put some choco-chips in it. But seriously, cookies play a substantial role in the Web; we owe them the capability to keep state information between a client and a server. Without them, we couldn't log in (start sessions) on our favourite web sites, which is one of their most common uses, and that alone means a lot.

BRIEF HISTORY

Once upon a time, there was a packet of data passed between programs named a "magic cookie". This packet could be sent and received unchanged; it was programmer Lou Montulli's initiative to use it in the e-commerce application he was developing back then for MCI, at Netscape.

The idea was not to retain partial transaction state on the server side, but to leave this task to the client using the application. The result was a reliable implementation of a shopping cart, a first cookie specification, and version 0.9 of Mosaic Netscape, the first browser able to support cookies. This all happened during 1994.

In particular, cookies were accepted by default, and users were not notified of the presence of cookies. The general public learned about them after the Financial Times published an article about them on February 12, 1996. In the same year, cookies received a lot of media attention, especially because of potential privacy implications. Cookies were discussed in two U.S. Federal Trade Commission hearings in 1996 and 1997. In February 1996, the working group identified third-party cookies as a considerable privacy threat. The specification produced by the group was eventually published as RFC 2109 in February 1997. It specifies that third-party cookies were either not allowed at all, or at least not enabled by default. At this time, advertising companies were already using third-party cookies [...] A definitive specification for cookies as used in the real world was published as RFC 6265 in April 2011 [WIKIPEDIA SOURCE]

NUTRITION FACTS

A cookie is a small piece of text, with an ID tag, stored in our client's browser as plain text. We, from the server side, give the instructions to store this information.

  • Cookies come and go as implicit information in the HTTP headers; we can use them as many times as we need to.
  • Cookies can be created, consulted, modified and destroyed.
  • Browsers have limits on the number and size of cookies that they can store; cookies are limited to about 4 KB each. More info here. Test here.
  • The information in the Set-Cookie and Cookie headers is unprotected: it is exposed to anyone, and it is your duty to encrypt it or do whatever is needed to keep it safe (see the sketch after this list).
  • Cookies can be given an expiration date or a maximum age.
  • Cookies are classified in two types: session cookies (destroyed when the client closes the browser) and persistent cookies (they remain on your computer until you delete them or they expire).
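A rough sketch of what travels in those headers (the names and values are invented for illustration): the server asks the browser to store two cookies in its response, and the browser sends them back on a later request.

HTTP/1.1 200 OK
Set-Cookie: sessionId=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT; Path=/
Set-Cookie: theme=dark; Max-Age=3600

GET /cart HTTP/1.1
Host: www.example.com
Cookie: sessionId=abc123; theme=dark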

PRACTICAL SCENARIOS

For instance, if we have a successful login -user and password are correct in the client’s login form-, we can save in a cookie whatever information we need in order to identify the individual user that has this cookie.

This context might be used to create, for example, a “shopping cart”, in which user selections can be aggregated before purchase, or a magazine browsing system, in which a user’s previous reading affects which offerings are presented [W3C REF].


What would you use cookies for? In 5 Web-Concept trends that could take you by surprise we have topic #5, RESTful vs SOAP, the two counterpart approaches for an application that either takes advantage of state retention or does not. The decision to use one or the other affects the robustness of your application.

Let’s put some chantilly to this. See you soon in ProgSchedule!


The Web Layers: from Design to Programming to Architecture

Good morning people, today we are launching the week at ProgSchedule with this topic: the Web Layers. More than dissecting the differences between concepts, it is important to know the role and a little of the history of each of these components. The design, the programming and the architecture conventions complement one another and make it possible to have a nice experience on the Web.

THE CLIENT-SERVER MODEL

To get started, we should understand the basic structure of a network: the client-server model. This model describes the relationship between two computer programs, in which the one known as the client makes a request and the other, known as the server, receives and answers it.

This is analogous to the communication scheme that separates the actors involved in transferring a message into a sender and a receiver. Imagine the whole process as a chat between a tribe of small computers turning to the wise server.

(diagram: the communication scheme vs. the client-server model)

THE WEB 1.0, HTTP, HTML… and web Design

Fine. So these small clients write small HTTP requests and the wise server answers with a professional Hyper-Text Markup Language (HTML) letter, with words wrapped into a layout with margins, a touch of colour and some info-graphics.

This is an approximation of what the Web 1.0 was made of: content and a bit of design. In fact, Web Design was born here.

THE WEB 2.0, client and server side Programming + CSS

Soon after, the wise servers of the baby Web 1.0 got overloaded with work; the small clients were too nosy and each one required a bunch of attention. The mailman was getting tired of so much coming and going, and forget about that endless complaint of "the connection is too slow"!

So grandpa said: okay, okay, I will pre-script some instructions here, so this whole bunch of repetitive processes can be executed on their own and I can concentrate on other things. I will include some attachments with your pretty HTML, so you can play a while with no need to come and see me.

The Web 2.0 flourished, and server-side programming languages came to the front to automate grandpa's work: the most popular among them nowadays are PHP, Python, Ruby, Perl, Java and Scala. JavaScript became the sole star, the one client-side programming language that every small computer could understand and execute.

Web Design evolved along with this, and CSS was one of its most remarkable advances, officially separating the content from the style. So we write HTML somewhere and specify the header's colour, the body's font style, the picture's margin and so on in a separate recipe.

THE WEB 3.0, Databases, Higher-level concepts and… Web Architecture.

Grandpa was still having difficulties (since when do I have so many grandchildren?); he had torrents of papers lost everywhere, so he had to build a library (a database, tadaaa!) to put all of this in order.

Web Architecture was naturally born with the Web 3.0. It is all the logistics needed to make all these divisions work: the design module with its branches of content and style, the programming also separated in two parts (one for the client, one for the server), and a new place called the data model, which is usually split into the database itself and a layer that works as an "index" for this DB.

Some higher-level concepts had a boom here, headed by semantics and artificial intelligence, but I will not talk about them at this moment.

New tools known as frameworks came to the rescue, each of them specialized in one server-side language. There are frameworks for PHP, Python, Ruby, Perl, Java, Scala… These frameworks establish a place for each of the components we've talked about and take upon themselves the task of making it all work.

We have seen a great variety of topics in this post in order to understand where Design, Programming and Architecture each fit. As you might have seen, it's all about evolution. See you soon in ProgSchedule.

 

Intro to JavaScript – the step from Web Design to Web Programming

Hello people, today I will continue writing a lot on these topics of Web Design, Programming and Architecture so we can all become sharp at them. Today's menu: JavaScript, the step from Web Design to Web Programming.

 

Since JavaScript came to the battlefield, the Internet has passed from a crawling baby (Web 1.0) to a playful child (Web 2.0). This language is so popular because we can start thinking in variables and algorithms, so our pages are not just a giant encyclopedia with links, but a friendly place able to manipulate its own content. JavaScript manipulates the DOM (the HTML Document Object Model), so you can have a paragraph with an id and a button linked to a JavaScript action like this:

<p id="myMessage"> Hello! </p>
<button type="button" onclick="sayByeBye()">Click Me!</button>

<script>
    function sayByeBye() {
        document.getElementById("myMessage").innerHTML = "Bye Bye!";
    }
</script>

So when we click it, it goes and changes our message without the need of refreshing! And that's not all: DOM manipulation includes modifying, deleting, creating and copying HTML elements and styles, as in the sketch below.
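Here is a small sketch of those other operations, reusing the myMessage paragraph from the snippet above (the new texts and style values are mine, just for illustration):

<script>
    // Create a brand-new paragraph and append it to the page
    var extra = document.createElement("p");
    extra.innerHTML = "I was created from JavaScript";
    document.body.appendChild(extra);

    // Modify the style of the original message
    var message = document.getElementById("myMessage");
    message.style.color = "red";

    // Copy the paragraph, then remove the copy again
    var copy = message.cloneNode(true);
    document.body.appendChild(copy);
    document.body.removeChild(copy);
</script>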

HOW TO…?

JavaScript can be embedded inside the HTML or referenced from a separate file. To follow Model-View-Controller principles, it is a much better habit to separate the code from the content. Referencing an external JavaScript file is as simple as referencing a link or an image:

<script src="myScript.js"></script>

Embedding JavaScript is better for testing purposes; once you've got the output you were looking for, you'd better move it out. Anyway, if you want to keep this method, the recommendation is to put the whole script at the bottom, inside the <body> tag, so the display time is reduced.

QUICK START

This is a collage of the most-used JS constructs:

 

<!DOCTYPE html>
<html>
<body>

<input type="number" id="num" required />
<button type="button" onclick="draw()">DRAW A TRIANGLE!</button>
<p id="triangle"></p>

<script>
    function draw() {
        // Read the value typed by the user into the #num input
        var number = document.getElementById("num").value;

        if (number > 50) {
            document.getElementById("triangle").innerHTML = "The number is too high!";
        } else {
            var result = "";

            // Build one row per line, each row one "x" longer than the previous
            for (var row = 0; row < number; row++) {
                for (var col = 0; col <= row; col++)
                    result = result + "x";
                result = result + "<br />";
            }

            document.getElementById("triangle").innerHTML = result;
        }
    }
</script>

</body>
</html>

 

Let’s check what we are doing in our script:

1. VARIABLE DECLARATIONS

We get started by declaring a variable named "number", where we store the value typed by our user into the HTML element with the id "num".

2. IF /ELSE

Next, if the number is higher than 50, we write the message "The number is too high!" into the paragraph. If the number is valid, we execute the next block of code…

3. FOR CYCLE

We count from zero to the number and hold our count in the variable row; inside there, we start a cycle for columns, counting from zero to the value of row. Every time we advance in our columns we add an "x" to our result, and every time we finish that inner cycle we close the row cycle by adding a line break. So in the end we have a triangle drawn with x's.

This is the view in my computer's text editor and the demo running in my browser:


 

Performance vs Privacy

JavaScript is one of the most used programming languages in the world; it is the main programming language of the Web. JavaScript is executed on the client side, meaning right in the browser, so everybody with access to your HTML also has access to your JavaScript. There is a bunch of programming languages that execute on the server side, so the information you manipulate there can stay private. The big difference here is that, since JavaScript is executed on the same computer where the HTML is displayed, it turns out to be much faster than server-side languages for this kind of work: JavaScript doesn't need to travel across the whole web every time it needs to perform an action.

So JavaScript is a very nice language you can use whenever it doesn't involve your company's private data, for example. Those other, server-side, languages can be delegated to database queries and things like that.

There is a lot of magic you can so with Javascript, we have the W3Schools tutorial, a lot of documentation and free online books about it through all the Web and it has a lot free libraries, JQuery and Ajax are the most popular among them. Have a nice week. You can see the triangle demo and further examples at my JS repo. See you soon in ProgSchedule ;)