The “Everything” Controllers: Operating Systems.


Operating Systems.

Answering the question of what an operating system is requires recalling some concepts. So far we have discussed the components of any computer system and made a basic division into two parts: the Hardware and the Software.

We also saw the definitions and varieties of both components and took a tour of their functions in the system.




Now it is time to see how the system works, and for that we have to define what an operating system is and its important role in this interrelation.


The operating system is a piece of Software that interacts with all the components of the Hardware (CPU and input/output devices), with the Software applications, and with the system user.

It is important to note that when we say “interacts” we mean communicating and interpreting the physical and virtual operating functions of all three components of the system.

In other words, we cannot speak of a computer system without the presence of an operating system. Its most important functions are:

  • Structure the file system to be used by the installed Software.
  • Host all the driver programs for the input/output peripherals connected to the computer.
  • Serve as the boot program of the system and keep monitoring its performance.
  • Provide a series of built-in system tools to solve compatibility problems between the installed Software and the input/output peripherals.
  • Manage system resources such as processor usage, working memory, and information storage systems.

Major Classifications of Operating Systems.

They can be classified mainly depending on:

  • Number of users: multi-user or single-user.
  • Number of simultaneous tasks: multi-tasking or single-tasking.
  • Execution time: real time or deferred time.

Most Relevant Operating Systems.

Over time, many operating systems have been created by different Software makers in partnership with Hardware manufacturers, and most of them have tried to establish a market standard. The systems that have endured over time are those that have achieved the most support from developers of commercial Software.




Grouped by platform, the large families that have endured over time are:

MS-DOS: It was the first standard in the market and practically captured the attention of the Hardware manufacturers and Software developers of its age.

Windows: From its first release with Windows 3.1 up to the current Windows 10, including the Windows NT network editions, this family of operating systems, thanks to its graphical user interface, has been the most popular of all.

Mac OS: In all its versions, and because of its exclusivity on the popular Apple brand of computers, it has been one of the most widely used operating systems.

Unix: Including its free counterpart, Linux, this operating system has continued to gain popularity thanks to its versatility in handling graphics, text, and mathematical operations, in addition to its multi-tasking and multi-user abilities.

The most recent mobile platforms include:

Android, from Google; iOS, for Apple's mobile devices; and, to a lesser degree, the BlackBerry BBOS.


It is important to note that there are currently very few applications exclusive to one operating system or another, since most of the large Software development companies, as well as a large number of independent developers, have worked on adapting their programs to the different operating systems.

Mobile application developers, in particular, have not given up on diversifying across all operating systems and their corresponding markets.

There are even applications in the Software market that simulate a (virtual) operating system, running one platform from within another installed system.




Algorithms and Flow Diagrams for Computer Programming.


Computer Programming.

No matter which programming language software developers plan to use, the reasoning and sequencing of their work should be the same. Remember, their job is to make the computer “run a routine” that gives us, the users, the results we are looking for. Doing this job requires that developers use two important tools: the formation of Algorithms and Flow Diagrams.




Operation of a Program.

Now, consider the way in which a computer solves a problem.

  • First, there must be a list of instructions, written by a person to solve the problem, which we know by the name of program, and it must be stored in the computer's resident memory.
  • When the order to start executing the program is given, the control unit is responsible for executing the instructions in the order determined by the program, handing off to the arithmetic-logic unit the execution of operations and comparisons and receiving the intermediate and final results.
  • Once the calculation process is finished, the control unit sends these results to the output units for use by the user. This procedure will be executed as many times as we tell the computer, and always in the same way.


We must always bear in mind that the computer only does what the user orders. This means that any mistakes in the results provided by the computer are merely consequences of mistakes made in the program being run or in the data entered by users.

The Formation of Algorithms.

As a general rule, there are three steps to follow in solving a problem with a computer:

  1. Determine the problem we want to solve.
  2. Develop the finite model of the problem.
  3. Based on the finite model, get an algorithm.

Determining the problem is not always as easy as it might seem. The omission of this step has often caused the loss of many hours of work, simply for not having previously defined the objectives to be achieved.

This work is usually carried out jointly by the user and the programmer, and none of the other steps should go ahead without it having been concluded.

The development of the finite model basically consists of collecting all the elements involved in the variations of the problem (called variables) and determining the mathematical or logical relationships among them.

Finally, the programmer must analyze the resulting model and work on the formation of the algorithm. This term defines a series of instructions, in a given sequence, necessary to describe the operations that lead to the solution of a problem.




Flow Diagrams.

An algorithm can be expressed in many ways; in computing, however, a graphical representation is often used because it allows better visualization of the sequence in which the instructions that make up the algorithm must be run.

The graphical representation of algorithms is known by the name of Flow Diagrams (flowcharts), and consists of a set of symbols connected to each other, representing the sequence and type of process to be carried out at each stage of solving a problem.

The symbols most commonly used to draw a flowchart are as follows:

[Figure: standard flowcharting symbols]

Below is an example of a very basic flowchart that defines an algorithm for a machine that separates and sorts two different coins and reports the number of coins of each type; a sketch of the same algorithm in code follows the diagram.

[Figure: flow diagram for the coin-sorting machine]
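
The same logic can be sketched in code. The following Python fragment is only an illustration of what the flowchart describes; the coin values, the function name, and the sample batch are assumptions made for this example, not part of any real machine.

    # Illustrative sketch (Python) of the coin-sorting algorithm in the flowchart.
    # The coin values (1 and 2) and the input batch are assumptions for the example.
    def sort_and_count_coins(coins, type_a=1, type_b=2):
        """Separate two coin types and count how many of each were received."""
        count_a = 0
        count_b = 0
        for coin in coins:            # take the next coin
            if coin == type_a:        # decision: is it a type-A coin?
                count_a += 1          # send it to container A and count it
            elif coin == type_b:
                count_b += 1          # send it to container B and count it
        return count_a, count_b       # result: number of coins of each type

    batch = [1, 2, 2, 1, 1, 2, 1]     # hypothetical batch of mixed coins
    a, b = sort_and_count_coins(batch)
    print("Type A coins:", a)         # Type A coins: 4
    print("Type B coins:", b)         # Type B coins: 3

Each part of the function mirrors a typical flowchart element: the loop repeats the cycle, the if is the decision, and the counters are the process steps.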




Something about Programming Languages


Programming Languages.

There is a single programming language that any computer can really understand and run: its own native binary machine code. This is the lowest-level language in which it is possible to write a program to communicate with a computer. All other programming languages are called higher level or lower level in proportion to how far away from or close to the machine's binary code they are.

Accordingly, the lowest-level language corresponds directly to machine code, so a low-level language instruction results in a single machine-language instruction. A high-level language instruction, however, usually translates into several machine-language instructions.

Low-level languages have the advantage that they can be written to take advantage of the peculiarities of the architecture of the central processing unit (CPU), which is the “brain” of any computer. So, a program written in a low-level language can be very efficient, making optimal use of the computer's memory and processing time. However, writing a low-level program requires a considerable amount of time, as well as a very clear understanding of the inner workings of the microprocessor. Therefore, low-level programming is typically used only for very small programs, or for segments of code that are very critical and must operate as efficiently as possible.

High-level languages allow much faster development of large programs. The final program executed by the computer is not as efficient, but the savings in programmer time usually outweigh the inefficiencies of the program. This is because the cost of writing a program is nearly constant per line of code, regardless of the language used. Thus, in a high-level language where every line of code is equivalent to 10 machine-language instructions, a program costs only about a tenth of what it would cost to write those 10 lines of code in machine language.




The first high-level programming languages were designed for specific kinds of tasks. Modern languages are mostly for general use. In any case, each language has its own characteristics, vocabulary, and syntax. The most important are the following:

FORTRAN.

It was one of the first programming languages. FORTRAN (an acronym for FORmula TRANslator) was designed to handle mathematical operations, originally on mainframe computers. Although it was very efficient in mathematical calculations, FORTRAN was unable to handle much text and could barely place titles and identifiers in its printed outputs.


 

COBOL.

Its name is an acronym for COmmon Business Oriented Language. In its conception, COBOL is almost the opposite of FORTRAN. COBOL was designed to facilitate text-heavy written output in business data processing applications, using English in its output reports. It was conceived to handle business textual data, and its math abilities are restricted to little more than operations with money and percentages.

PASCAL

It owes its name to Blaise Pascal, the French philosopher and physicist, and one of the fathers of the automation of calculation. PASCAL was designed specifically as a teaching language. The purpose was to help students properly learn the techniques and requirements of structured programming. It was originally designed to be used on any platform; that is, a PASCAL program could be compiled on any computer, and the result would run correctly on any other computer, even with a different or incompatible microprocessor type. The result was a relatively slow language, but it worked and fulfilled its mission while it was in fashion.

BASIC

Its name is the acronym for Beginner's All-purpose Symbolic Instruction Code, and it was the first interpreted language designed for general use. It was the most widely used programming language of its day, and it has since evolved into Visual Basic to work under the Windows environment.

FORTH

It is both a compiler and an interpreter, originally developed for managing operations in real time while allowing the user to control it and make changes quickly. The name FORTH came from the concept of a 4th-generation language, shortened because of a restriction that allowed file names of only up to 5 characters.

C

It originated as a so-called experimental language that was then improved, corrected, and expanded and was called B. This language was in turn improved, updated, and refined, and eventually was called C. The C language has proved to be very versatile and surprisingly powerful. It is a very simple language and is nevertheless capable of large-scale developments. The Unix operating system, which has been adapted to a wide range of platforms and has continued to gain popularity, is written in the C language.

C++

When the concept of objects and object-oriented programming was developed, the standard C language had no internal structure to handle it. However, C was, and still is, very useful, so it remained in use. Further development produced a kind of extended C language that was originally called “C with Classes,” or C-plus (C+). Subsequently, as the concepts of Object-Oriented Programming were developed further, C+ evolved into C++.

JAVA.

The constant search for a platform-independent language gave rise to Java, the most recent language designed with that objective. Any computer with a Java runtime environment can run a program written in this language, which makes it, so far, the universal language par excellence.

One of the most useful aspects of Java is that web browsers are now designed to be able to incorporate small Java applications, or “applets,” in Web pages. Other Java programs, called “servlets,” run on Web servers. This allows additional communication between the page and the server and provides a high degree of interactivity and dynamic page generation.

The disadvantage of using applets in this way is that they inherently run much more slowly than programs developed natively on the computer.

JAVASCRIPT.

In many respects, JavaScript resembles an interpreted version of Java. Its most common use is in Web pages, where it can be used to provide interactivity and dynamic responses. Some Web servers can also make use of JavaScript to generate dynamic pages.

PERL.

The Practical Extraction and Report Language, Perl, is very similar to the C language in many ways. However, it has a number of features that make it useful for a wide range of applications. Its most popular use is CGI (Common Gateway Interface) programming for Web application development. For example, when you fill out and submit a form via the Internet, it is quite likely that the program processing it is written in Perl.

However, that is not the only use of Perl, because it is also an excellent programming language that allows rapid development on a wide range of platforms.

 HTML.

The acronym stands for HyperText Markup Language, referring to the “markup language” used for the preparation of Web pages. It is a standard that serves as a reference for the software involved in the development of web pages; in its different versions, it defines a basic structure and a code (called HTML code) for defining the content of a web page, such as text, images, videos, and games, among others.

There are also a large number of other programming languages for specific applications, which we only mention here, such as PHP, LISP, ADA, Ruby, Python, and Haskell.





Turning High-Level Languages into Machine Language.

The programming languages we saw above are all considered high-level languages, and, as we mentioned, computers, being electrical equipment in the end, only understand the most basic language, the lowest level of zeros and ones, representing the passage or not of electric current through their millions of circuits.


To convert high-level languages down to the machine's low-level language, there are specific programming tools that do the job. Let's talk about them.

Assembly language.

Assembly language is nothing more than a symbolic representation of machine code, which also allows the symbolic designation of memory locations. Thus, an instruction to add the contents of a memory location to an internal register of the CPU, called the accumulator, can be written as a short mnemonic and a symbolic name instead of a string of binary digits (bits).

No matter how close assembly language is to machine code, the computer still cannot understand it directly. The assembly language program must be translated into machine code by a program called an assembler. The assembler recognizes the character strings that make up the symbolic names of the various machine operations and replaces them with the machine code required for each instruction.

At the same time, it also calculates the memory address corresponding to each symbolic location name and replaces the names with those addresses. The end result is a machine-language program that can run by itself at any time.

Once that point is reached, the assembler and the assembly language program are no longer necessary. To help distinguish between the “before” and “after” versions of the program, the original assembly language program is known as the source code, while the final machine-language program is designated the object code.

If an assembly language program needs to be changed or corrected, it is necessary to make the changes to the source code and reassemble it to create a new object program.

Compiler Language.

A compiler language is the high-level equivalent of assembly language. Each instruction in the compiler language can correspond to many machine-language instructions. Once the program has been written, a program called a compiler translates it into the equivalent machine code. Once the program is compiled, the resulting machine code is stored separately and can be run by itself at any time.

As is the case with assembly language programs, updating or correcting a compiled program requires that the original, or source, program be modified appropriately and then recompiled to form a new object, or machine-language, program.

In general, compiled machine code is less efficient than the code produced when using assembly language. This means that it runs a bit slower and uses more memory than the equivalent assembled program would require. Compensating for this drawback is the fact that it takes much less time to develop a program in a compiler language than it does to write the same program in assembly language.

Interpreter Language.

An interpreter language, like a compiler language, is considered high level. However, it works completely differently. The interpreter program resides in memory and runs the high-level program without previously translating it into machine code.

Using an interpreter program to directly run the user's program has advantages and disadvantages. The main advantage is that you can run the program to test its behavior, make some changes, and run it again directly. That is, there is no need to recompile, because there is never any new machine code. This greatly accelerates the development and testing process.

As a disadvantage, this arrangement requires that both the interpreter and the user's program reside in memory at the same time. In addition, since the interpreter can only scan one line of the user's program at a time, and must also run internal parts of itself in response, execution of an interpreted program is much slower than that of a compiled program.




Numbering Systems. Binary, Hexadecimal


Numbering Systems.

Humans use, as a reference for counting, a numbering system based on the number ten (10). This numbering system is known as the DECIMAL system, and it has its origin in the use of the fingers of the hands as the principal aid to counting. For numbers greater than ten, the decimal system, although a little erratic at the beginning (eleven, twelve, etc.), becomes regular later on, with the use of consistent groups (twenty, twenty-one, thirty, thirty-one, and so on).




In the case of computers, being electrical equipment in the end, counting is slightly different, since their limitations only allow them to distinguish two states of a component, such as a switch with its OFF and ON positions.

The conventional representation used for the interpretation of these conditions is to use the number 0 to symbolize the OFF condition and the number 1 for the ON condition.

The Binary Numbering System.

This numbering system, whose principle is based on the use of two digits, we will call the binary system, and from this moment on we must accept it as the only form of “direct” communication that any computer understands.

Let's now see how numerical interpretation is carried out by a computer using this elementary numbering system.

We had previously defined the memory of the computer as a series of boxes composed of groups of BITS (eight in our case), which we called BYTES. Each of these BITS can take either of the values 0 and 1, allowing us to form “words” such as:

                                                00011001

From the point of view of the binary numbering system, this word is known as a “string of binary digits”, and corresponds to a value in the DECIMAL numbering system.

The decimal equivalent of any binary number is obtained by applying the concept of numerical BASE. In general, for any numbering system, the BASE can be defined as the number that gives origin to the count. Thus, for the DECIMAL system we speak of BASE 10, the binary system uses BASE 2, and so on.

For example, the decimal number 743 can be represented as:

(7 × 10²) + (4 × 10¹) + (3 × 10⁰)
= 700 + 40 + 3
= 743

Similarly, a binary number such as the one represented by the string 00011001 can be converted to its decimal equivalent using the same principle:

(1 × 2⁴) + (1 × 2³) + (0 × 2²) + (0 × 2¹) + (1 × 2⁰)
= 16 + 8 + 0 + 0 + 1
= 25

As we can see, the binary and DECIMAL systems work similarly, except that the number of digits required to express a given number is greater when we do so with binary digits. Indeed, to express the number 25 with decimal digits we only need 2 of them, while expressing the same number with binary digits (11001) requires 5.
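
For readers who want to check this by machine, here is a minimal Python sketch of the same base-2 expansion. The function name is ours, chosen only for the example; Python's built-in int(s, 2) performs the same conversion.

    # Minimal sketch: convert a string of binary digits to its decimal value
    # by applying the base-2 expansion shown above.
    def binary_to_decimal(bits):
        value = 0
        for position, digit in enumerate(reversed(bits)):
            value += int(digit) * (2 ** position)   # digit times 2 raised to its position
        return value

    print(binary_to_decimal("00011001"))   # 25
    print(int("00011001", 2))              # 25, using the built-in conversion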

Obviously, although computers only understand and operate with binary numbers, their big advantage over human beings is the speed with which they perform enormous numbers of operations per second, even while working with those long strings of zeros and ones that are so difficult for us to understand, and even harder to memorize.

Because of this, people have devised different notation systems that make communication with computers easier. One of these systems is the HEXADECIMAL numbering system, which uses numerical base 16.

The following table shows the elements that make up the hexadecimal numbering system, as well as their correspondence with elements of the decimal and binary numbering systems.

HEX.    DEC.    BIN.
0       0       0000
1       1       0001
2       2       0010
3       3       0011
4       4       0100
5       5       0101
6       6       0110
7       7       0111
8       8       1000
9       9       1001
A       10      1010
B       11      1011
C       12      1100
D       13      1101
E       14      1110
F       15      1111

TABLE 1




Here is an example of numerical representation in the case of the hexadecimal number 2F:

2F = (2 × 16¹) + (F × 16⁰)
   = 32 + (15 × 1)
   = 32 + 15
   = 47 in decimal.

The decimal number 47 is represented as 101111 in binary, and it is clear that 2F is much easier to visualize and understand.

While it is true that hexadecimal numbering cannot be directly understood by the computer either, it is no less true that it makes the operator's conversion work easier, because of the peculiar relationship between the two systems, which consists of the following:

  1. The base of the binary system is 2
  2. The base of the hexadecimal system is 16
  3. The fourth power of 2 is 16.

Together with the fact that a byte can be divided into two groups of 4 bits, these properties allow the application of the following rules:

  • To convert hexadecimal to binary, replace each hexadecimal digit with the four corresponding binary bits (see Table 1).
  • To convert binary to hexadecimal, break the binary number into groups of four bits (from right to left) and then replace each group with the corresponding hexadecimal digit, as illustrated in the sketch below.
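
As a sketch of these two rules, the following Python fragment converts in both directions using groups of four bits. The helper names are ours, and the lookup dictionary is simply Table 1 written out in code.

    # Sketch of the two conversion rules, using Table 1 as a lookup table.
    HEX_TO_BIN = {
        "0": "0000", "1": "0001", "2": "0010", "3": "0011",
        "4": "0100", "5": "0101", "6": "0110", "7": "0111",
        "8": "1000", "9": "1001", "A": "1010", "B": "1011",
        "C": "1100", "D": "1101", "E": "1110", "F": "1111",
    }
    BIN_TO_HEX = {bits: digit for digit, bits in HEX_TO_BIN.items()}

    def hex_to_binary(hex_string):
        # Rule 1: replace each hexadecimal digit with its four binary bits.
        return "".join(HEX_TO_BIN[d] for d in hex_string.upper())

    def binary_to_hex(bit_string):
        # Rule 2: pad to a multiple of four bits, group from right to left,
        # and replace each group with its hexadecimal digit.
        padded = bit_string.zfill((len(bit_string) + 3) // 4 * 4)
        groups = [padded[i:i + 4] for i in range(0, len(padded), 4)]
        return "".join(BIN_TO_HEX[g] for g in groups)

    print(hex_to_binary("2F"))       # 00101111
    print(binary_to_hex("101111"))   # 2F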

In this way, and taking into account that in our case the reference unit is the byte (a group of eight bits), we find, “coincidentally,” that the greatest binary number that fits in a byte (11111111) coincides with the greatest number that can be expressed with two hexadecimal digits (FF), which is 255. With two bytes, the greatest possible number would be 1111111111111111 (sixteen ones), represented in hexadecimal as FFFF, which is 65,535, that is, 65,536 possible values counting zero.

As we can see, there is a curious relationship between these numbers that we wanted to illustrate. For example, when referring to the memory capacity of a computer, we had said that it is measured in kilobytes and their multiples, and that these are nothing more than groups of 1024 bytes. This means that if we are talking about a computer with 64 K, we are referring to a real capacity of 1024 × 64 = 65,536 bytes…
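
These relationships are easy to verify. The short Python sketch below only restates the numbers mentioned in the text.

    # Checking the "coincidences" described above.
    print(int("11111111", 2))   # 255   -> largest value that fits in one byte
    print(int("FF", 16))        # 255   -> the same value written with two hex digits
    print(int("1" * 16, 2))     # 65535 -> largest value that fits in two bytes
    print(int("FFFF", 16))      # 65535 -> the same value written with four hex digits
    print(64 * 1024)            # 65536 -> bytes in 64 K of memory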

All these “coincidences” have a logical explanation, but going more deeply into such explanations would take us too far away from the objective of this website, so we leave them for a later opportunity.




What is Software and How Does it Work?


Software… What is it?

Computers need, in order to work, a series of instructions that we previously defined as programs, which may belong to the system itself (hosted in ROM memory) or be entered by the user (housed in RAM). All of these programs that define the logical operation of the system are known as SOFTWARE, a term that expresses the non-tangible part of the system.




Computers, contrary to what many may think, and except in Artificial Intelligence applications, are unable to act on their own. Their way of solving problems or making decisions is only a reflection of the judgment of the person who prepares the programs the computer runs. These programs are known as user programs, and they are nothing more than sets of instructions fed into the computer to be run in an orderly fashion and in perfect sequence.

Data, or working information, is all the information generated by the user (or the program) that is used to perform calculations or logical analysis.

System programs are those responsible for coordinating the operation of the computer; they are defined by the manufacturer through hard-wired instructions and are therefore permanent.

The internal memory of the computer is made up of two types of memory: “read-only” memory, or ROM (Read Only Memory), and “random access” memory, or RAM (Random Access Memory).

The ROM is the resident memory that accommodates the permanent programs of the system; it cannot (usually) be altered or deleted, even when the computer is powered off. These programs are the so-called firmware, that is, indelible software hosted in the ROM.

The RAM is an area of memory for the general use of the user, where information, be it data or programs, may be stored or retrieved as required. Originally, all such memories were of a volatile type, which meant that the contents of RAM were lost whenever we powered off the computer.


The emergence of the so-called non-volatile or permanent memories once again revolutionized the world of personal computers. Of course, it also made possible the appearance of portable computing devices, as well as more diverse storage media and digital information management.

Normally, memory can be pictured as a group of boxes (also called locations), numbered from 0 onwards. The number of a location is known as its “address,” and its main characteristic is that it is unique, and always the same, inside each computer. Each memory location stores a “word,” which in turn is divided into BITS.

A BIT is defined as the basic unit of binary information, and can have only the values 0 and 1. Its name is derived from the contraction of the words “BInary digiT.”

 

 


The number of bits that make up a “word” depends on the architecture of the microprocessor. Initially the standard was 8 bits, but not much later capacities grew to 16, 32, and 64 bits, nowadays reaching sizes of up to 256 bits. For our discussion we will use a size of 8 bits, which receives the name of “BYTE.”



Other terms we believe we should clarify are the widely used symbols “K,” “M,” “G,” and “T” when referring to memory capacities. These quantitative representations of a computer's memory are nothing more than a borrowing of the prefixes “Kilo,” “Mega,” “Giga,” and “Tera” from the metric system, used to represent groups of 1,000; 1,000,000; 1,000,000,000; and 1,000,000,000,000 units.

In the case we are using for this explanation, the prefix “Kilo” in the term kilobyte does not mean exactly 1,000 bytes but 1,024. This figure, as we will see later, is the result of raising the number 2 to the tenth power (2¹⁰). This means that when we refer to the memory capacity of a computer using the term 16K, what we mean is that this computer is able to store 16,384 “words” of eight bits each, or, what is the same, 16,384 bytes.
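
As a minimal illustration of this arithmetic, the following Python lines reproduce the figures just mentioned.

    # A "kilobyte" in this context is 2 to the tenth power, not 1,000.
    kilobyte = 2 ** 10
    print(kilobyte)         # 1024
    print(16 * kilobyte)    # 16384 -> eight-bit "words" (bytes) in a 16K memory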


