One of the axioms of cyber security is that although it is extremely important to try to prevent intrusions into one’s systems and databases, it is essential that intrusions be detected if they do occur.
An intruder who gains control of a substation computer can modify the computer code or insert a new program. The new software can be programmed to quietly gather data (possibly including the log-on passwords of legitimate users) and send the data to the intruder at a later time.
It
can be programmed to operate power system devices at some future time
or upon the recognition of a future event. It can set up a mechanism (sometimes called a ‘‘backdoor’’) that will allow the intruder to easily gain access at a future time.
If no obvious damage was done at the time of the intrusion, it can be very difficult to detect that the software has been modified.
For example, if the goal of the intrusion
was to gain unauthorized access to utility data, the fact that another
party is reading confidential data may never be noticed. Even when the intrusion does result in damage (e.g., intentionally opening a circuit breaker on a critical circuit), it may not be at all obvious that the false operation was due to a security breach rather than some other failure (e.g., a voltage transient, a relay failure, or a software bug).
For these reasons, it is important to strive to detect intrusions when they occur. To this end, a number of IT security system manufacturers have developed intrusion detection systems (IDS). These systems are designed to recognize intrusions based on a variety of factors, including primarily:
Communications attempted from unauthorized or unusual addresses and
An unusual pattern of activity.
They generate logs of suspicious events.
The owners of the systems then have to inspect the logs manually and
determine which represent true intrusions and which are false alarms.
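To make the two criteria above concrete, here is a minimal sketch, in Java, of the kind of rule an IDS might apply. It is not any vendor's product; the allow-list, the log format, and the "normal hours" window are all made-up illustrative values.

import java.time.LocalTime;
import java.util.List;
import java.util.Set;

/** Minimal sketch of two IDS heuristics: unknown source addresses and off-hours activity. */
public class NaiveIds {

    // Hypothetical allow-list of addresses expected to talk to the substation gateway.
    private static final Set<String> KNOWN_SOURCES = Set.of("192.168.10.5", "192.168.10.6");

    // Hypothetical "normal" operating window; anything outside it is treated as unusual.
    private static final LocalTime WORK_START = LocalTime.of(6, 0);
    private static final LocalTime WORK_END = LocalTime.of(20, 0);

    /** A single connection attempt taken from a (hypothetical) gateway log. */
    record LogEvent(String sourceIp, LocalTime time, String action) {}

    /** Returns true if the event should be written to the suspicious-event log. */
    static boolean isSuspicious(LogEvent e) {
        boolean unknownSource = !KNOWN_SOURCES.contains(e.sourceIp());
        boolean offHours = e.time().isBefore(WORK_START) || e.time().isAfter(WORK_END);
        return unknownSource || offHours;
    }

    public static void main(String[] args) {
        List<LogEvent> events = List.of(
                new LogEvent("192.168.10.5", LocalTime.of(9, 30), "read points"),
                new LogEvent("203.0.113.44", LocalTime.of(10, 15), "login attempt"),
                new LogEvent("192.168.10.6", LocalTime.of(2, 45), "open breaker"));

        // Everything flagged here still has to be reviewed by a person,
        // which is exactly the manual triage step described above.
        events.stream()
              .filter(NaiveIds::isSuspicious)
              .forEach(e -> System.out.println("SUSPICIOUS: " + e));
    }
}

Real intrusion detection systems use far richer features and statistical baselines than this, which is exactly why tuning them to avoid a flood of false alarms matters so much.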
Unfortunately, there is no easy definition of what kinds of activity should be classified as unusual and investigated further. To make the situation more difficult, hackers have learned to disguise their network probes so they do not arouse suspicion.
In
addition, it should be recognized that there is as much a danger of
having too many events flagged as suspicious as having too few.
Users will soon learn to ignore the output of an IDS that announces too many spurious events. (There are, however, outside organizations that offer the service of studying the output of IDSs and reporting the results to the owner. They will
also help the system owner to tune the parameters of the IDS and to
incorporate stronger protective features in the network to be
safeguarded.)
Making matters more difficult, most IDSs have been developed for corporate networks with publicly accessible internet services. More research is necessary to investigate what would constitute unusual activity in a SCADA/SA environment.
In
general, SA and other control systems do not have logging functions to
identify who is attempting to obtain access to these systems. Efforts
are underway in the commercial arena and with the National Laboratories
to develop intrusion detection capabilities for control systems.
Summary
In summary, the art of detecting intrusions
into substation control and diagnostic systems is still in its infancy.
Until dependable automatic tools are developed, system owners will have
to place their major efforts in two areas:
The Cisco Configuration Professional (CCP) application is a GUI-based management tool for the Integrated Services Routers (ISRs); it takes the place of the former Security Device Manager (SDM) application that existed on previous iterations of the Cisco router product lines. It makes the configuration and troubleshooting of an ISR easier for those who are not familiar or comfortable with the Cisco IOS CLI.
Let’s install CCP using a Windows-based operating system and the following steps.
The first thing you must do is download a copy of CCP from the Cisco website
(CCO login required). Make sure to download CCP and not CCP express;
CCP express offers a limited set of options compared to CCP, and
installs on the ISR device itself.
Installing CCP
As shown in Figure 1, the filename for CCP begins with cisco-config-pro-k9-pkg and a specific version number; as of this writing the most up-to-date version of CCP is 2.6, but this will obviously change over time. Figure 1
Once you download CCP to the local machine, to begin installation
just double-click or press enter while the file is selected. Once done,
the installer will launch and eventually bring up a window that looks
like Figure 2.
Simply click ‘Next’ to begin. Figure 2
The next window will display the EULA for CCP (Figure 3). Take a glance at it and click to accept the terms of the license. Figure 3
The next window will bring up the installation location option
(Figure 4). By default CCP installs in the ‘Cisco Systems’ folder under
‘Program Files’ (or Program Files (x86)); either use the default or
click ‘change’ to change it to meet the specific requirements for the
local machine and click ‘Next’. Figure 4
Once at the next window (shown in Figure 5) simply click the ‘Install’ button to begin installation. Figure 5
The installation will now progress; once it is complete, the window shown in Figure 6 will display. From this window, select whether an icon for CCP should be installed on the desktop and click ‘Next’. Figure 6
The installer will then run a quick check of the requirements to run
CCP. Once this is done it will display the results as shown in Figure 7;
make sure that these requirements are met before running CCP. Once the
requirement results have been read, click ‘Next’. Figure 7
Finally, the CCP installation is complete, as shown in Figure 8. If all the requirements for running CCP are met, it is now possible to run CCP directly from the installer by selecting the ‘Run Cisco Configuration Professional’ check box. Select the appropriate options and click ‘Finish’. Figure 8
Getting started with Cisco CCP
Before going any further with the CCP GUI, the device being managed
must be configured with a few commands from the Cisco IOS CLI. Figure 9
router(config)#ip http server (the insecure method)
router(config)#ip http secure-server (the secure method)
router(config)#ip http authentication local
router(config)#line vty 0 4
router(config-line)#login local
router(config-line)#transport input telnet (the insecure method)
router(config-line)#transport input telnet ssh (both the insecure and secure methods)
router(config-line)#transport input ssh (the secure method)
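Note that ip http authentication local and login local authenticate against the router’s local user database, so CCP also needs a privilege-15 local account to log in with. If one does not already exist, a command along these lines is typically added as well (the username and password here are placeholders):
router(config)#username admin privilege 15 secret <strong-password>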
If the device has already been configured at the CLI then CCP can
launch immediately. During the first launch a Windows Security Alert
may display asking if it should add an exception to the Windows firewall
(Figure 10). Choose the appropriate options for your environment. Figure 10
As it’s launching, Java may prompt with a warning asking if the CCP
application is allowed to run (Figure 11); click ‘Run’ to continue. Figure 11
Now the ‘Manage Community’ window will display, as shown in Figure 12. It is at this point that the target devices to be managed are entered with their IP address/hostname and username/password credentials. If the insecure options were used for the device’s initial configuration through the CLI, don’t select the ‘Connect Securely’ checkbox; if the secure options were used, then select the ‘Connect Securely’ checkbox. When all the devices intended to be managed are entered, click ‘OK’. Figure 12
The next step requires a discovery process. During this process, CCP will interrogate the devices and make sure each device is accessible and supported. Select all the devices listed and click ‘Discover’. Figure 13
If the secure methods were used, a Security Certificate Alert will be displayed; this is because, by default, a self-signed certificate is created on the device and must be accepted by the local managing device (the computer running this installation). Figure 14
Should there be a problem with the discovery process, you may see a ‘Discovery failed’ message. If this happens, check to make sure that all the required Cisco IOS CLI configuration steps were completed. There is also a ‘Discovery Details’ button which you can click to check the specific problems reported. Figure 15
If all goes well, a ‘Discovered’ status will be given. Once this occurs, a specific device can be selected from the ‘Select Community Member’ list in the upper left of the window. Figure 16
Once a member is selected the Configure and Monitor options shown in
the top left will also now be accessible. Figure 17 shows some of the
menu options enabled when the Configure option is selected. Figure 17
From this point the user is able to configure whatever options are supported by the device and the supported license package.
Not too complicated
The CCP installation is not overly complex and can be easily completed by even the most novice Windows and/or Cisco user. Hopefully this article’s walkthrough will make the process easier to follow and get CCP up and running as quickly as possible.
His curious discovery, 200 years ago, foresaw our expanding universe.
By Kitty Ferguson
Illustration by Miko Maciaszek
March 20, 2014
The lights in the sky above us—the sun,
the moon, and the panoply of countless stars—have surely been a source
of wonder since long before recorded history. Ingenious efforts to
measure distances to them began in earnest in the 3rd and 4th centuries
B.C., and astronomers and astrophysicists today, with high-powered
telescopes and computers, still ponder the universe and attempt to tease
out answers to millennia-old questions.
But one of the most
significant discoveries in this inquiry was not made with a high-powered
telescope or a computer, or by anyone peering at the sky. Two hundred
years ago, Joseph von Fraunhofer, a Bavarian glassmaker and researcher,
experimented in his laboratory with simple equipment and detected dark
lines in the spectrum of sunlight. He had no way of knowing that this
curious discovery would allow future scientists to calculate the
distances of stars and precipitate one of the most momentous advances in
the history of all science—the recognition that the universe is
expanding.
Joseph
Fraunhofer was born on March 6, 1787, in Straubing, in lower Bavaria. On
both his father’s and his mother’s sides, his forebears had had links
to glass production for generations. Joseph, the youngest of 11
children, likely worked in his father’s shop. When Joseph was 10, his
mother died; his father died a year or two later, and Joseph’s guardians
sent him to Munich to apprentice with the glassmaker Philipp Anton
Weichselberger, who produced mirrors and decorative glass for the court.
This should have been an enviable apprenticeship, but Weichselberger
was a harsh master who gave his apprentices menial tasks and taught them
little about glassmaking. He prevented Joseph from reading the science
books he loved by refusing him a reading lamp at night and forbade his
attending the Sunday classes that offered Munich apprentices some
education outside the trade.
Joseph endured two years of this
misery, but then his story took a turn that could have come from a
Charles Dickens novel. Weichselberger’s house collapsed, burying Joseph
underneath. His rescue was dangerous and took several hours, giving
prince-elector Maximilian IV time to arrive on the scene. The accident
made Joseph the city’s hero, and a still-existing woodcut in Munich’s
Deutsches Museum shows Maximilian, arms outspread, welcoming the boy
back to life. Maximilian invited Joseph to his castle and put him in the
care of his advisor, industrialist Joseph von Utzschneider.
Utzschneider, realizing that this lucky young man was bright and had a
thirst for knowledge, supplied Joseph with books on mathematics and
optics.
Maximilian gave Joseph a generous gift that was
sufficient to buy him out of his apprenticeship and purchase an optical
grinding machine. Then Joseph set up a small business engraving visiting
cards, which failed to supply him with a living. Without a source of
income, and perhaps realizing that an apprentice was not wise to depart
from the established route into his craft, he returned to
Weichselberger, working for him during the week and for an optician,
Joseph Niggl, on Sundays. Weichselberger still did not allow him his
reading lamp.
Eventually, Utzschneider took things in hand, saw
to it that the boy was supplied with books and the time and light to
read them, and arranged for Ulrich Schiegg, a Benedictine pastor with
considerable scientific interest and education, to mentor him. When
Utzschneider judged that Joseph was sufficiently prepared, he recruited
him to work in Utzschneider’s own Optical Institute in Benediktbeurern,
where Joseph assisted in the manufacture of telescope lenses and
surveying instruments. When Fraunhofer was still in his early 20s, Utzschneider put him in total charge of the glassworks at the Institute.
The
improvement of lenses for telescopes and surveying instruments was a
major goal of the Institute, and it was not long after his arrival that
Fraunhofer began to focus on more basic research that underlay this
effort, research having to do with the nature of light and its
refraction. In 1807, at age 20, he submitted his first major scientific
paper.
In 1814, at age 27, Fraunhofer was working in his
laboratory to make more accurate measurements of the manner in which
different types and configurations of glass refract light. The fact that
a prism transforms ordinary white light into a rainbow of colors had
been known since antiquity. But the assumption had been that the colors
are somehow in the prism. Isaac Newton, in the 1660s, had shown that
white light is composed of colors that spread out in an ordered
sequence—the spectrum—red, orange, yellow, green, blue, indigo, and
violet. Different wavelengths of light are responsible for the different
colors. The longer the wavelengths, the further toward the “red” end of
the spectrum. The shorter the wavelengths, the further toward the
violet or “blue” end.
Though modern science finds minute
variations in the speed of light in a vacuum or empty space, for most
purposes it’s safe to assume that the speed in such situations does not
vary. Not so for the speed of light moving from one medium to another
(air to water, for example). The “refractive index” of a medium
indicates how the speed of light moving through that medium differs from
the speed of light as it moves through another.1
When
a beam of white light passes through a prism, the colors in the light
do not all bend equally, because the refractive index of a material (in
this case, whatever the prism is made of) differs slightly for different
wavelengths of light. The shorter the wavelength, the greater the
strength of the refraction. As the white light splits into visible
colors, red light bends least; violet light, most.
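The article stays qualitative, but the effect can be expressed with Snell's law, n1*sin(theta1) = n2*sin(theta2): because the glass has a slightly larger refractive index for shorter wavelengths, violet light ends up closer to the normal, meaning it is bent more. The indices below are hypothetical round numbers for a generic glass, chosen only to illustrate the trend.

/** Illustration of dispersion: the same incoming ray refracts by different amounts
 *  because the glass has a slightly different refractive index for each wavelength. */
public class Dispersion {

    /** Snell's law: n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees. */
    static double refractedAngle(double n1, double theta1Deg, double n2) {
        double theta1 = Math.toRadians(theta1Deg);
        return Math.toDegrees(Math.asin(n1 * Math.sin(theta1) / n2));
    }

    public static void main(String[] args) {
        double incidence = 45.0;      // angle of the incoming ray, in air (n ~ 1.0)
        double nAir = 1.0;
        double nGlassRed = 1.51;      // hypothetical index for red light
        double nGlassViolet = 1.53;   // hypothetical, slightly larger index for violet light

        System.out.printf("red:    %.2f degrees%n", refractedAngle(nAir, incidence, nGlassRed));
        System.out.printf("violet: %.2f degrees%n", refractedAngle(nAir, incidence, nGlassViolet));
        // The violet ray ends up at a smaller angle to the normal, i.e. it is bent more,
        // which is why the colors fan out into a spectrum.
    }
}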
One obstacle
Fraunhofer and other researchers of his time faced was that the colors
in the spectrum are not sharply separated from one another. Looking
closely at the spectrum produced by light emerging from a prism, a
researcher cannot judge precisely where red changes to yellow, for
example. The colors blend off one into the next. Experiment after
experiment proved unsuccessful in solving this problem, but among
Fraunhofer’s attempts there was one result that particularly intrigued
him.
Using as his light source a flame made by burning alcohol
and sulfur, he saw that when this light passed through his prism, the
result was a clearly defined bright line in the orange region of the
spectrum. His curiosity aroused, Fraunhofer repeated the experiment
using the sun as his source of light, to find whether the spectrum would
show similar lines. Newton had studied the spectrum of light by
allowing sunlight to enter through a small round hole in a shutter, pass
through a prism, and fall on a screen. For Newton’s round hole in the
shutter, Fraunhofer substituted a narrow slit, and for Newton’s screen
he substituted a surveying instrument designed to measure angles, known
as a theodolite telescope.
As he reported, “Looking in this
spectrum for the bright line that I had found in a spectrum of
artificial light, I discovered instead an infinite number of vertical
lines, of different thicknesses. These are darker than the rest of the
spectrum, some of them entirely black.”2
The lines remained the same when he adjusted the window-shutter slit or
made various adjustments to the spacing of his equipment, ruling out
the possibility that the lines were a product of his experimental
apparatus. They were a property of solar light itself.
In
groundbreaking papers, Fraunhofer announced his discovery that the
spectrum of light from the sun is interrupted by many dark lines, and
that these lines are present in all sunlight, both direct and reflected
from other objects on Earth or from the moon and the planets. He labeled
the ten most prominent lines in the solar spectrum and eventually
reported that he had found 574 lines.
Continuing to investigate,
Fraunhofer detected dark lines also appearing in the spectra of several
bright stars, but in slightly different arrangements. He ruled out the
possibility that the lines were produced as the light passes through the
Earth’s atmosphere. If that were the case they would not appear in
different arrangements. He concluded that the lines originate in the
nature of the stars and sun and carry information about the source of
light, regardless of how far away that source is. Fraunhofer did not
know what that information would be, how the lines would serve the
future, or that “Fraunhofer lines” would become a household term in
science.
Fraunhofer was a busy and effective entrepreneur, and
under his leadership the Institute became a leading manufacturer of
telescopes. He wrote in his memoirs that, “In making the experiments… I
have considered principally their relations to practical optics. My
leisure did not permit me to make any [other experiments] or to extend
them farther. The path that I have taken… has furnished interesting
results in physical optics, and it is therefore greatly hoped that
skillful investigators of nature would condescend to give them some
attention.” They certainly would!
Yet
in his own lifetime, Fraunhofer failed to receive as much recognition
as he deserved from his peers. Eminent researchers such as Hans
Christian Ørsted and John Herschel visited him at the Institute, but
others regarded him as a mere artisan, or were offended by the excessive
secrecy practiced at the Institute to protect its monopoly.
Bavaria
eventually chose to celebrate her native son. In 1821, after heated
debate over his complete lack of academic training, the Royal Bavarian
Academy of Sciences appointed him “extraordinary visiting member.” Two
years later he became curator of their physics collection. In 1822, the
University of Erlangen awarded the self-schooled Fraunhofer an honorary
doctorate. In 1824, Fraunhofer became von Fraunhofer when King
Maximilian I Joseph dubbed him a Knight of the Order of Civil Service of
the Bavarian Crown. The city of Munich marked the occasion by giving
him relief from paying city taxes.
Portraits depict von
Fraunhofer as a well-appointed, lively man, but he was always somewhat
frail. His work in the glass furnaces with poisonous lead oxide probably
contributed to his death, in June 1826, from “lung tuberculosis.” He
was 39.
Utzschneider, evidently thinking about Fraunhofer’s work
with telescopes at the Institute, eulogized him with the words “He
brought us closer to the stars.” He might more accurately have said that
his young friend had given us an essential leg-up on the journey to
find how astoundingly far away the stars are, for von Fraunhofer had indeed found the hidden code in starlight.
Until
the beginning of the 19th century, the chemical and physical make-up of
stars had appeared to be unobtainable knowledge. However, in
mid-century, there began to be serious challenges to that assumption
when researchers such as Anders Ångström, Léon Foucault, and Sir George
Stokes recognized that a pair of the lines Fraunhofer had detected in
the sun’s spectrum were the same wavelength as a pair of lines seen in
the laboratory in the spectrum of sodium. Clearly the sun must contain
sodium.
In the late 1850s, a young pair of researchers—physicist
Gustav Kirchhoff and chemist Robert Bunsen (of the Bunsen
burner)—confirmed that the lines Fraunhofer had discovered are
signatures of different chemical elements in the sun’s atmosphere.
William Huggins in 1863 followed up on their work and on Fraunhofer’s
study of star spectra and recognized that elements present on Earth and
in the sun are also present in stars. As Huggins wrote, “Within this unraveled starlight exists a strange cryptography. In the hands of an astronomer, a prism has now become more potent in revealing the unknown than even was said to be ‘Agrippa’s magic glass.’” By looking at the
pattern of Fraunhofer’s lines and noting where they occur within the
spectrum, it is possible to discern the chemical composition of a star.
Underlying
this picture, we now better understand that nuclear reactions in the
central region of a star generate energy, mostly in the form of photons,
that travels outward toward the exterior of the star. On the journey
through some layers of the star, highly ionized atoms that make up the
star’s fluid matter absorb and re-emit the photons. The radiation
eventually flows into interstellar space, preserving the image of the
last layer in which that activity took place, with some wavelengths of
the light now missing from that image. The missing wavelengths (in
effect, missing colors) show up as black lines in the spectrum, called
“absorption” lines. Each of the lines represents a particular element
and the strength of a line is related to the abundance of that element.
The size and shape of a line is related to the temperature, pressure,
and turbulent motion in the fluid matter of the star.
The process
of using Fraunhofer lines to help sort stars into categories began in
the 1860s when Father Angelo Secchi, in Rome at the Observatory of the
Roman College, now the Vatican Observatory, divided stars into types
based on the relative prominence and width of their spectral lines.
Until the late 18th century, researchers had thought that it might be
possible to calculate the distances to stars by comparing how bright
they appear from Earth. The idea had been based on the knowledge that
the apparent brightness of a light (how bright it appears to you)
decreases with distance in a mathematically dependable way summed up in
Isaac Newton’s inverse square law.3
If you have two identical 100-watt light bulbs and place one twice as
far from you as the other, the farther bulb will appear to be only a
fourth as bright as the nearer. Unfortunately, calculations like this hadn’t helped for stars, for stars are not all of equal “wattage.” Their
“absolute magnitudes” (close-up or “intrinsic” brightnesses) vary
enormously. The hope remained, however, that if stars belong to
different categories, the knowledge of those categories might help us
know their absolute magnitudes.
The
sorting became more complicated when Edward C. Pickering and colleagues
at the Harvard College Observatory began a process in which spectra
were focused on a photographic plate. As research continued, it turned
out that the overwhelming majority of stars can be placed in a very few
categories, suggesting that the range of compositions of stars is
rather small. In the 1920s, Cecilia Payne, in her doctoral dissertation
at Harvard, established that even in this small range of different
spectral patterns, the differences we observe are a result of the
temperatures of the stars, not because their compositions differ very
greatly. With a more sophisticated understanding of atomic structure and
the causes of the lines, stars could be meaningfully classified
according to surface temperature.
The trick in calculating the
distances to stars was to find an independent measure of their absolute
magnitudes. Today a chart known as the Hertzsprung-Russell diagram
provides that. If you know a star’s spectral type (from the study of its
spectral lines), allowing for certain assumptions, you can read the
star’s absolute magnitude off the diagram. Knowing the star’s absolute
magnitude, you can calculate its distance by measuring its apparent
magnitude and using Newton’s inverse-square law.
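The article doesn't spell out the arithmetic, but the standard way to write the inverse-square relation on the astronomers' magnitude scale is the distance modulus, m - M = 5 * log10(d / 10 parsecs). A small sketch, using made-up magnitudes for a hypothetical star:

/** Sketch of the distance calculation described above: read the absolute magnitude M
 *  off the Hertzsprung-Russell diagram, measure the apparent magnitude m, and apply
 *  the distance modulus m - M = 5 * log10(d / 10 pc), which is the inverse-square law
 *  rewritten on the magnitude scale. */
public class StarDistance {

    /** Distance in parsecs from apparent magnitude m and absolute magnitude M. */
    static double distanceParsecs(double apparentMag, double absoluteMag) {
        return 10.0 * Math.pow(10.0, (apparentMag - absoluteMag) / 5.0);
    }

    public static void main(String[] args) {
        // Hypothetical star: looks like magnitude 8.0 from Earth; its spectral type
        // places it on the H-R diagram at an absolute magnitude of 3.0.
        double m = 8.0;
        double M = 3.0;
        System.out.printf("distance ~ %.0f parsecs%n", distanceParsecs(m, M));
        // m - M = 5 means the star is 10 times farther away than 10 pc, i.e. about 100 pc,
        // and by the inverse-square law it appears 10 * 10 = 100 times fainter than it
        // would at 10 pc (a difference of 5 magnitudes is exactly a factor of 100 in brightness).
    }
}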
The most
dramatic role that Fraunhofer lines played in the 20th century was in
the discovery that the universe is expanding. If a light source is
moving toward us, light waves coming from it are squashed together. The
lines in its spectrum are shifted toward the blue end (“blue-shifted”).
If the source is moving away, they are stretched out. The lines in the
spectrum are shifted toward the red end (“red-shifted”). In the late
1920s, Edwin Hubble and Milton Humason, studying such shifts, discovered
that except for galaxies clustered close to our own Milky Way galaxy,
every galaxy in the universe appears to be receding from Earth. In fact,
on the large scale, every galaxy is receding from every other. The
amount of the shift of the lines in its spectra is an indicator of the
speed at which a galaxy is approaching or receding.
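Quantitatively, the shift is usually expressed as z = (observed wavelength - rest wavelength) / rest wavelength, and for speeds well below the speed of light the recession velocity is roughly v = c * z. A small sketch, using the hydrogen-alpha line and a made-up observed wavelength:

/** Sketch of how a shift in a Fraunhofer line translates into a recession velocity.
 *  Uses the non-relativistic approximation v ~ c * z, valid only for z much less than 1. */
public class Redshift {

    static final double SPEED_OF_LIGHT_KM_S = 299_792.458;

    /** Redshift z from the rest-frame and observed wavelengths of the same spectral line. */
    static double redshift(double restWavelengthNm, double observedWavelengthNm) {
        return (observedWavelengthNm - restWavelengthNm) / restWavelengthNm;
    }

    public static void main(String[] args) {
        // The hydrogen-alpha line sits at about 656.3 nm in the laboratory.
        double rest = 656.3;
        // Hypothetical measurement: the same line in a galaxy's spectrum appears at 663.0 nm.
        double observed = 663.0;

        double z = redshift(rest, observed);
        double velocityKmS = SPEED_OF_LIGHT_KM_S * z;   // positive -> moving away (red-shifted)

        System.out.printf("z = %.4f, receding at roughly %.0f km/s%n", z, velocityKmS);
    }
}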
The discovery
that the farther away galaxies are, the faster they are receding was
convincing evidence that the universe is expanding. As Caleb Scharf,
Director of Columbia University Astrobiology Center, puts it, “When
[Fraunhofer] first split sunlight finely enough to see its complex
spectrum he was laying the groundwork for scientists like Edwin Hubble
who split the light of distant galaxies and realized that the cosmos is a
dynamic beast.”
The lenses and telescopes von Fraunhofer
designed and built 200 years ago were equal or superior to any others
produced at the time. His inventions and innovations made them easier to
use and more effective. These practical accomplishments were not
incidental to, nor merely a distraction from, his experimental work.
They were essential to its success. Seldom have technological and
theoretical genius been so well paired, nor that pairing more essential
for the future of knowledge. He gave us a tool to measure the distances
to the stars and nebulae—a crucial rung on the ladder to modern
measurements of the size of the universe.
Kitty Ferguson is the author of nine books of popular science, including Measuring the Universe, and most recently, a biography of Stephen Hawking.
References
Aller, Lawrence H. Atoms, Stars and Nebulae. Cambridge University Press, 3rd edition (1991).
Danielson, D. The Book of the Cosmos: Imagining the Universe from Heraclitus to Hawking. Perseus Publishing (2000).
Jackson, M. Spectrum of Belief: Joseph von Fraunhofer and the Craft of Precision Optics. The MIT Press (2000).
Wolfgang, J. Fraunhofer in Benediktbeuern: Glassworks and Workshop. Burton, Van Iersel & Whitney GmbH (2008).
Scientists report that they have taken a step closer to creating a
“benchtop human” on which to carry out lab and toxicology tests. Homo
minutus, as it is named, is not a real person but rather an
interconnected human organ construct.
The latest advance is the successful development and analysis of a
constructed human liver that responds to toxic chemical exposure. John
Wikswo, Ph.D., professor and director of the Vanderbilt Institute for
Integrative Biosystems Research and Education (VIIBRE) at Vanderbilt University, presented the results at this week's Society of Toxicology meeting in Phoenix.
Dr. Wikswo said the achievement is the first result from a five-year,
$19 million multi-institutional effort led by himself and Rashi Iyer,
Ph.D., senior scientist at Los Alamos National Laboratory (LANL). The
project is developing four interconnected human organ constructs—liver,
heart, lung and kidney—that are based on a miniaturized platform
nicknamed ATHENA (Advanced Tissue-Engineered Human Ectypal Network
Analyzer).
The project is supported by the Defense Threat Reduction Agency.
Similar programs to create smaller, so-called organs-on-chips are
underway at the Defense Advanced Research Projects Agency and the
National Institutes of Health.
"The original impetus for this research comes from the problems we are
having in developing new drugs," explained Dr. Wikswo. "A number of
promising new drugs that looked good in conventional cell culture and
animal trials have failed when they were tested in humans, many due to
toxic effects. That represents more than $1 billion in effort down the
drain. Our current process of testing first in cell lines on plastic and
then in mice, rats, and other animals simply isn't working."
Researchers and clinicians around the world have been working to
develop more relevant and advanced laboratory tests for drug efficacy
and toxicity: small bioreactors that can form human organ structures and are equipped with sensors to monitor organ health.
Ultimately, the goal is to connect the individual organ modules
chemically in a fashion that mimics the way the organs are connected in
the body, via a blood surrogate. The ATHENA researchers hope that this
homo minutus, with its ability to simulate the spatial and functional
complexity of human organs, will prove to be a more accurate way of
screening new drugs for potency and potential side-effects than current
methods.
Our home entertainment systems and mobile devices are all converging in more ways than one. While gadgets like the Google Chromecast aim to bring the mobile platform to TVs via an addon, Philips is taking Android and putting it right in the very heart of its 8000 series of smart TVs.
Smart TVs that run Android aren't actually that new, but Philips is
advertising the 8800 series, particularly the 8809, to be the first one
with an Ultra HD display. That's a resolution of 3840x2160 pixels, all crammed into a large 55-inch screen. Those who don't need quite that much can opt for the 8109 and 8209, both of which come with only a 1920x1080 resolution, in choices of 48 or 55 inches. You also get
Philips' Ambilight technology, which projects colors behind and around
the TV to match the display, the mood, or even your room.
Aside from the display, the highlight of these TV sets is, of course,
Android. With access to the entire gamut of apps and services from
Google Play, as well as Google Chrome browser, users will not run out of
things to do or play. The quad-core CPU on the 8109 and 8209 and the
hexa-core processor on the 8809 ensure a smooth gaming experience. Add
to that Philips' own Smart TV ecosystem and you've got the makings of
the ultimate entertainment appliance. But by themselves, the 8809 and its
smaller siblings are Smart TVs in their own right, offering features
such as gesture control, voice recognition, remote control via
smartphones or tablets, screen mirroring, recording, and even dual
channel display.
Philips has not yet revealed exact launch dates and pricing details
for these TV sets powered by Android 4.2 Jelly Bean. The manufacturer
will be initially targeting European and Russian markets by the second
quarter of 2014. US availability has not yet been announced.
In our previous Android Tutorials,
we have discussed quite a few concepts of Android development. However,
while browsing through the articles, I discovered that we have not had a
proper discussion about Android Architecture.
Because it is one of the most elementary concepts of Android development, I decided to back up a little, and take a quick walk through the Android Architecture. If you wish to revise more basic concepts of Android, you can attend this free webinar.
Android Architecture: Layers in the Android Stack
The Android stack, as the folks over at Google call it, has a number of layers, and each layer groups together several programs. In this tutorial I’ll walk you through the various layers in Android stack and the functions they are responsible for.
Following are the different layers in the Android stack:
Linux Kernel Layer
Native Layer
Application Framework Layer
Applications Layer
Kernel Layer
At the bottom of the Android stack is the Linux Kernel. It never interacts with users or developers directly, but it is at the heart of the whole system. Its importance stems from the fact that it
provides the following functions in the Android system:
Hardware Abstraction
Memory Management Programs
Security Settings
Power Management Software
Other Hardware Drivers (Drivers are programs that control hardware devices.)
Support for Shared Libraries
Network Stack
With the evolution of Android, the Linux kernel versions it runs on have evolved too.
The Android system uses a binder framework for its Inter-Process Communication (IPC) mechanism. The binder framework was originally developed as OpenBinder and was used for IPC in BeOS.
Native Libraries Layer
The next layer in the Android architecture includes Android’s native libraries.
Libraries carry a set of instructions to guide the device in handling
different types of data. For instance, the playback and recording of
various audio and video formats is guided by the Media Framework
Library.
Open Source Libraries:
Surface Manager: composes windows on the screen
SGL: 2D graphics
OpenGL ES: 3D library
Media Framework: supports playback and recording of various audio, video and picture formats
FreeType: font rendering
WebKit: browser engine
libc: the system C library
SQLite: lightweight relational database engine (see the sketch after this list)
OpenSSL
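As an illustration of how application code reaches one of these native libraries, the Java framework class SQLiteOpenHelper wraps the bundled SQLite engine. The sketch below is Android SDK code, so it only compiles inside an Android project, and the database, table, and column names are hypothetical.

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

/** Minimal sketch: the Java framework API (SQLiteOpenHelper/SQLiteDatabase)
 *  delegates the real work to the native SQLite library in the layer below. */
public class NotesDbHelper extends SQLiteOpenHelper {

    public NotesDbHelper(Context context) {
        super(context, "notes.db", null, /* version */ 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Executed the first time the database is created.
        db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS notes");
        onCreate(db);
    }

    /** Inserts a row and returns how many notes are stored, as a quick round-trip check. */
    public long addNoteAndCount(String body) {
        SQLiteDatabase db = getWritableDatabase();
        ContentValues values = new ContentValues();
        values.put("body", body);
        db.insert("notes", null, values);

        try (Cursor c = db.rawQuery("SELECT COUNT(*) FROM notes", null)) {
            c.moveToFirst();
            return c.getLong(0);
        }
    }
}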
Located on the same level as the libraries layer, the Android runtime layer includes a set of core Java libraries
as well. Android application programmers build their apps using the
Java programming language. It also includes the Dalvik Virtual Machine.
What is Dalvik VM?
Dalvik is open-source software. Dan Bornstein, who named it after the fishing village of Dalvík in Eyjafjörður, Iceland, where some of his ancestors lived, originally wrote the Dalvik VM. It is the software responsible for running apps on Android devices.
It is a register-based virtual machine.
It is optimized for low memory requirements.
It has been designed to allow multiple VM instances to run at once.
It relies on the underlying OS for process isolation, memory management and threading support.
It operates on DEX files.
Application Framework Layer
Our applications directly interact with these blocks of the Android architecture. These programs manage the basic functions of the phone, such as resource management and voice call management.
Important blocks of the Application Framework (a short usage sketch follows this list):
Activity Manager: Manages the activity life cycle of applications (see our post on the Activity component to understand it in detail).
Content Providers: Manage data sharing between applications. Our post on the Content Provider component describes this in greater detail.
Telephony Manager: Manages all voice calls. We use the Telephony Manager if we want to access voice calls in our application.
Location Manager: Manages location, using GPS or cell towers.
Resource Manager: Manages the various types of resources we use in our application.
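As promised above, here is a minimal sketch of application code talking to two of these blocks: the Telephony Manager is obtained through getSystemService(), and a content provider (here, the Contacts provider) is queried through a ContentResolver. This is Android SDK code meant to run inside an app, and it assumes the relevant permissions (such as READ_CONTACTS) have been granted.

import android.app.Activity;
import android.database.Cursor;
import android.os.Bundle;
import android.provider.ContactsContract;
import android.telephony.TelephonyManager;
import android.util.Log;

/** Sketch of application code using the Application Framework layer directly:
 *  a system service (TelephonyManager) and a content provider (Contacts). */
public class FrameworkDemoActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Telephony Manager: obtained from the framework via getSystemService().
        TelephonyManager telephony =
                (TelephonyManager) getSystemService(TELEPHONY_SERVICE);
        Log.d("FrameworkDemo", "network operator: " + telephony.getNetworkOperatorName());

        // Content Provider: the Contacts provider is queried through a ContentResolver.
        // Requires the READ_CONTACTS permission.
        try (Cursor contacts = getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI, null, null, null, null)) {
            if (contacts != null) {
                Log.d("FrameworkDemo", "number of contacts: " + contacts.getCount());
            }
        }
    }
}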
Application Layer
The applications are at the topmost layer of the Android stack. An average user of the Android device would mostly interact with this layer (for basic functions, such as making phone calls or accessing the Web browser). The layers further down are accessed mostly by developers, programmers and the like. Several standard applications come installed with every device, such as:
SMS client app
Dialer
Web browser
Contact manager
We hope the basic Android architecture is clear to you now! If not, please feel free to ask our experts! Stay tuned for more advanced Android tutorials.
Happy Learning! (The following resources were used in creating this Android tutorial: developer.android.com.)
We come across this term quite a few times, though we may not clearly understand it. With its popularity, there are many myths attached to questions like “What is cloud computing?”, “What does it consist of?” and “Is it worth going for?”
To clear up any confusion about cloud computing, we have come up with this blog post to make the entire idea behind cloud computing clear to you! According to Wikipedia, “Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet).”
To make it simple for you, cloud computing is internet-based computing in which various services, such as storage, applications and servers, are delivered over the internet. With cloud computing, you can now store, access and process data and applications over the internet instead of on your system’s hard drive.
Now that we know what cloud computing is, we also need to know why it is called “cloud” computing. What relation does a “cloud” have with a technology that offers remote services? Actually, nothing! The name is derived from the cloud shape that is universally used to depict the internet in diagrams. Cloud computing has 3 service models and 4 deployment models, which are explained below.
1. Infrastructure-as-a-Service (IaaS):
Also known as Hardware as a Service (HaaS),
Infrastructure as a Service (IaaS) is a category of cloud computing in
which an organization outsources the equipment used to support
operations, including storage, servers hardware and networking
components. The arrangement is like this: the service provider is the owner of the equipment and is responsible for configuring, running and maintaining it. The client, on the other hand, pays on a per-use basis.
IaaS offers a standardized, dynamic, flexible and sometimes
virtualized environment for the end users. Characteristics of IaaS include:
Desktop virtualization
Internet availability
Use of billing model
Computerized administrative tasks
Utility computing service
Policy-based services
Active scaling
Some of the prominent industry names offering IaaS are Amazon Web Services and AT&T.
2. Platform-as-a-Service (PaaS):
Platform as a Service (PaaS) is another service model of cloud
computing that provides application execution services like application
runtime, storage, and integration. PaaS follows a resourceful and
responsive approach to operate scale-out applications and make these
applications profitable. In this model the provider provides the
servers, networks, storage and other services. On the other hand, the
consumer controls software deployment and configuration settings. Characteristics of PaaS include:
Facilitation of hosting capabilities
Designing and developing the application
Integrating web services and databases
Providing security, scalability and storage
Versioning the application and application instrumentation
Testing and deployment facilities
Some of the prominent industry names offering PaaS are Google App Engine and OpenStack.
3. Software-as-a-Service (SaaS):
As a cloud computing service model, Software as a Service (SaaS)
provides business processes and applications, including CRM, e-mails,
collaboration, and so on. SaaS helps optimize cost and delivery in exchange for minimal customization, and it represents a shift of
operational risks from the consumer to the provider. All infrastructure
and IT operational functions are abstracted away from the consumer. SaaS
is sometimes referred to as “on-demand software” and is usually priced
on a pay-per-use basis. SaaS providers price applications using a
subscription fee. Characteristics of SaaS include:
The application is hosted centrally.
Outsourcing hardware and software support to the cloud provider.
Enhancing the potential of an organization to reduce its IT operational costs.
No need to install new software to receive updates; in fact, updates are rolled out by the cloud provider itself, not by the customers.
Software testing takes place at a faster rate, as SaaS applications have only one configuration.
Easy recognition of areas that need improvement as the solution
provider has access to user behavior within the application itself.
Some of the prominent industry names offering SaaS are Salesforce and Microsoft Office 365. In short, the three service models differ in how much the provider manages for you: IaaS supplies the raw infrastructure, PaaS adds the application platform, and SaaS delivers the finished application.
4 primary Cloud Computing Deployment models:
The Private Cloud
The Public Cloud
The Hybrid Cloud
The Community Cloud
1. The Private Cloud
In the private cloud, hosting is built and maintained for a specific
client. The infrastructure required for hosting can either be
on-premises or at a third-party location. Though the private cloud is not a good option for optimizing cost, it is a boon for two reasons:
1. It is a great deployment model from a security point of view! When
organizations start using cloud computing, they face several challenges
including data security. The private cloud takes care of this through
secure-access VPN or by the physical location within the client’s
firewall system. Thus, this model is best suited for mission-critical
applications. Many organizations use a virtual private cloud offering such as Amazon’s.
2. Private cloud is implemented by organizations where
there is a strict requirement that data should obey the rules of various
regulatory standards such as HIPAA, SOX, or SAS 70. Such standards make
sure that the data is audited according to the protocols set. Thus,
Private cloud models are well suited in healthcare and pharmaceutical
industries.
2. The Public Cloud
As opposed to the Private cloud, in the Public cloud deployment
model, services and infrastructure are offered to several clients free
of charge or on the basis of a pay-per-user license policy. Even Google adopts the public cloud model. This is true cloud hosting, which provides cost benefits by reducing IT operational costs substantially. This model is widely used by organizations that need to host SaaS applications, handle load spikes, use interim infrastructure for development and testing, and serve applications used by many consumers without heavy infrastructure investment.
3. The Hybrid Cloud
But what if organizations want both data security and cost benefits? For that there is the Hybrid cloud deployment model! This deployment model enables organizations to secure their data and applications on a private cloud and cut down on IT operational costs by storing shared information on the public cloud. Another advantage of the hybrid cloud is that it comes to the rescue when the existing private cloud infrastructure cannot manage load spikes and requires back-up to support the load. Using the hybrid cloud, organizations can transfer workloads between public and private cloud hosting without any trouble to the consumers. Some examples of hybrid cloud are Force.com and Microsoft Azure.
4. The Community Cloud
This is another cloud deployment model, where the cloud infrastructure is shared by many organizations with the same policy and compliance considerations. Because this model is shared by a bigger group, it further decreases IT operational costs in contrast to a private cloud.
This cloud model is best suited for state-level government departments that need access to the same data and applications relating to the local population, roads, electrical stations and hospitals.
Now let’s look into some of the other technologies associated with cloud computing:
Big data and Cloud Computing:
Big data is an assortment of data so huge and complex that it becomes very tedious to capture, store, process, retrieve and analyze it with the help of on-hand database management tools or traditional data processing techniques. As big data is getting bigger day by day, a synchronization of big data and cloud computing is inevitable. In fact, it is a perfect match! The web is fast replacing desktop applications; thus, there arises a need for cloud computing to step up into the big data arena and provide unlimited resources when needed.
Hadoop and Cloud computing:
Hadoop is an open source software framework that supports data-intensive distributed applications and is considered a panacea for managing big data. Though Hadoop originally started out supporting large data-driven companies like Facebook and LinkedIn, nowadays it has become more enterprise-driven and can be used across many industries. Though Hadoop works best on Windows and Linux, it can also work on other operating systems like BSD and OS X. Thus, Hadoop and cloud computing are in great demand in several organizations. In no time, Hadoop will become one of the most sought-after applications for cloud computing. This is evident from the number of Hadoop clusters offered by cloud vendors in various businesses. Thus, Hadoop will reside in the cloud soon! This further leads to an acute need for a huge number of Hadoop professionals who can help big organizations manage big data.
Why Cloud Computing is a boon for professionals today?
Great news for all aspiring IT professionals! In a world where organizations are dealing with big data every moment, cloud computing is a boon for them! Thus, today organizations and businesses are ready to invest in cloud computing models because of their impressive results. Cloud computing is one of today’s hottest IT trends! In fact, all over the world there is a severe shortage of cloud computing professionals. This, in turn, means a great opportunity for those who have, or are acquiring, skill sets in cloud computing. For example, Oracle has a widespread set of cloud computing solutions. However, such intricate systems require highly skilled IT professionals to effectively develop, implement, administer and maintain them. Being an IT professional, do consider cloud computing! You could be a software engineer, a system engineer, or even a network administrator. There are numerous career opportunities in cloud computing!