Web Browsing Basics for Beginners
Web browser:
A web browser (commonly referred to as a browser) is a software application for accessing
information on the World Wide Web. When a user requests a particular website, the web
browser retrieves the necessary content from a web server and then displays the resulting web
page on the user's device.
A web browser is not the same thing as a search engine, though the two are often
confused.[1][2] For a user, a search engine is just a website, such as Google Search, Bing, or
DuckDuckGo, that stores searchable data about other websites. However, to connect to a
website's server and display its web pages, a user must have a web browser installed.[3]
Web browsers are used on a range of devices, including desktops, laptops, tablets, and
smartphones. In 2019, an estimated 4.3 billion people used a browser.[4] The most used
browser is Google Chrome, with a 64% global market share on all devices, followed by
Safari with a 17% share.
Web Browser
A web browser, or simply "browser," is an application used to access and view websites.
Common web browsers include Microsoft Internet Explorer, Google Chrome, Mozilla
Firefox, and Apple Safari.
The primary function of a web browser is to render HTML, the code used to design or "mark
up" webpages. Each time a browser loads a web page, it processes the HTML, which may
include text, links, and references to images and other items, such as cascading style sheets
and JavaScript functions. The browser processes these items, then renders them in the
browser window.
Early web browsers, such as Mosaic and Netscape Navigator, were simple applications that
rendered HTML, processed form input, and supported bookmarks. As websites have evolved,
so have web browser requirements. Today's browsers are far more advanced, supporting
multiple types of HTML (such as XHTML and HTML5), dynamic JavaScript, and
encryption used by secure websites.
The capabilities of modern web browsers allow web developers to create highly interactive
websites. For example, Ajax enables a browser to dynamically update information on a
webpage without the need to reload the page. Advances in CSS allow browsers to display
responsive website layouts and a wide array of visual effects. Cookies allow browsers to
remember your settings for specific websites.
While web browser technology has come a long way since Netscape, browser compatibility
issues remain a problem. Since browsers use different rendering engines, websites may not
appear the same across multiple browsers. In some cases, a website may work fine in one
browser, but not function properly in another. Therefore, it is smart to install multiple
browsers on your computer so you can use an alternate browser if necessary.
Web Browser
A web browser is an interface that helps a computer user gain access to all the content that is
on the Internet and the hard disk of the computer. It can view images, text documents, audio
and video files, games, etc. More than one web browser can also be installed on a single
computer. The user can navigate through files, folders and websites with the help of a
browser. When the browser is used for browsing web pages, the pages may contain links
which can be opened in a new browser window. Multiple tabs and windows of the same browser
can also be opened. There are four leading web browsers: Internet Explorer, Firefox,
Netscape, and Safari, but there are many other browsers available.
● Netscape
Netscape is one of the original Web browsers. This is what Microsoft designed Internet
Explorer to compete against. Netscape and IE comprise the major portion of the browser
market. Netscape was introduced in 1994.
● Internet Explorer
Internet Explorer (IE) is a product from software giant Microsoft. For years it was the most
commonly used browser in the world. It was introduced in 1995 along with the launch of
Windows 95, and it surpassed Netscape in popularity in 1998.
● Safari
Safari is a web browser developed by Apple Inc. and included with Mac OS X. It was first
released as a public beta in January 2003. Safari has very good support for the latest
technologies, such as XHTML and CSS2.
● Firefox
Firefox is a browser derived from the Mozilla code base. It was released in 2004 and has
grown to be the second most popular browser on the Internet.
● Opera
Opera is smaller and faster than most other browsers, yet it is full-featured: fast,
user-friendly, with a keyboard interface, multiple windows, zoom functions, and more. Java
and non-Java-enabled versions are available. It is well suited to newcomers to the Internet,
schoolchildren, and users with disabilities, and as a front end for CD-ROMs and kiosks.
● Google Chrome
This web browser was developed by Google. Its beta and commercial versions were released
in September 2008 for Microsoft Windows. It soon became the fourth-most widely used
web browser, with a market share of 1.23% at the time. Browser versions for Mac OS X were
then under development. The browser options are very similar to those of Safari, the settings
locations are similar to Internet Explorer 7, and the window design is based on Windows Vista.
❖ bookmarks
How to use the Bookmarks bar
If you're using Chrome on a computer, you can have your bookmark appear in a bar at the top
of every webpage. You can also add, remove, or reorder items in the bookmarks bar at any
time.
Show or hide the bookmarks bar
To turn the bookmarks bar on or off, follow these steps:
1. Open Chrome.
2. At the top right, click More, then choose Bookmarks > Show bookmarks bar.
Troubleshoot
Here are a few common questions about the bookmarks bar.
By default, the bookmarks bar shows the Apps icon, a shortcut that leads to the apps
you've installed in Chrome. You can remove it by right-clicking the bookmarks bar and
unchecking "Show apps shortcut".
❖ cookies
What cookies are:
Cookies are files created by websites you visit. They make your online experience easier by
saving browsing information. With cookies, sites can keep you signed in, remember your site
preferences, and give you locally relevant content.
There are two types of cookies:
● First-party cookies are created by the site you visit. The site is shown in the address bar.
● Third-party cookies are created by other sites. These sites own some of the content, like ads
or images, that you see on the webpage you visit.
Many websites use small strings of text known as cookies to store persistent client-side state
between connections. Cookies are passed from server to client and back again in the HTTP
headers of requests and responses. Cookies can be used by a server to indicate session IDs,
shopping cart contents, login credentials, user preferences, and more.
Attributes of a cookie:
● Name = value pair: This is the actual information stored within the cookie.
Neither the name nor the value should contain white space or any of the following
characters: [ ] ( ) = , ” / ? @ : ;
Example of a valid cookie name-value pair: sessionid=abc123
● Path: When requesting a document in the subtree from the same server, the client
echoes that cookie back. However, it does not use the cookie in other directories on
the site.
● Expires: This attribute gives a date; the browser should remove the cookie from its
cache after that date has passed.
● Max-Age: This attribute sets the cookie to expire after a certain number of seconds
have passed instead of at a specific moment. For instance, Max-Age=3600 makes a
cookie expire one hour (3,600 seconds) after it is first set.
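These attributes can be demonstrated with Python's standard http.cookies module (a sketch; the cookie name and values are invented for illustration, and the attribute names mirror the Set-Cookie header syntax):

```python
from http.cookies import SimpleCookie

# Build a cookie carrying the attributes described above.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123"         # the name=value pair (no spaces or special characters)
cookie["sessionid"]["path"] = "/shop"  # echoed back only for requests under /shop
cookie["sessionid"]["max-age"] = 3600  # expire one hour (3,600 seconds) after being set
cookie["sessionid"]["secure"] = True   # send only over a secure (HTTPS) connection

# OutputString() renders the value a server would place in a Set-Cookie response header.
print(cookie["sessionid"].OutputString())
```

The printed header carries the name=value pair first, followed by the Path, Max-Age, and Secure attributes.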
Methods (of the Java servlet Cookie class):
1. setDomain(): Sets the domain in which this cookie is visible. Domains are
explained in detail in the attributes section above.
Syntax: public void setDomain(String pattern)
Parameters:
pattern: string representing the domain in which this cookie is visible.
2. setMaxAge(): Specifies the time (in seconds) that elapses before this cookie expires.
Syntax: public void setMaxAge(int expiry)
Parameters:
expiry: time in seconds before this cookie expires.
3. getMaxAge(): Returns the maximum age of this cookie.
4. setPath(): Specifies a path on the server to which the client should return the
cookie.
Syntax: public void setPath(String path)
Parameters:
path: path where this cookie is returned.
5. setSecure(): Indicates whether a secure protocol should be used while sending this
cookie. The default value is false.
Syntax: public void setSecure(boolean secure)
Parameters:
secure: if true, the cookie can only be sent over a secure protocol such as HTTPS;
if false, it can be sent over any protocol.
6. getSecure(): Returns true if this cookie must be sent over a secure protocol,
otherwise false.
7. getVersion(): Returns 0 if the cookie complies with the original Netscape
specification; 1 if the cookie complies with RFC 2965/2109.
8. setVersion(): Sets the version of the cookie protocol this cookie uses.
Syntax: public void setVersion(int v)
Parameters:
v: 0 for the original Netscape specification; 1 for RFC 2965/2109.
9. clone(): Returns a copy of this cookie.
A simple Java servlet can store a cookie in the browser when the user first makes a
request, and then, for further requests, display the cookies already stored.
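The logic of such a first-visit cookie exchange can be sketched in Python (the function and header values below are illustrative, not part of any servlet API):

```python
def handle_request(request_headers):
    """Return response headers: set a cookie on the first visit,
    otherwise report the cookies the browser sent back."""
    cookie_header = request_headers.get("Cookie")
    if cookie_header is None:
        # First visit: ask the browser to store a cookie.
        return {"Set-Cookie": "visited=yes",
                "X-Message": "Welcome, first-time visitor"}
    # Later visits: the browser echoes the stored cookie back to us.
    return {"X-Message": "Cookies received: " + cookie_header}

first = handle_request({})                         # no Cookie header yet
second = handle_request({"Cookie": "visited=yes"}) # browser echoes it back
print(first["Set-Cookie"])
print(second["X-Message"])
```

Note how all state lives in the browser: the server only learns that a visitor has been seen before because the cookie comes back in the request headers.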
Method and explanation (JavaFX ProgressIndicator):
isIndeterminate(): Gets the value of the property indeterminate.
getProgress(): Gets the value of the property progress.
setProgress(double v): Sets the value of the property progress.
Program to create a progress indicator: Such a program creates a progress indicator,
here named pb. The progress indicator is created inside a scene, which in turn is
hosted inside a stage. The setTitle() function provides a title for the stage. Then a tile
pane is created, on which getChildren().add() is called to attach the progress indicator and
the button inside the scene. Finally, the show() method is called to display the final result.
❖ Customize browser
5 Ways to Customize Your Browser
You’re probably familiar with browser extensions, but there are many other ways to
customize your browser and tweak websites. The web isn’t a one-way, passive medium – you
have the ability to remix websites you view on the fly, adding features or changing their look.
Each of these methods has its own advantages and drawbacks. Bookmarklets are ideal for
small buttons you click occasionally, while user scripts and user styles are easy ways to
modify the websites you view, adding features or changing their look every time they load.
Extensions
Extensions, also known as add-ons, are the most powerful things you can install in your
browser. Extensions can do almost anything, from adding new features to your browser’s
interface to modifying every webpage that loads.
As extensions are basically additional programs that run inside your web browser, they take
up additional system resources. Using many extensions can slow down your browser.
Extensions and add-ons are easy to find. We’ve got lists of the best Firefox add-ons and
best Chrome extensions, or you can just browse the Firefox add-ons or Chrome extensions
sites.
Bookmarklets
Bookmarklets are small bits of JavaScript (the code that runs on webpages) that are stored
as a bookmark. When you click the bookmark, the JavaScript code in the bookmarklet runs
on the current webpage. Erez recently extolled their advantages – they don’t always run in the
background, so replacing extensions with bookmarklets frees up system resources.
Some examples of bookmarklets include Share buttons, which share the current page on
social networking sites when clicked, or a password revealer, which runs JavaScript on the
page to reveal a password that appears as ***** characters.
Bookmarklets can’t replace all extensions, though. Bookmarklets only run when you click
them, so they can’t automatically do something to every webpage you load. They also can’t
add user interface elements, such as toolbar buttons, to your browser.
To find bookmarklets, check out the Marklets search engine, which we’ve covered in the
directory.
User Scripts
If you’ve heard of the popular Greasemonkey extension, you’ve heard of user scripts. Think
of user scripts as bookmarklets that always run when certain webpages load – a user script
can run on every webpage or only on specific websites. User scripts straddle the line between
bookmarklets and extensions – they’re just JavaScript code that runs on the current page, but
they run automatically.
To use user scripts in Firefox, you’ll need Greasemonkey installed. Chrome users can install
user scripts as if they were extensions – Chrome converts the user script into an extension
when you install it. You can also try the Tampermonkey extension for Chrome, which is a
Greasemonkey-style user script manager that adds additional features scripts may require.
Check out UserScripts.org to browse for and install user scripts. You can also try the
Greasefire extension for Firefox, which shows you user scripts that work with the websites
you visit.
We’ve covered lots of things you can do with Greasemonkey in the past.
User Styles
User styles are like themes for websites. A user style – usually associated with the Stylish
browser extension – is like a user script, but it contains CSS style sheet code instead of
JavaScript code. User styles can add additional CSS rules to a page, changing that page’s
design – for example, you can install a user style that replaces the new Gmail look with the
old Gmail look. Unlike user scripts, user styles are focused on customizing the look or layout
of a page.
To use user styles, you’ll need Stylish for Firefox or Stylish for Chrome. After installing the
extension, check out UserStyles.org to download user styles.
Because of the way Firefox works, user styles can actually customize and tweak parts of
Firefox’s interface, too.
Check out our guide to Stylish for information on creating your own user styles.
Themes
Themes are an obvious way to customize your browser, but we can’t leave them out. They
don’t add new features or modify webpages, but they do put a new look on your browser’s
interface. Major browsers like Chrome and Firefox both support themes, which you can find
on the Chrome themes and Firefox themes websites.
❖ Browser Tricks to Help You Use the Internet Like a Pro
Internet browsers are something that most of us use every single day; they provide a portal to
all the useful, weird, and wonderful stuff that the web has to offer.
Given the frequency with which we interact with them, it might be tempting to think you’re a
browser-using master. In truth, very few of us are. There are always more tips, more tricks,
and more ways that you can improve your skills.
Here we take a look at some cool browser tricks that’ll help you use the Internet like a pro.
1. Restore a Tab:
We’ve all closed a tab by accident. It’s annoying, especially if you’d gone down an
Internet-sized rabbit hole and weren’t sure what site you were even on. In the past, you’d
have to navigate to your browser’s history and reload it from there, though most browsers at
least now offer a “Recently Closed Tabs” list.
Did you know there is an even faster way? Just press Ctrl + Shift + T and the tab will
magically reappear. You can use the shortcut multiple times to open a succession of your
closed tabs.
2. Clear the Cache:
“Cache” can refer to many things in computing, but in Internet terms it applies to the
temporary storage of web pages and images; it helps to reduce bandwidth usage, server load,
and lag. Sites can be loaded from the cache as long as certain conditions are met.
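Those "certain conditions" typically come from response headers such as Cache-Control: max-age; a simplified freshness check (ignoring validators like ETag, which real browsers also consult) might look like:

```python
def is_fresh(stored_at, max_age, now):
    """A cached copy may be reused as long as its age
    (seconds since it was stored) is within max-age."""
    return (now - stored_at) <= max_age

# A page cached with Cache-Control: max-age=300 (five minutes):
print(is_fresh(stored_at=1000, max_age=300, now=1200))  # within five minutes: reuse cache
print(is_fresh(stored_at=1000, max_age=300, now=1400))  # too old: refetch from the server
```

When the check fails, the browser goes back to the server for a new copy rather than serving the stale one.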
Sometimes, a cache can get corrupted. If this is the case, you can easily delete the cache and
reload the proper version of the page by hitting Ctrl + Shift + R.
If a page has disappeared from the web entirely, try pasting the link into archive.org. It
often has old cached versions of sites available.
On Chrome and Firefox you can also use Ctrl + 1-8 to jump straight to a tab, with the
number used corresponding to the order of tabs on the top of your screen.
7. Reddit Slideshow:
Reddit is a hugely popular online bulletin board, with subreddits dedicated to almost every
topic imaginable.
If you’re browsing a picture-based subreddit such as Earth Porn, add a p after the word
reddit in the URL, turning the address from this: www.reddit.com/r/EarthPorn into this:
www.redditp.com/r/EarthPorn.
All the pictures in the subreddit will be shown in slideshow format. It even has settings that
allow you to tweak the speed at which the pictures change.
❖ Next Generation Web
Among the possible component technologies of the Next Generation Web, there are five
major ones: Ubiquitous Web, Mobile Web, Web 2.0 (Lim, Park, 2009; Jeon, Lee, 2006,
2007), Web Platform (TTA, 2009), and Web Accessibility.
Ubiquitous Web technologies make it possible for different types of devices, including
desktops, office automation devices, home appliances, mobile phones, and ubiquitous
devices such as sensors and effectors, to communicate with each other seamlessly via the
Web. Mobile Web technologies make it possible for diverse types of mobile devices,
including cell phones and Personal Digital Assistants (PDAs), to exchange Uniform
Resource Identifier (URI) based resources via the Hypertext Transfer Protocol (HTTP) and
to use markup languages such as the Extensible Markup Language (XML). Web 2.0
technologies make it possible to use a more distributed and open Web as a platform by
enhancing the capabilities of existing Web applications and service environments. Web as a
Platform technologies make it possible for users to link and execute local or remote
applications, services, and data by using the various currently available standardized Web
technologies. Web Accessibility technologies, including cursor-based browsing, adaptive
zoom, and Accessible Rich Internet Applications (ARIA) markup support, make Web
content accessible primarily to disabled users, but also to all user agents, including highly
limited devices such as mobile phones.
A series of Web content access guidelines were published by the W3C as the document
WCAG (Web Content Accessibility Guidelines). The Web is also migrating toward the
Social Web, where people socialize via the WWW. People are brought together by
people-oriented websites such as Facebook and MySpace, or by common-hobby-oriented
websites such as Flickr and Kodak Gallery. There are many Web-based collaboration tools
available in the market. Although there are paid or subscription services such as Basecamp
and Zimbra, there are many alternatives providing free and similar, if not better, features,
including MS Live docs, Google Docs, and bubbl.us. The functions of those tools are very
diverse, ranging from basic brainstorming or whiteboarding to fully featured project
management applications.
The following standards development organizations (SDOs) have been working on the
emerging technologies of the Next Generation Web.
❖ Search Engines
Introduction:
A search engine refers to a huge database of internet resources such as web pages,
newsgroups, programs, images, etc. It helps to locate information on the World Wide Web.
The user can search for any information by passing a query in the form of keywords or a
phrase. The engine then searches for relevant information in its database and returns it to
the user.
A search engine generally has three components:
1. Web crawler
2. Database
3. Search interfaces
Web crawler
Also known as a spider or bot, this is a software component that traverses the web to gather
information.
Database
All the information gathered from the web is stored in a database, which consists of huge
web resources.
Search Interfaces
This component is an interface between the user and the database. It helps the user search
through the database.
Search Engine Working: The web crawler, database, and search interface are the major
components of a search engine; together they are what actually makes a search engine work.
Search engines make use of the Boolean operators AND, OR, and NOT to restrict and widen
the results of a search. The following steps are performed by the search engine:
● The search engine looks for the keyword in the index of its predefined database instead
of going directly to the web to search for the keyword.
● It then uses software to search for the information in the database. This software
component is known as web crawler.
● Once the web crawler finds the pages, the search engine then shows the relevant web
pages as a result. These retrieved web pages generally include the title of the page, the
size of the text portion, the first several sentences, etc.
These search criteria may vary from one search engine to the other. The retrieved information is
ranked according to various factors such as frequency of keywords, relevancy of information, links
etc.
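The index lookup and the Boolean AND, OR, and NOT operators described above can be sketched as a toy example (the index contents here are invented for illustration; real indexes hold billions of entries):

```python
# A tiny predefined index: keyword -> set of page ids containing it.
index = {
    "web":     {1, 2, 3},
    "browser": {1, 3},
    "cookie":  {2},
}

def search(term):
    """Look the keyword up in the index instead of scanning the web."""
    return index.get(term, set())

# AND narrows the results, OR widens them, NOT excludes pages.
and_result = search("web") & search("browser")    # pages with both terms
or_result  = search("browser") | search("cookie") # pages with either term
not_result = search("web") - search("cookie")     # "web" but not "cookie"

print(sorted(and_result), sorted(or_result), sorted(not_result))
```

Because the lookup happens in the prebuilt index, answering a query never requires visiting the live web.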
Architecture
The search engine architecture comprises the three basic layers listed below:
● Content collection and refinement.
● Search core
● User and application interfaces
Why Search Engines are Important:
Search engines are part of daily life for two types of people: those who search, and the site
owners who want to be found.
Users do billions of searches on Google alone to find relevant information. This opens up a
huge opportunity for businesses and online content publishers to attract people to their
websites for free. Search engines follow guidelines and have their own algorithms to decide
the ranking of websites in search results. Optimizing a website for Google and other search
engines is an essential task for any website owner who wants to reach a large audience. The
visitors can generate revenue for site owners either through advertisements displayed on the
site or through purchasing products.
Let us discuss all types of search engines in detail in the following sections.
1. Crawler Based Search Engines
All crawler-based search engines use a crawler, bot, or spider for crawling and indexing new
content into the search database. There are four basic steps every crawler-based search
engine follows before displaying any sites in the search results.
● Crawling
● Indexing
● Calculating Relevancy
● Retrieving the Result
1.1. Crawling
Search engines crawl the whole web to fetch the web pages available. A piece of software
called a crawler, bot, or spider performs the crawling of the entire web. The crawling
frequency depends on the search engine, and there may be a few days between crawls. This
is the reason you can sometimes see old or deleted page content showing in the search
results. The search results will show the new, updated content once the search engines crawl
your site again.
1.2. Indexing
Indexing is next step after crawling which is a process of identifying the words and
expressions that best describe the page. The identified words are referred as keywords and the
page is assigned to the identified keywords. Sometimes when the crawler does not understand
the meaning of your page, your site may rank lower on the search results. Here you need to
optimize your pages for search engine crawlers to make sure the content is easily
understandable. Once the crawlers pickup correct keywords your page will be assigned to
those keywords and rank high on search results.
1.3. Calculating Relevancy
The search engine compares the search string in the search request with the indexed pages
in its database. Since it is likely that more than one page contains the search string, the
search engine starts calculating the relevancy of each of the pages in its index to the search
string.
There are various algorithms to calculate relevancy. Each of these algorithms has different
relative weights for common factors like keyword density, links, or meta tags. That is why
different search engines give different search results pages for the same search string. It is a
known fact that all major search engines periodically change their algorithms. If you want to
keep your site at the top, you need to adapt your pages to the latest changes. This is one
reason to devote ongoing effort to SEO if you want to stay at the top.
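A toy version of such a relevancy calculation, weighting pages purely by how often the search string's keywords occur (real engines combine many more factors, such as links and meta tags):

```python
def relevancy(page_text, query):
    """Score a page by the total frequency of the query's keywords in it."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

# Two invented pages, scored and ranked against one search string.
pages = {
    "A": "the web browser renders the web page",
    "B": "search engines crawl the web",
}
query = "web browser"
ranked = sorted(pages, key=lambda p: relevancy(pages[p], query), reverse=True)
print(ranked)
```

Changing the weights (for example, counting a keyword in a title as worth more) would reorder the results, which is exactly why different engines return different pages for the same query.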
1.4. Retrieving Results
The last step in a search engine’s activity is retrieving the results. Basically, this is simply
displaying them in the browser in some order. Search engines sort the endless pages of
search results from the most relevant to the least relevant sites.
Examples of Crawler Based Search Engines
Most of the popular search engines are crawler-based search engines and use the above
technology to display search results. Examples of crawler-based search engines:
● Google
● Bing
● Yahoo!
● Baidu
● Yandex
Besides these popular search engines there are many other crawler based search engines
available like DuckDuckGo, AOL and Ask.
2. Human Powered Directories
Human-powered directories, also referred to as open directory systems, depend on
human-based activities for listings. Below is how indexing in human-powered directories
works:
● The site owner submits a short description of the site to the directory, along with the
category in which it is to be listed.
● The submitted site is then manually reviewed and either added in the appropriate category
or rejected for listing.
● Keywords entered in a search box will be matched with the descriptions of the sites. This
means that changes made to the content of a web page are not taken into consideration, as
it is only the description that matters.
● A good site with good content is more likely to be reviewed for free compared to a site
with poor content.
Yahoo! Directory and DMOZ were perfect examples of human-powered directories.
Unfortunately, automated search engines like Google wiped those human-powered
directory-style search engines off the web.
3. Hybrid Search Engines
Hybrid search engines use both crawler-based and manual indexing for listing sites in
search results. Most crawler-based search engines like Google basically use a crawler as the
primary mechanism and human-powered directories as a secondary mechanism. For
example, Google may take the description of a webpage from a human-powered directory
and show it in the search results. As human-powered directories are disappearing, hybrid
types are becoming more and more crawler-based.
Still, manual filtering of search results happens, to remove copied and spammy sites. When
a site is identified for spammy activities, the website owner needs to take corrective action
and resubmit the site to the search engines. Experts do a manual review of the resubmitted
site before including it again in the search results. In this manner, though the crawlers
control the process, the monitoring is manual, to keep the search results natural.
4. Other Types of Search Engines
Besides the above three major types, search engines can be classified into many other
categories depending upon the usage. Below are some of the examples:
● Search engines have different types of bots for exclusively displaying images, videos,
news, products, and local listings. For example, the Google News page can be used to
search only news from different newspapers.
● Some search engines, like Dogpile, collect meta information about pages from other
search engines and directories to display in the search results. This type of search engine
is called a metasearch engine.
● Semantic search engines like Swoogle provide accurate search results in a specific area
by understanding the contextual meaning of the search queries.
❖ HTTP(HYPERTEXT TRANSFER
PROTOCOL):
The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed,
collaborative, hypermedia information systems. It has been the foundation of data
communication for the World Wide Web since 1990. HTTP is a generic and stateless
protocol which can be used for other purposes as well, by extending its request methods,
error codes, and headers.
Basically, HTTP is a TCP/IP-based communication protocol that is used to deliver data
(HTML files, image files, query results, etc.) on the World Wide Web. The default port is
TCP 80, but other ports can be used as well. It provides a standardized way for computers to
communicate with each other. The HTTP specification defines how a client's requests are
constructed and sent to the server, and how the server responds to these requests.
Basic Features:
There are three basic features that make HTTP a simple but powerful protocol:
● HTTP is connectionless: The HTTP client, i.e., a browser, initiates an HTTP request,
and after the request is made, the client waits for the response. The server processes the
request and sends a response back, after which the client disconnects the connection. So
the client and server know about each other only during the current request and response.
Further requests are made on a new connection, as if the client and server were new to
each other.
● HTTP is media independent: It means, any type of data can be sent by HTTP as
long as both the client and the server know how to handle the data content. It is
required for the client as well as the server to specify the content type using
appropriate MIME-type.
● HTTP is stateless: As mentioned above, HTTP is connectionless, and this is a direct
result of HTTP being a stateless protocol. The server and client are aware of each other
only during a current request. Afterwards, both of them forget about each other. Due to
this nature of the protocol, neither the client nor the server can retain information
between different requests across web pages.
HTTP/1.0 uses a new connection for each request/response exchange, whereas an HTTP/1.1
connection may be used for one or more request/response exchanges.
Basic Architecture:
The following diagram shows a very basic architecture of a web application and depicts
where HTTP sits:
Client
The HTTP client sends a request to the server in the form of a request method, URI, and
protocol version, followed by a MIME-like message containing request modifiers, client
information, and possible body content over a TCP/IP connection.
Server
The HTTP server responds with a status line, including the message's protocol version and a
success or error code, followed by a MIME-like message containing server information,
entity meta information, and possible entity-body content.
The set of common methods for HTTP/1.1 is defined below, and this set can be expanded
based on requirements. These method names are case-sensitive and must be used in
uppercase.
HTTP Methods:
1. GET: Used to retrieve information from the given server using a given URI. Requests
using GET should only retrieve data and should have no other effect on the data.
2. HEAD: Same as GET, but transfers the status line and header section only.
3. POST: Used to send data to the server, for example, customer information, file
uploads, etc., using HTML forms.
4. PUT: Replaces all current representations of the target resource with the uploaded
content.
5. DELETE: Removes all current representations of the target resource given by a URI.
6. CONNECT: Establishes a tunnel to the server identified by a given URI.
7. OPTIONS: Describes the communication options for the target resource.
8. TRACE: Performs a message loop-back test along the path to the target resource.
GET Method
A GET request retrieves data from a web server by specifying parameters in the URL portion
of the request. This is the main method used for document retrieval. The following example
makes use of the GET method to fetch hello.htm:
GET /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
The server response against the above GET request will be as follows:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
ETag: "34aa387-d-1568eb00"
Vary: Authorization,Accept
Accept-Ranges: bytes
Content-Length: 88
Content-Type: text/html
Connection: Closed
<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>
HEAD Method
The HEAD method is functionally similar to GET, except that the server replies with a
status line and headers, but no entity-body. The following example makes use of the HEAD
method to fetch header information about hello.htm:
HEAD /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
The server response against the above HEAD request will be as follows:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
ETag: "34aa387-d-1568eb00"
Vary: Authorization,Accept
Accept-Ranges: bytes
Content-Length: 88
Content-Type: text/html
Connection: Closed
Notice that the server does not send any data after the header section.
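The "same headers, no body" behavior can be sketched against a throwaway local server (the handler, page body, and port choice below are illustrative assumptions for the demo):

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

BODY = b"<html><body><h1>Hello, World!</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def send_head(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()

    def do_GET(self):
        self.send_head()
        self.wfile.write(BODY)

    def do_HEAD(self):  # same headers as GET, but no entity-body
        self.send_head()

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/hello.htm")
resp = conn.getresponse()
data = resp.read()  # empty: HEAD responses carry headers only
conn.close()
server.shutdown()
print(resp.status, resp.getheader("Content-Length"), len(data))
```

The Content-Length header still reports the size the body would have had, even though no body bytes are sent.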
POST Method
The POST method is used when you want to send some data to the server, for example, a file
upload, form data, etc. The following example makes use of the POST method to send form
data to the server, which will be processed by process.cgi; finally, a response will be
returned:
POST /cgi-bin/process.cgi HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Content-Type: text/xml; charset=utf-8
Content-Length: 88
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
<?xml version="1.0" encoding="utf-8"?>
<string xmlns="http://clearforest.com/">string</string>
The server side script process.cgi processes the passed data and sends the following
response:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
ETag: "34aa387-d-1568eb00"
Vary: Authorization,Accept
Accept-Ranges: bytes
Content-Length: 88
Content-Type: text/html
Connection: Closed
<html>
<body>
<h1>Request Processed Successfully</h1>
</body>
</html>
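A POST exchange like the one above can be sketched with a throwaway local server that reads the request's entity-body (the handler, path, and reply text are illustrative assumptions; a real process.cgi would be a server-side script):

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = self.rfile.read(length)  # the request entity-body
        reply = b"<html><body><h1>Request Processed Successfully</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

body = '<?xml version="1.0" encoding="utf-8"?><string>string</string>'
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# http.client computes Content-Length for us from the body
conn.request("POST", "/cgi-bin/process.cgi", body=body,
             headers={"Content-Type": "text/xml; charset=utf-8"})
resp = conn.getresponse()
reply = resp.read()
conn.close()
server.shutdown()
print(resp.status)
```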
PUT Method:
The PUT method is used to request the server to store the included entity-body at a location
specified by the given URL. The following example requests the server to save the given
entity-body in hello.htm at the root of the server:
PUT /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Connection: Keep-Alive
Content-type: text/html
Content-Length: 182
<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>
The server will store the given entity-body in hello.htm file and will send the following
response back to the client:
HTTP/1.1 201 Created
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Content-type: text/html
Content-length: 30
Connection: Closed
<html>
<body>
<h1>The file was created.</h1>
</body>
</html>
DELETE Method
The DELETE method is used to request the server to delete a file at a location specified by
the given URL. The following example requests the server to delete the given file hello.htm
at the root of the server:
DELETE /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Connection: Keep-Alive
The server will delete the mentioned file hello.htm and will send the following response
back to the client:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Content-type: text/html
Content-length: 30
Connection: Closed
<html>
<body>
<h1>URL deleted.</h1>
</body>
</html>
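The PUT and DELETE exchanges above can be sketched together against a throwaway local server that keeps its "files" in an in-memory dictionary (the handler and the store are illustrative assumptions; a real web server would persist actual files):

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

store = {}  # in-memory stand-in for files on the server

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        store[self.path] = self.rfile.read(length)  # save the entity-body
        self.send_response(201)  # 201 Created
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_DELETE(self):
        existed = store.pop(self.path, None) is not None
        self.send_response(200 if existed else 404)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PUT", "/hello.htm",
             body="<html><body><h1>Hello, World!</h1></body></html>")
r1 = conn.getresponse(); r1.read()   # expect 201 Created
conn.request("DELETE", "/hello.htm")
r2 = conn.getresponse(); r2.read()   # expect 200 OK
conn.request("DELETE", "/hello.htm") # already gone
r3 = conn.getresponse(); r3.read()   # expect 404 Not Found
conn.close()
server.shutdown()
print(r1.status, r2.status, r3.status)
```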
CONNECT Method
The CONNECT method is used by the client to establish a network connection to a web
server over HTTP. The following example requests a connection with a web server running
on the host tutorialspoint.com:
CONNECT www.tutorialspoint.com HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
The connection is established with the server and the following response is sent back to
the client:
HTTP/1.1 200 Connection established
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
OPTIONS Method
The OPTIONS method is used by the client to find out the HTTP methods and other options
supported by a web server. The client can specify a URL for the OPTIONS method, or an
asterisk (*) to refer to the entire server. The following example requests a list of methods
supported by a web server running on tutorialspoint.com:
OPTIONS * HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
The server will send information based on its current configuration,
for example:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Type: httpd/unix-directory
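An OPTIONS exchange can be sketched against a throwaway local server that advertises its supported methods in the Allow header (the handler and the method list below are illustrative assumptions):

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        self.send_response(200)
        # Advertise which methods this server supports
        self.send_header("Allow", "GET,HEAD,POST,OPTIONS,TRACE")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "*")  # asterisk refers to the server as a whole
resp = conn.getresponse()
allow = resp.getheader("Allow")
conn.close()
server.shutdown()
print(resp.status, allow)
```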
TRACE Method
The TRACE method is used to echo the contents of an HTTP request back to the requester,
which can be useful for debugging during development. The following example
shows the usage of the TRACE method:
TRACE / HTTP/1.1
Host: www.tutorialspoint.com
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
The server will send the following message in response to the above request:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Connection: close
Content-Type: message/http
Content-Length: 39
TRACE / HTTP/1.1
Host: www.tutorialspoint.com
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
HTTP HEADER:
HTTP header fields provide required information about the request or response, or about the
object sent in the message body. There are four types of HTTP message headers:
● General-header: These header fields have general applicability for both request and
response messages.
● Client Request-header: These header fields have applicability only for request
messages.
● Server Response-header: These header fields have applicability only for response
messages.
● Entity-header: These header fields define meta information about the entity-body or,
if no body is present, about the resource identified by the request.
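As a sketch, a raw header block like the ones in the responses above can be parsed with Python's standard email.parser (HTTP headers share the same field syntax). The category mapping below is illustrative and covers only a few well-known fields:

```python
from email.parser import Parser

# A header block copied from the sample responses above
raw_headers = """\
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
Content-Length: 88
Content-Type: text/html
Connection: Keep-Alive
"""

# Illustrative mapping of a few well-known fields to the four categories
CATEGORY = {
    "Date": "general", "Connection": "general",
    "Server": "response", "Accept-Ranges": "response",
    "Content-Length": "entity", "Content-Type": "entity",
    "Last-Modified": "entity",
}

headers = Parser().parsestr(raw_headers)
for name, value in headers.items():
    print(f"{CATEGORY.get(name, 'other'):>8}  {name}: {value}")
```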
❖ URL:
Uniform Resource Locator (URL):
Parts of a URL:
Using the URL https://whatis.techtarget.com/search/query?q=URL as an example,
components of a URL can include:
● The protocol or scheme. Used to access a resource on the internet. Protocols include http,
https, ftps, mailto, and file. The resource is reached through the Domain Name System
(DNS). In this example, the protocol is https.
● Host name or domain name. The unique reference that represents a webpage. For this
example, whatis.techtarget.com.
● Port number. Usually not visible in URLs, but necessary. It always follows a colon; port 80
is the default port for web servers, but there are other options. For example, :80.
● Path. A path refers to a file or location on the web server. For this example, /search/query.
● Query. Found in the URL of dynamic pages. The query consists of a question mark followed
by parameters. For this example, ?q=URL.
● Parameters. Pieces of information in a query string of a URL. Multiple parameters can be
separated by ampersands (&). For this example, q=URL.
● Fragment. This is an internal page reference, which refers to a section within the webpage. It
appears at the end of a URL and begins with a hashtag (#). Although not in the example
above, an example could be #history in the URL
https://en.wikipedia.org/wiki/Internet#History.
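The components above can be extracted with Python's standard urllib.parse, applied to the example URL:

```python
from urllib.parse import urlparse, parse_qs

url = "https://whatis.techtarget.com/search/query?q=URL"
parts = urlparse(url)

print(parts.scheme)           # the protocol/scheme: 'https'
print(parts.hostname)         # the host name: 'whatis.techtarget.com'
print(parts.port)             # None: the default port (443 for https) is implied
print(parts.path)             # the path: '/search/query'
print(parts.query)            # the query string: 'q=URL'
print(parse_qs(parts.query))  # parameters as a dict: {'q': ['URL']}
```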
Internet tools:
Online Chatting:
Chat is text-based communication that is live, or in real time. For example, when talking to
someone in chat, any typed text is received by the other participants immediately. In contrast,
other text-based communications, such as e-mail, are modes of correspondence that are not
real-time.
There are also several million users chatting through other networks such as IRC. A good
example of a chat on IRC is the Computer Hope chat.
Online chatting is text-based communication between two or more people over a
network. The text messages are delivered in real time, and people get immediate
responses.
Talkomatic was the world's first online chat system. It was developed by Doug Brown and David R.
Woolley in 1973.
Chat Etiquette:
Chat etiquette defines rules that are supposed to be followed while chatting online:
● Avoid chat slang.
● Try to spell all words correctly.
● Don't write whole words in capital letters.
● Don't send other chat users private messages without asking them.
● Abide by the rules created by those running the chat.
● Use emoticons to let the other person know your feelings and expressions.
Nimbuzz: A native iPhone app. It supports voice and video chats, file sharing, and group chats with panache.
eBuddy: eBuddy IM helps you keep all your buddies from multiple IM accounts in one single list.
Imo.in: It has the capability to link all your IM accounts together. You can log on to all of your IM accounts by just logging into imo.in.
MeBeam: It offers video-based chat between clients to create video conferencing rooms for up to 16 people.
Yahoo! Messenger: It offers PC-to-PC, PC-to-phone, phone-to-PC, file transfer, webcam hosting, text messaging service, etc.
3. IRC Commands:
The following commands are used while connected to an IRC server. Most of the commands
below will work with most IRC clients and servers.
Command Description
/away (message) Leaves a message letting others know why you are gone.
/clearall Clears all the text from all of the opened windows.
/dcc chat (username) Opens a chat window with the username that you specify.
/ping (username) Pings a specified user and tells you how far away they are, in seconds.
/whowas (username) Shows information about a specified user who was on earlier.
Video Conferencing
4. Video conferencing, or video teleconferencing, is a method of
communicating by two-way video and audio transmission with the help of telecommunication
technologies.
Modes of Video Conferencing
Point-to-Point
This mode of conferencing connects two locations only.
Multi-point
This mode of conferencing connects more than two locations through Multi-point Control
Unit (MCU).
Video Sharing
Video sharing is an IP Multimedia Subsystem (IMS) service that allows users to switch a voice
call to a unidirectional video streaming session. The video streaming session can be initiated
by either of the parties. Moreover, the video source can be a camera or a pre-recorded video
clip.
Usenet newsgroup:
Usenet (USEr NETwork):
Usenet is a frequently updated collection of user-submitted notes or messages on a variety of
subjects that are posted to servers on a worldwide network. Each of these collections of
posted notes is called a newsgroup. There are hundreds of thousands of newsgroups, and it is
possible for any user to create a new one. Most of these newsgroups are hosted on servers
that are connected to the Internet, but they can also be hosted on servers that are not. Usenet
continues to be an unrestricted, worldwide forum for debate and information exchange.
Usenet's original protocol in the early 1980s was UNIX-to-UNIX Copy (UUCP), but today
its protocol is the Network News Transfer Protocol (NNTP).
Like mailing lists, Usenet is also a way of sharing information. It was started by Tom Truscott
and Jim Ellis in 1979. Initially it was limited to two sites, but today there are thousands of
Usenet sites involving millions of people.
Usenet is a kind of discussion group where people can share views on topics of their interest.
An article posted to a newsgroup becomes available to all readers of the newsgroup.
What is a Newsgroup?
A newsgroup is an active online discussion forum that is easily accessible through Usenet.
Each newsgroup on the server contains discussions about some specific topic, which is often
indicated in the name or title of the newsgroup. Users who are looking for a particular
newsgroup can browse and follow them. Users can also post or reply to the topics they are
interested in, using newsreader software. Access to these newsgroups also requires a Usenet
subscription; most Usenet providers charge a monthly subscription of about $10 USD.
Newsgroup Classification:
There exist a number of newsgroups distributed all around the world. These are identified
using a hierarchical naming system in which each newsgroup is assigned a unique name that
consists of alphabetic strings separated by periods.
The leftmost portion of the name represents the top-level category of the newsgroup followed
by subtopic. The subtopic can further be subdivided and subdivided even further (if needed).
For example, the newsgroup comp.lang.c++ contains discussion of the C++ language. The
leftmost part, comp, classifies the newsgroup as one that contains discussion of computer-related
topics. The second part identifies the subtopic, lang, which relates to computer
languages. The third part identifies one of the computer languages, in this case C++.
● Reading Articles
If a user wants to read articles, the user has to connect to the news server using a newsreader.
The newsreader will then display a list of newsgroups available on the news server, where the
user can subscribe to any of the newsgroups. After subscription, the newsreader will
automatically download articles from the newsgroup.
After reading an article, the user can either post a reply to the newsgroup or reply to the
sender by email. The newsreader saves information about the subscribed newsgroups and the
articles read by the user in each group.
● Posting an Article
In order to send a new article to a newsgroup, the user first needs to compose an article and
specify the names of the newsgroups to which he/she wants to send it. An article can be sent
to one or more newsgroups at a time, provided all the newsgroups are on the same news server.
It is also possible to cancel the article that you have posted but if someone has downloaded an
article before cancellation then that person will be able to read the article.
● Replying an Article
After reading an article, the user can either post a reply to the newsgroup or reply to the
sender by email. There are two options available: Reply and Reply Group. Using Reply, the
reply mail will be sent to the author of the article, while Reply Group will send a reply to the
whole newsgroup.
● Cancelling an Article
To cancel an article after it is sent, select the message and click Message > Cancel Message.
This will cancel the message from the news server. But if someone has downloaded the
article before cancellation, that person will still be able to read it.
● Usenet netiquette
While posting an article on a newsgroup, one should follow some rules of netiquette as listed
below:
● Spend some time understanding a newsgroup when you join it for the first time.
● Articles posted by you should be easy to read, concise, and grammatically correct.
● Information should be relevant to the article title.
● Don't post the same article to multiple newsgroups.
1. UsenetServer is one of the most popular Usenet providers, similar to Newshosting, offering
users fast speeds and a long retention period. However, the number of newsgroups is
comparatively low, at around 80,000. It is preferred by the majority of intermediate users.
2. Newshosting is without doubt the most efficient of all Usenet providers. It offers great value
for money with fast speeds, reliable service, a free newsgroup browser, as well as a large
number of newsgroups (100,000). It also offers classic NNTP access and a modern web-based
newsreader option. Users can also avail themselves of a free trial account of 30 GB for the
first 14 days.
3. Eweka is a Usenet provider which offers Usenet access in both block and flat-rate payment
options. It's affordable and a wise choice for the tech-savvy. The company is based in the
Netherlands, and it's an affordable Usenet service that puts quality as its prime goal.
4. EasyNews is considered the best web-based newsreader for Usenet and one that is ideal for
beginners. It has a web-based interface that enables thumbnail viewing for people who are
interested in searching image and video binaries. This newsreader has a large user base and
offers the best customer service.
5. Fast Usenet offers excellent retention rates, a free trial, a mobile-friendly newsreader, and a
web newsreader as part of its core package. Fast Usenet also comes with a free copy of the
GrabIt newsreader, which offers built-in global search and normally costs $2.50 a month,
included with your membership.
6. Giganews is another great Usenet provider but is comparatively the most expensive, and the
slickest, of all providers. It offers good customer service and comes with a free bundled
newsgroup browser known as Mimo. It also has the largest number of newsgroups (110,000)
and offers a free trial of 10 GB to new users.
Available Technology
There are different types of technologies available for maintaining the best
security standards. Some popular technical solutions for testing, building,
and preventing threats include:
Likelihood of Threat
Your website or web application's security depends on the level of
protection tools that have been equipped and tested on it. A few major
threats represent the most common ways in which a website or web
application gets hacked. Some of the top vulnerabilities for all web-based
services include:
● SQL injection
● Password breach
● Cross-site scripting
● Data breach
● Remote file inclusion
● Code injection
Preventing these common threats is the key to making sure that your
web-based service is practicing the best methods of security.
Web security tools are easy to install, and they also help businesses make
their websites safe and secure. A web application firewall prevents
automated attacks, which usually target small or lesser-known websites.
These attacks are carried out by malicious bots or malware that automatically
scan for vulnerabilities they can exploit, or cause DDoS attacks that slow
down or crash your website.
Thus, Web security is extremely important, especially for websites or web
applications that deal with confidential, private, or protected information.
Security methods are evolving to match the different types of vulnerabilities
that come into existence.
This can include both personally identifiable information (PII) and
non-personally identifiable information, such as your behavior on a website.
Without Internet privacy, all your activities are subject to being collected
and analyzed by interested parties!
Cookie profiling and other techniques are used to track your overall
activities online and create a detailed profile of your browsing habits.
Some people may not mind having relevant ads being served up to them,
but for others, this is a serious invasion of privacy.
2. Surveillance
Some governments spy on their citizens online to supposedly assist law
enforcement agencies. Take, for instance, the UK’s Investigatory Powers
Act that authorizes mass surveillance and allows the government to legally
monitor the Internet usage of its citizens.
Internet service providers (ISPs), telcos, and other communication service
providers are required to retain customers' Internet connection records for
a year; these records can be obtained by government authorities and used in
investigations, even if you're not connected to one in any way!
3. Theft
A staggering 17 million Americans were affected by identity theft in
2017, according to Javelin Strategy. Cybercriminals use malware, spyware,
and phishing techniques to break into your online accounts or device and
steal your personal information to engage in activities like identity theft.
The victims, of course, end up losing most or all of their hard-earned
money, just because they didn’t exercise caution when it comes to
opening attachments, instant messages, or emails from unknown sources.
Your browser is the main program you use to go online, so make sure that
you take the necessary steps to secure it. After all, cybercriminals can take
advantage of loopholes in browsers to access the personal data on your
device. To protect your online privacy and security, we’d recommend that
you follow the recommendations in our ultimate browser security guide.
Using a VPN is the best way to protect your Internet privacy. Not only
does it change your IP address and assign you a new one based on the
VPN server you’re connected to, but it also protects your incoming and
outgoing traffic with military-grade encryption.
As a result, your online activities and personal information stay secure and
private from snoopers. PureVPN is regarded as the best VPN when it
comes to online privacy and security, and for all the right reasons.
If you leave vulnerabilities in your software, chances are that the bad guys
will exploit them! Keep your operating system, browser, as well as other
software (like Adobe Flash and Java) up to date to ensure that you don’t
miss out on new features and security fixes. If you find it a hassle to
manually apply updates, you can always use tools to automate your
software updates.
4. Install an Anti-virus Program & Activate Firewall
You can keep yourself safe from harmful content on the Internet with a few
simple precautions. A strong anti-virus program will keep your device free
from all types of malware, such as spyware, viruses, Trojans, etc. You
should also activate your firewall to keep unwanted network traffic at bay.
The good news is that most operating systems come with it built-in.
You should delete cookies regularly as they’re used by websites,
advertisers, and other third parties to track you online. While you can clear
your cookies manually, you’re better off configuring your browser to
automatically delete them at the end of the browsing session. If you don’t
know how to, follow our guide to deleting browsing cookies
automatically at browser exit.
6. Adjust Your Settings on Google, Facebook, etc.
Take advantage of the options that are available to you. Big Internet
companies such as Facebook and Google usually give you options to
opt out of some, if not all, of their personalization and tracking. For
example, you can manage your ad preferences on Facebook, while Google
allows you to turn off ads personalization in your account settings.
HTTPS, which uses Secure Sockets Layer (SSL) or its successor, Transport
Layer Security (TLS), encrypts your online communication with a website.
If you are on any website, especially a shopping website, you should ensure
that you have an HTTPS connection.
For the utmost online privacy and security, you should resort to a VPN
service.
To defuse this threat, it’s advised that you use state-of-the-art AES 256-bit
encryption that will secure your internet connection, meaning you can
download and upload sensitive information without worrying about
anyone tapping in on your private data.
1. England
2. Singapore
3. Russia
4. Malaysia
5. China
Site blocking
Site blocking is a process by which a firewall or WWW proxy prevents users
from accessing certain network resources, such as World Wide Web sites
or FTP servers.
Note
Although blocking a website on the router is one of the best ways to prevent
access to websites, a child could still access the website through other
means. They could connect to a neighbor's network if it's unprotected or use
their cell phone's data service (e.g., 4G) to access the website. If you see any
open networks in your neighborhood, try to educate your neighbors about the
security vulnerabilities of leaving a network open.
On Windows, the hosts file is found in one of the following folders, depending
on the Windows version:
C:\WINDOWS\system32\drivers\etc\
C:\WINNT\system32\drivers\etc\
Entries such as the following point the blocked site at the local machine
(127.0.0.1), so the browser can never reach it:
127.0.0.1 localhost
127.0.0.1 badsite.com
127.0.0.1 www.badsite.com
Desktop
1. Visit the Block site extension page on the
Chrome web store.
2. Click the Add to Chrome button at the top-right
of the page.
Android mobile
To block sites on your
Android tablet or smartphone, follow the
steps below.