GEC WRITHLON IS THE OFFICIAL BLOG OF GEETA ENGINEERING COLLEGE. THROUGH THIS BLOG, WE PROVIDE KNOWLEDGE ABOUT THE LATEST TECH-RELATED NEWS, MODERN ENGINEERING INVENTIONS, SCIENTIFIC GADGETS, AND SCIENTIFIC THEORIES.


Showing posts with label COMPUTER SCIENCE.


Sunday, April 05, 2020

IoT (Internet of Things): Our Future

Life with IoT: 
In just one year, we went from having 5 million IoT devices connected to the internet to billions. 



The future is happening now, and these devices are getting smarter every day through machine learning and artificial intelligence. IoT devices are becoming part of mainstream electronics culture, and people are adopting smart devices into their homes faster than ever. It is estimated that there will eventually be up to 21 billion devices connected to the internet. IoT devices will be a huge part of how we interact with everyday objects. There is big money in the IoT space currently, and it will only continue to grow as technology improves. The more data IoT devices collect, the smarter they will become. Cities will transform into smart cities through the use of IoT-connected devices. Think of smart traffic lights that collect data on traffic and use that data to synchronize the lights with peak traffic times.



Overall, this improves a city's efficiency and saves the government money, since everything can be managed remotely. Smart homes, thermostats, lighting systems, and coffee makers will all collect data on your habits and usage patterns. All this data will be collected to help facilitate machine learning.

How do we secure these devices?
With the billions of IoT devices connected to the open internet, how do we ensure these devices are secure?
Encryption Scheme: AES vs. TLS/SSL
Encryption can solve a complicated problem. When people think about encryption, many turn to TLS/SSL; however, these protocols do not cut it on their own for IoT. The reason they are not optimal is that they are point-to-point solutions rather than end-to-end solutions. When data has to pass through many different points along the chain, we have to account for different security protocols and devices. This calls for a scheme like AES (Advanced Encryption Standard) applied to the message itself, since it provides end-to-end security and keeps the message encrypted all the way through. Only devices holding the encryption keys can decrypt the data as it is sent and received.



AES also allows you to wrap the message body with AES while leaving the actionable data in TLS. Actionable data would be, for instance, the temperature reading you are trying to access. In addition, we should avoid leaving inbound ports open at all costs, since open ports can expose your IoT devices to vulnerabilities and DDoS attacks. Devices should only make outbound connections; that way the door is closed to anyone trying to reach applications and services behind those ports. The outbound connection can be kept open so the device can listen over a secure tunnel back from the network.
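As a rough sketch of the end-to-end payload encryption described above, the following Go program seals a small sensor reading with AES-256-GCM using only the standard library; the JSON payload and the way the key is generated here are illustrative assumptions (a real device would use a securely provisioned key):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// seal encrypts a message body with AES-256-GCM so that only holders of the
// shared key can read it, no matter how many network hops it crosses.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the receiver can use it when decrypting.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // illustrative: real devices use a provisioned key
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	ciphertext, err := seal(key, []byte(`{"temperature": 22.4}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("encrypted payload: %x\n", ciphertext)
}
```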
Rather than trying to fit all of the IoT protocols on top of existing architecture models like the OSI model, we have divided the protocols into the following layers to provide some level of organization (a small example of the Data Protocols layer follows the list):

  1. Infrastructure (ex: 6LowPAN, IPv4/IPv6, RPL)
  2. Identification (ex: EPC, uCode, IPv6, URIs)
  3. Comms / Transport (ex: Wifi, Bluetooth, LPWAN)
  4. Discovery (ex: Physical Web, mDNS, DNS-SD)
  5. Data Protocols (ex: MQTT, CoAP, AMQP, Websocket, Node)
  6. Device Management (ex: TR-069, OMA-DM)
  7. Semantic (ex: JSON-LD, Web Thing Model)
  8. Multi-layer Frameworks (ex: Alljoyn, IoTivity, Weave, Homekit)
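As a small illustration of the Data Protocols layer listed above, here is a minimal sketch of publishing a sensor reading over MQTT with the Eclipse Paho Go client; the broker address, client ID, and topic are placeholder assumptions:

```go
package main

import (
	"fmt"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Broker URL, client ID, and topic are placeholders for illustration.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883").
		SetClientID("gec-sensor-01")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}
	defer client.Disconnect(250)

	// Publish a temperature reading with QoS 1 (at-least-once delivery).
	token := client.Publish("campus/lab1/temperature", 1, false, `{"celsius": 22.4}`)
	token.Wait()
	if token.Error() != nil {
		panic(token.Error())
	}
	fmt.Println("reading published")
}
```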

Conclusion: Machine learning and IoT together can work wonders in today's technology, provided both are used with a good understanding.

AUTHOR:



Sunday, April 05, 2020

COVID-19 EFFECTS

COVID-19 is wreaking havoc on countries all over the world. It has caused a health crisis and shut down economies because of social distancing. 


It is also affecting the Earth's environment, as it has restricted the movement of human beings.
In just a few days, we have witnessed a big difference in air quality. Air pollution is already falling because factories, industries, and almost all forms of transport are at a halt.
According to reports, industrial areas in China and Italy are showing drops in nitrogen dioxide. The reduction in the number of vehicles on the roads is one of the main impacts of working from home and social distancing.
Industrial activity is reduced, but not all of it: power plants are still running to produce electricity, and water treatment plants are still needed to treat water.
In Venice, the pandemic has resulted in unusually clear water in the canals, probably because of the lack of boat traffic. Perhaps this virus is putting Mother Earth in time out for a while, while she cleans up the mess people have made.


In Thailand and Japan, troops of monkeys and other animals are roaming the streets because there are no tourists around.

All of the above effects are temporary, and the economic effect will depend on how countries respond to the crisis. As the world's population takes precautions against COVID-19, one of the main lessons is that the Earth bounces back. An interesting point about this moment is that everyone is under a lot of stress and in a state of panic, and reconnecting with the natural world may help people deal with their current mental condition.


Beyond air and water, wildlife may respond to the changes too; we have noticed how many birds are chirping outside. The hope is that when we get back to normal, whatever that may look like, we can hold the balance between work, family, and the environment. That is what will give us real peace. 


AUTHOR:

Dr. MAMTA HOODA




Saturday, April 04, 2020

DATABASE: SQL vs NoSQL


With the advent of huge amounts of data in the technological era, SQL and NoSQL databases are both widely used. SQL stands for Structured Query Language, which is used mainly for relational database management systems, while NoSQL refers to "Not Only SQL", a collection of non-relational data storage systems.
There are different RDBMSs, such as MS Access, Oracle, MySQL, Informix, SQL Server, and Sybase, which use SQL as their standard database query language. Today, SQL databases have become an unavoidable part of any organization's IT department. MySQL is a relational, SQL-based database for the web that is now used in very large-scale websites such as Facebook.

NoSQL databases relax one or more of the ACID properties. Some NoSQL databases are Cassandra, CouchDB, Hadoop & HBase, MongoDB, and StupidDB. The aim of this post is to compare SQL and NoSQL databases and figure out which is better in terms of performance and scalability.
It is a well-known fact that SQL databases have dominated the world of data technologies and have been the primary means of data storage for decades. SQL is generally used for accessing relational databases.
Eventually data started growing exponentially and scalability became a major issue; at that point NoSQL rolled in to save the day. NoSQL databases have existed since the 1960s, but they have recently gained popularity, especially for the scaling they provide.
The major difference between MongoDB and SQL databases is the way they handle data. In SQL databases, data is stored in a tabular, two-dimensional row-column structure, while MongoDB follows a document model, which allows storage of any type of data.


Following are the major differences between SQL and NoSQL databases:

| SQL Database | NoSQL Database (MongoDB) |
| --- | --- |
| Purely relational database | Non-relational database |
| Supports the SQL query language | Supports a JSON-based query language |
| Two-dimensional, table based | Collection and key-value pair based |
| Row and column based | Document and field based |
| Data storage requires a predefined structure | Stores data in a flexible manner |
| Supports triggers and foreign keys | Does not support triggers or foreign keys |
| Has a predefined schema | Has a dynamic schema |
| Not suited to hierarchical data storage | Well suited to hierarchical data storage |
| Vertically scalable (increase RAM) | Horizontally scalable (add more servers) |
| Follows the ACID properties (Atomicity, Consistency, Isolation, Durability) | Follows the CAP theorem (Consistency, Availability, Partition tolerance) |


MongoDB documents make it easy for developers to map the data used in the application to its associated document in the database. In an SQL database, by contrast, creating a table whose columns map to an object's attributes in a programming language is a little tedious.
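As a hedged sketch of that mapping, the following Go program inserts an application struct directly as a MongoDB document using the official mongo-go-driver; the connection string, database, collection, and the Student fields are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// Student maps directly to a MongoDB document; no table schema has to be
// declared in advance.
type Student struct {
	Name    string   `bson:"name"`
	Roll    int      `bson:"roll"`
	Courses []string `bson:"courses"` // nested data, no join table required
}

func main() {
	ctx := context.Background()
	// The connection string is a placeholder for a local MongoDB instance.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("college").Collection("students")
	res, err := coll.InsertOne(ctx, Student{Name: "Asha", Roll: 42, Courses: []string{"DBMS", "IoT"}})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("inserted document with id:", res.InsertedID)
}
```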
When it comes to choosing a database, one of the biggest decisions is selecting a relational (SQL) or non-relational (NoSQL) data structure. Both options are viable, but there are key differences between the two, and users should choose according to their requirements.



Friday, April 03, 2020

What is probability?

Probability is a basic need of communication in engineering: we discuss all the possibilities that can take place. Basically, it tells us the chance that any given outcome will occur.


In software, we make use of all the possible outcomes and their chances for and against.

When the sample space is very small, we normally use the ratio of favourable outcomes to total outcomes, but when the sample space is very large we cannot do this manually and depend instead on the binomial, Poisson, and normal distributions. When p (the probability of success) is intermediate in size and n (the number of trials) is large, we use the binomial distribution; when p is very small and n is very large, we use the Poisson distribution.
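For reference, these two distributions are (with $n$ trials, success probability $p$, and $\lambda = np$):

$$P(X = k) = \binom{n}{k}\, p^{k} (1-p)^{n-k} \quad \text{(binomial)}, \qquad P(X = k) = \frac{\lambda^{k} e^{-\lambda}}{k!} \quad \text{(Poisson)}.$$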
Probability is something most people understand intuitively, since words like "proportion", "likelihood", "chance", and "possibility" are used in everyday speech. The following are examples of probability statements that might be heard in a business situation:
1) There is a 30% chance of this job not being finished in time.
2) There is every likelihood that the business will make a great profit in the following year.

The concept of probability is very important; it has found extensive application in the development of every physical science. Sometimes a person reasons with it without explicitly discussing probability. Probability is a way of measuring the degree of uncertainty, and therefore of certainty, of the occurrence of events.

Here are some situations for all of you taking communication subjects in the ECE, CSE, and IT branches; you should all practise the following questions.



Daily life situation
1. Suppose that the reliability of an HIV test is specified as follows:
Of people having HIV, 90% of tests detect the disease but 10% go undetected. Of people free of HIV, 99% of tests are judged HIV −ve but 1% are diagnosed as showing HIV +ve. From a large population of which only 0.1% have HIV, one person is selected at random, given the HIV test, and the pathologist reports him/her as HIV +ve. What is the probability that the person actually has HIV?
2. A doctor from Panipat is to visit a patient. From past experience, it is known that the probabilities that he will come by train, bus, scooter or by other means of transport are, respectively,   . The probabilities that he will be late are  , if he comes by train, bus or scooter respectively, but if he comes by any other means of transport he will not be late. When he arrives, he is late. What is the probability that he came by train?



3. Shakuni mama is known to speak the truth 3 times out of 4. He throws a die and reports that it shows a six. Find the probability that it is actually a six.
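As a sketch of the reasoning these questions call for, question 3 is a direct application of Bayes' theorem. Writing $S$ for the event that the die actually shows a six and $R$ for the event that he reports a six (and assuming he lies with probability $\tfrac{1}{4}$ when it is not a six):

$$P(S \mid R) \;=\; \frac{P(R \mid S)\,P(S)}{P(R \mid S)\,P(S) + P(R \mid S^{c})\,P(S^{c})} \;=\; \frac{\tfrac{3}{4}\cdot\tfrac{1}{6}}{\tfrac{3}{4}\cdot\tfrac{1}{6} + \tfrac{1}{4}\cdot\tfrac{5}{6}} \;=\; \frac{3}{8}.$$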

4. A Lal Path Lab (Panipat) blood test is 99% effective in detecting a certain disease when it is, in fact, present. However, the test also yields a false positive result for 0.5% of the healthy persons tested (i.e., if a healthy person is tested, then with probability 0.005 the test will imply that he has the disease). If 0.1 percent of the population actually has the disease, what is the probability that a person has the disease given that his test result is positive?

5. Assume that the chance of a patient having corona is 40%. It is also assumed that a meditation and yoga course reduces the risk of corona by 30%, while prescription of a certain drug reduces its chance by 25%. A patient can choose any one of the two options with equal probability. It is given that, after going through one of the two options, a patient selected at random suffers from corona. Find the probability that the patient followed the course of meditation and yoga.


6. In answering a question on an MCQ test with 4 choices per question, a student of GEETA ENGG COLLEGE PANIPAT either knows the answer, guesses, or copies the answer. Let   be the probability that he knows the answer,   the probability that he guesses, and   the probability that he copies it. Assuming that a student who copies the answer will be correct with probability  , what is the probability that the student knows the answer, given that he answered it correctly?

IN ALL THE ABOVE CASES, WE DEPEND ONLY ON PROBABILITY.

Author



Monday, March 30, 2020

The New Programming Language: Go



Over the last decade, Google (now restructured under the parent company Alphabet, Inc.) has diversified into practically every tech sector there is, from mobile devices, mobile operating systems, and AI to robotics and the Internet of Things (IoT). 


As one of the world's biggest tech companies, it was only a matter of time before Google came up with its own programming language.
And so, in 2009, the Go programming language was born. Created by Robert Griesemer, Rob Pike, and Ken Thompson, Go (also referred to as Golang) is an open-source language that first began development in 2007.
Like many other Google projects, Go is open source, meaning that the programming language is open and freely available. This allows anyone to contribute to it by creating new proposals and offering bug fixes, making the language faster and better for all users.

What Makes Golang Unique?
Go's design takes inspiration from other languages such as C, Algol, Pascal, Oberon, and Smalltalk; Go springs primarily from the Oberon language. At the same time, its syntax is similar to C. Meanwhile, Go's object-oriented programming (OOP) is similar to Smalltalk's, except for the ability to attach methods to any type. Finally, Go's concurrency owes much to Newsqueak, another language developed by Golang co-creator Rob Pike.
While Go is heavily inspired by C, it also comes with additional features, such as:
• Garbage collection
• Native-style concurrency
• Quick compiler
• Pointers


These are just a couple of the native features that allow developers to avoid writing long stretches of code to handle memory leaks or networked applications. It is for this reason that Go is especially well suited to developing cloud-native applications and distributed networked services.
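To make the concurrency point concrete, here is a minimal, self-contained sketch (not taken from the post) of goroutines and channels, Go's built-in concurrency primitives:

```go
package main

import (
	"fmt"
	"sync"
)

// worker squares numbers from the jobs channel and sends them on results;
// goroutines and channels come with the language, no external library needed.
func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * j
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three concurrent workers
		wg.Add(1)
		go worker(jobs, results, &wg)
	}

	for i := 1; i <= 5; i++ {
		jobs <- i
	}
	close(jobs)

	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```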
The fact that Go is such a new language (relative to industry stalwarts like C, Python, and Java) is also one of its main strengths. Go was designed at a time when multicore processors, computer networks, and enormous codebases were already the norm. As such, Go is quick to learn. It is also a breeze to work with and easy to read, characteristics that have made it one of the world's top programming languages a decade later. In fact, according to the Index for October 2019, Go is the 17th most popular programming language. 

Advantages of Go
Easy to use and read. Go might not have the recognition of JavaScript or Python, but it is a top-20 programming language for a reason, thanks to an important characteristic it shares with them: Go is straightforward to use and understand.
Go's syntax is simple, with a forgiving learning curve that makes it more accessible to novice programmers. It also helps that there are not too many complex functions to learn. Aside from being friendlier to newcomers, Go's slick and clean syntax makes it well suited to legacy code that may need multiple programmers writing different versions of code on top of one another. And if you are already proficient in C# or C++, learning Go should be easier thanks to its striking similarities to C. 


Impressive standard library. Go users have access to a powerful standard library that comes packaged with the language, which saves the trouble of importing or learning complex secondary libraries. 
Go's standard library is sophisticated but not confusing, helping reduce the risk of issues from conflicting function names. For instance, slices are one of Go's best contributions to programming because they offer a more straightforward way of integrating data structures into code. Go consolidates what would otherwise be complicated workarounds in other languages into a single line of code.
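A tiny illustrative sketch of slices in action (the variable names are made up for the example):

```go
package main

import "fmt"

func main() {
	// A slice grows dynamically; append handles reallocation for you.
	readings := []float64{21.5, 22.0}
	readings = append(readings, 22.4, 23.1)

	// Slicing creates a lightweight view onto the same underlying array.
	recent := readings[2:]
	fmt.Println(readings, recent) // [21.5 22 22.4 23.1] [22.4 23.1]
}
```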
Strong security. More often than not, simpler code is safer than complicated code, and the same concept applies to Go. Because it is a statically typed language, Go users do not need to worry about hunting for hard-to-identify errors, challenges that are par for the course with more dynamic languages and their large number of variable types.


The inclusion of a garbage collector also helps prevent memory leaks. And while Go's lack of generics means programmers need to be more careful when running tests, its ease of identifying errors compared to other languages means Go lends itself to a more thorough approach to writing clean code.
The Google name. This might not seem like an advantage in itself, but Google's size and stature within the tech industry promise a secure future for Go. Sure, Google has been known to bet big on ambitious projects and platforms and then drop them (see Google Glass, Google Reader, Google+, and other products in the Google graveyard), but it does not appear that the company will abandon Go anytime soon.
It also helps that some of the world's biggest companies are using Go, with names like Uber, Twitch, Medium, Docker, the BBC, and Intel all using the language in their projects. If anything, all of this goes to show that Go will likely be a fixture in Google's architecture for years to come. It also means that now is as good a time as any to learn Golang. 

Author:





Saturday, March 28, 2020

Optimal Resource Allocation: Must for Quality Management


In strategic planning in industry, resource allocation is a plan for utilizing the available facilities and resources, to achieve the targeted/specified goals. 


It is the process of allocation of all available resources/facilities to the various sections of an Industry/organization. Industry management is responsible for the allocation of the facilities/resources to achieve industrial objectives. In an industry/organization, decision making is done by the management. To execute this decision making, the management of an industry/organization requires complete information about the available facilities, resources and their relative effectiveness for achieving the organizational targets and objectives. Plant resources are acquired, manipulated and allocated under the control of the head. In actual situations, very limited resources are available for system upgrading and improvement. 


Accurate information and well-set objectives need to be defined well in advance, to ensure that the right amount of resources and facilities is assigned to the right places at the right time. Once the forecast demand has materialized and been accepted, the available facilities and resources are allocated with certainty, under the existing plant or industry conditions that seem most realistic. The working of a thermal power plant is analyzed to allocate the available resources and facilities optimally, so as to achieve long-run availability without compromising on quality. Hence, there is a great need to optimize the resource allocation process. The problem of resource and facility allocation arises when the available facilities are not sufficient to satisfy market demands during industrial operation. Mostly, plant managers face challenges such as limited money, manpower, machines, or other facilities. Thus, to make power plant operation effective, the allocation of the required resources and facilities to the plant is essential. 


The resource management system plays a crucial role in matching the resources demanded with the resources allocated, along with their optimum utilization. Many techniques are available for allocating resources and facilities optimally, but they have limitations, such as the complexity involved in applying them to real situations. Various software packages can also be used to save computation time on optimal resource allocation problems, but familiarity with software and calculating expertise do not substitute for creativity and diligence in resource planning and allocation. These optimal resource allocation techniques are very useful for evaluating large data sets, but they do not always validate the data. Effort should be put into allocating the budget associated with manpower and maintenance in a thermal power plant. This is a complex problem and has to be formulated as a DPP (Dynamic Programming Problem). 


Therefore, a Resource Allocation Model (RAM) must be developed to resolve this issue. A RAM is a recursive mathematical relationship with which the various resources can be allocated to the thermal power plant optimally. In developing the Resource Allocation Model, the complete process of a thermal power plant is considered as consisting of n stages (systems) under the constraints of maintenance cost and manpower cost. The problem must be treated as a multistage decision problem.
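By way of illustration only, the following Go sketch shows the kind of multistage dynamic-programming recursion such a Resource Allocation Model rests on; the stages and the gain table are hypothetical numbers, not figures from any actual plant:

```go
package main

import "fmt"

// allocate distributes `budget` discrete units of resource across stages to
// maximise total return, using the classic dynamic-programming recursion
//   f(stage, b) = max over x <= b of [ gain[stage][x] + f(stage+1, b-x) ].
// The gain table is hypothetical illustrative data.
func allocate(gain [][]int, budget int) int {
	nStages := len(gain)
	// best[b] holds the optimal return for the remaining stages given b units.
	best := make([]int, budget+1)
	for stage := nStages - 1; stage >= 0; stage-- {
		next := make([]int, budget+1)
		for b := 0; b <= budget; b++ {
			for x := 0; x <= b && x < len(gain[stage]); x++ {
				if v := gain[stage][x] + best[b-x]; v > next[b] {
					next[b] = v
				}
			}
		}
		best = next
	}
	return best[budget]
}

func main() {
	// gain[s][x]: return from giving x units of budget to stage s.
	gain := [][]int{
		{0, 5, 8, 9},  // stage 1 (e.g. boiler maintenance)
		{0, 4, 7, 11}, // stage 2 (e.g. turbine maintenance)
		{0, 3, 6, 8},  // stage 3 (e.g. manpower training)
	}
	fmt.Println("optimal total return:", allocate(gain, 5))
}
```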

Author:





Sunday, March 22, 2020

FOG COMPUTING


Fog computing is an emerging technology used for the Internet of Things. Fog computing brings data and services from the network core out to the network edge. 


Similar to the cloud, the fog provides data, compute, storage, and application services to end users. Fog computing is a distributed computing model that takes centrally located data storage, processing, and applications and pushes them to network edge devices (such as set-top boxes and access points). Fog computing is hosted locally, where the user uses the service. Put simply, fog computing is a model in which IoT data is processed and stored locally in smart devices instead of being sent to the cloud. 


Both cloud and fog structures provide computing, storage, and networking resources.

In fog computing, data collected by sensors is not sent to cloud servers; instead it is sent to edge devices such as set-top boxes, routers, and access points for processing, reducing the traffic carried over low-bandwidth links. Fog computing improves quality of service and also reduces latency. Small computing tasks are processed locally and the responses are sent back to end users without involving the cloud.


So, fog computing is emerging as a better option than cloud computing for smaller computing tasks. Fog computing plays a crucial role in reducing the traffic of data to the cloud. Since the fog system is placed near the data sources, computation and communication are not delayed.

Cisco gives the example of a jet engine: when connected to the internet, half an hour of running time creates 10 TB of data. 


This volume of data alone creates significant traffic on the network, which cannot be neglected; hence the importance of fog computing. Fog computing is complementary to the cloud. Certain features differentiate fog from the cloud: fog computing is used for real-time interactions, but it cannot replace cloud computing, which is preferred for high-end batch processing. As the names suggest, the cloud system is placed at a distance, whereas the fog system is placed locally, near the end user.



EDGE COMPUTING
Edge computing allows data produced by Internet of Things (IoT) devices to be processed closer to where it is created rather than being sent across long routes to data centers or clouds. Doing this computing closer to the edge of the network lets organizations analyze important data in near real time, a requirement for organizations across many industries, including manufacturing, health care, telecommunications, and finance.



Edge computing is typically mentioned in IoT use cases, where edge devices would otherwise collect data, sometimes massive amounts of it, and send it all to a data center or cloud for processing. Edge computing triages the data locally so that some of it is processed at the edge, reducing the backhaul traffic to the central repository.
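A toy sketch of that local triage, with the anomaly threshold and the readings chosen purely for illustration:

```go
package main

import "fmt"

// triage processes readings at the edge and returns only the ones that need
// central attention, cutting backhaul traffic to the cloud.
// The 30.0 threshold is an illustrative assumption.
func triage(readings []float64) []float64 {
	var forward []float64
	for _, r := range readings {
		if r > 30.0 { // anomaly: forward upstream
			forward = append(forward, r)
		}
		// normal readings are aggregated or discarded locally
	}
	return forward
}

func main() {
	readings := []float64{21.4, 22.0, 35.2, 23.1, 31.7}
	fmt.Println("forwarded to cloud:", triage(readings)) // [35.2 31.7]
}
```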


Typically, this is done by the IoT devices transferring the data to a local device that packs compute, storage, and network connectivity into a small form factor. Data is processed at the edge, and all or some of it is sent to the central processing or storage repository in a corporate data center, co-location facility, or IaaS cloud.

AUTHOR:

DEEPIKA ARORA