Wednesday, 26 August 2015

Virtual Tenant Network


By Harpreet Singh Dhillon

Virtual Tenant Network (VTN) is an application that provides multi-tenant virtual networks on top of an existing network infrastructure. Traditional network infrastructure is configured as a silo for each department, which requires a huge CAPEX and OPEX investment: every department in an organization runs its own proprietary hardware, which is not shared with others.

VTN provides a logical abstraction plane that completely separates the logical network from the underlying physical infrastructure. It partitions the physical infrastructure into multiple logical networks and maps each of them onto the physical network. This not only reduces complexity, but also enables better resource management and improves infrastructure efficiency. NEC Corporation of America is a contributor to the VTN project in OpenDaylight.

Architectural Overview

The VTN application is divided into the following two components:
  • VTN Manager: The controller plugin that interacts with other networking components to build the network for the end user. 
  • VTN Coordinator: An external application that provides a REST API through which the end user builds the VTN. The VTN Coordinator passes requests to the VTN Manager, which serves them and builds the desired network (see the request sketch below). 
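As a rough illustration of how the Coordinator's REST API is used, the sketch below creates a VTN with Python's requests library. The coordinator address, port 8083, default credentials and the /vtn-webapi/vtns.json resource follow typical OpenDaylight VTN documentation examples, but exact paths and payloads can vary between releases, so treat this as a hedged sketch rather than a definitive client.

```python
# Hedged sketch: asking the VTN Coordinator to create a new virtual tenant
# network over its REST API. Host, credentials and resource path are taken
# from typical OpenDaylight VTN examples and may differ in your release.
import requests

COORDINATOR = "http://192.168.0.10:8083"   # hypothetical Coordinator address
AUTH = ("admin", "adminpass")              # default credentials in the docs


def create_vtn(vtn_name):
    """Create a VTN; the Coordinator forwards the request to the VTN Manager."""
    resp = requests.post(
        f"{COORDINATOR}/vtn-webapi/vtns.json",
        json={"vtn": {"vtn_name": vtn_name}},
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp


# Example: create_vtn("tenant1") provisions a logical network for one department.
```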


Fig. - Architecture of VTN

Challenges in Current Infrastructure
  • Lack of flexibility and agility: The current network infrastructure does not provide any flexibility across network devices. Because of this limitation, network devices do not support multi-tenancy and the network appliances run in silos, one for each department. 
  • Complexity: The network is becoming more complex due to the large and growing variety of proprietary hardware appliances supplied by different network vendors. 
  • Manually intensive management: Provisioning and configuration of network appliances are complex, manually intensive and time-consuming tasks. 
  • Huge CAPEX & OPEX investment: Each department requires its own proprietary hardware appliances, which means a large investment in purchasing new hardware and in the resources needed to manage the infrastructure.

Benefits of VTN
  • Reduced CAPEX: VTN adds multi-tenancy to the network infrastructure, so the same physical infrastructure can be shared by multiple departments. This removes the need for separate hardware per department and clearly lowers CAPEX. 
  • Reduced OPEX: VTN centralizes management of the network infrastructure through Software Defined Networking and eliminates manual effort, enabling automation in the network. This reduces manpower costs and saves significant OPEX for an organization. 
  • Flexibility: VTN enables easy, rapid and dynamic provisioning of new services in various locations. 
  • API support: VTN exposes a REST API, which lets the network integrate with the infrastructure orchestration layer and automates the entire network provisioning and configuration management. 
Conclusion

Present-day network technology is evolving rapidly. With the growth of cloud infrastructure, organizations face pressure to cut their OPEX and CAPEX. VTN is well suited to this challenge: it not only reduces infrastructure costs, but also removes complexity from the existing infrastructure.

The Next Generation File System – ReFS

By Chetan Kumar

Resilient File System (ReFS) is the new file system for the next generation of Windows operating systems. It was introduced with Windows Server 2012 and is designed to overcome the shortcomings of NTFS (New Technology File System), the most widely used Windows file system to date. Future versions of Microsoft operating systems and applications (Exchange, SQL Server, etc.) add support for ReFS.

Why Use ReFS?

The large customer base that relies on Windows for running business applications wants a cost-effective, reliable platform that provides data availability and data integrity. As the amount of data that businesses handle keeps increasing, the file system needs to scale efficiently across different workloads. ReFS improves on NTFS in many ways; the most important advantage of the new file system is “resiliency”. ReFS is built by reusing code from the NTFS engine, maintaining a high degree of compatibility with the NTFS features that are carried forward.

Key Features
  • Integrity: ReFS detects disk corruption by keeping checksums of metadata (and, optionally, of file data). 
  • Availability: ReFS gives priority to the availability of data. If corruption occurs, the repair process focuses only on the corrupt area and does not require the volume to be taken offline; all repairs are performed online.
  • Scalability: As data volumes have grown enormously, ReFS is designed to work efficiently with very large data sets without compromising performance.
  • Proactive Error Correction: ReFS runs an integrity scanner periodically in the background and initiates the repair of corrupt data before it is read (a conceptual sketch follows this list).
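The integrity and scrubbing behaviour described above can be pictured with a small, purely conceptual sketch: each block of data is stored together with a checksum, and a background scan recomputes the checksums to detect corruption and, where a good copy exists, repair it online. This illustrates the idea only; it is not ReFS's actual on-disk format or repair logic.

```python
# Conceptual sketch of checksum-based detection and online repair, in the
# spirit of ReFS integrity checksums and its background scrubber.
# This is an illustration of the idea only, not the actual ReFS design.
import zlib

def write_block(store, key, data, mirror=None):
    """Store data with its checksum; optionally keep a mirrored copy."""
    record = {"data": data, "crc": zlib.crc32(data)}
    store[key] = record
    if mirror is not None:
        mirror[key] = dict(record)

def scrub(store, mirror):
    """Background scan: detect corrupt blocks and repair them from the mirror."""
    for key, record in store.items():
        if zlib.crc32(record["data"]) != record["crc"]:
            good = mirror.get(key)
            if good and zlib.crc32(good["data"]) == good["crc"]:
                store[key] = dict(good)   # online repair from the good copy
                print(f"{key}: corruption detected, repaired from mirror")
            else:
                print(f"{key}: corruption detected, no good copy available")

primary, mirror = {}, {}
write_block(primary, "block-1", b"business data", mirror)
primary["block-1"]["data"] = b"busin3ss data"   # simulate a bit flip on disk
scrub(primary, mirror)                          # detects and repairs block-1
```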
Deployment Use Case

ReFS helps customers store data reliably even when the underlying hardware and software stack is not fully reliable. This minimizes storage costs and reduces capital expenditure for businesses. Customers can deploy a Windows-based file server attached to inexpensive storage such as JBOD (Just a Bunch Of Disks), and deployments can further include failover clustering over a shared JBOD storage configuration.

Conclusion

ReFS is the new file system that is expected to replace NTFS in upcoming releases of Windows operating systems and application software. It brings major improvements in data integrity and reliability, ensuring that corrupt data is detected and repaired easily. ReFS could therefore become a widely used file system that reduces the total cost of ownership of Windows servers.

Global Insurance Industry – A Paradigm Shift

By Ashish Mishra

Call it a paradigm shift! In a world where being digital is no longer new, it no longer thrills our industries either. Be it digital health insurance, digital health support for insurers or telematics, nothing seems new or exciting anymore. We are in the ‘post-digital’ era.

Then, where exactly is the paradigm shift? And, how different is it or would it be from the current techno-crunch?

I would share a different view: a shift where the real motive behind digital health is to make health services easier. It can also mean Usage-Based Insurance or Pay-As-You-Drive, of which telematics already gives us some examples.

For this discussion, an insurance plan with a high degree of customization is an ideal example of segmentation: an individual's plan would focus only on Mr. X, or on a very small group of people, although this has not been considered a feasible solution so far.

However, the still largely unexplored field of data and analytics can really help this industry reach a far deeper level of market penetration. We have started to see the proliferation of big data and of various analytical techniques for extracting different meanings from it.

On the same point, better end-user experience and satisfaction can be achieved by simplifying how the organization functions, and the business model will be key to recapturing the market. If the investments made with accumulated premiums are turning red, profits can still be maximized through operational efficiency. Implementing data analytics would also give a competitive advantage, along with increased profitability.

But what else would companies gain? Certainly customer loyalty, cost savings and the ability to scale their businesses. At the same time, companies will face bigger challenges. For instance, many companies will operate with a similar modus operandi, and customers or end users will tend to accept all the advertisements they see.

Thanks to the sleeper effect, the source and credibility of advertising messages slowly fade away, and eventually the user is left only with the persuasive message itself.

Coming back from “Let’s reap all the benefits” (which is precisely what companies expect) to “Let’s all reap the benefits” will certainly turn the tables.

Data security can also be a primary policy driver. Companies can upgrade to seamless and robust core systems and implement an insurance business model that focuses more on data security. This model could be the next key winning factor for insurance companies, enabling them to take on greater responsibility towards their customers. An insurance company that can promise data security and deliver on that promise completely will always stand out as a winner. The more an insurance firm anticipates challenges and handles data security diligently through technological readiness, the smaller the failures and their impact. These are the major factors such companies need to focus on.

In my opinion, the design of any innovative technology is everything! The design of security, of data centers, and of big data and its architecture is going to be the focal point in today's world. And the move from “Big is beautiful” to “Small is beautiful”, and eventually to simply “Being beautiful”, will drive the entire ‘post-digital’ era.

Thursday, 20 August 2015

Internet of Things (IoT) for Developing E-Labs

By Swathi K

With the enormous growth of the internet and its usage, the demand for new technologies and applications in every field keeps growing. Users are no longer comfortable doing their chores in person; they would rather have everything at their fingertips through various technologies.

Laboratory experiments are an integral part of science and engineering education, and laboratories have always had a central role in the training of engineers. Automation is changing the nature of these laboratories, and system designers now focus on interfacing tools that provide remote access to laboratory hardware within a computer-supported learning environment. The first kind of distance education consisted of graduate programs intended primarily, if not solely, for part-time students employed full time. Since most graduate programs do not include a laboratory component, the question of how to deliver laboratory experiences did not arise. As undergraduate distance-learning programs started to develop, this problem demanded a solution, and IoT (Internet of Things) is that solution.

A remote laboratory is a workbench that lets us conduct real experiments remotely. With the growth of the internet and IoT, machine-to-machine communication has driven the demand for new technologies that connect billions of devices worldwide.

An E-Lab lets users operate and explore hardware even when they are physically in a different geographical location, connected through their regular mobile devices. An E-Lab is built on wireless sensor technology that enables the user to exchange data with and control the machine, and this can be achieved with a simple combination of mobile services and applications.

This approach presents a novel hardware-system design for a remote/wireless electronics lab. The idea is to use GSM to give students remote access to the physical laboratory apparatus. An application written in Java handles the hardware. Systems of this type are synchronous, giving students a sense of actual involvement in the experiment. A PC running a .NET program interfaces the webcam, and the PC is connected to microcontrollers that control the different units. The PC uses an internet service to e-mail video clips of the hardware setup and its operation to the students' e-mail IDs (a simplified sketch of this step follows). This also provides safety for everyone involved in the learning process. Since GSM mobile phones are now widely used, this is an easy and practical way to access a remote laboratory.
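As a simplified sketch of the e-mail step only: the design above uses a Java application and a .NET webcam interface, but the PC-side delivery of a recorded clip can be pictured with a few lines of Python. The SMTP server, addresses and file name below are hypothetical placeholders.

```python
# Minimal sketch (illustration only) of the lab PC e-mailing a recorded
# video clip of an experiment run to a student. Server, addresses and the
# clip path are hypothetical; the original design uses Java/.NET components.
import smtplib
from email.message import EmailMessage

def mail_experiment_clip(student_email, clip_path, smtp_host="smtp.example.edu"):
    msg = EmailMessage()
    msg["Subject"] = "E-Lab: video clip of your experiment run"
    msg["From"] = "elab-server@example.edu"
    msg["To"] = student_email
    msg.set_content("Attached is the recording of your remote experiment.")

    # Attach the webcam recording captured on the lab PC.
    with open(clip_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="video",
                           subtype="mp4", filename="experiment.mp4")

    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Example: mail_experiment_clip("student@example.edu", "captures/run42.mp4")
```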

Advantages
  • The user can log in and carry out experiments from any geographical location.
  • The user has no time constraints. 
  • A remote lab provides extended access to expensive and highly specialized devices or setups.
  • Economical usage: sharing labs also reduces utilization and maintenance costs.
  • Provides security through user authentication and reduces the risk of catastrophic failure.
  • Provides safety for everyone involved in the learning process; any damage during the experiment will not harm anyone in person (for example, in advanced high-voltage or chemical laboratories). 
  • Increased efficiency: improved communication leads to faster transfer of information.
Future Scope

Live watch: to enhance distance learning, laboratories can provide live streaming or video clips using an internet service.

Microsoft Exchange 2016: The vNext

By Chetan Kumar

The upcoming version of Exchange Server is built on the architecture of Exchange Server 2013, further refined to suit deployments of all scales. The new version of the product has evolved from Office 365 and enables both on-premises and hybrid deployments. The official release of the product is planned for the third quarter of 2015.

Architecture

Exchange Server 2016 consolidates the server roles and is effectively a ‘single role’ product, with the Client Access Server (CAS) role absorbed into the Mailbox role. Microsoft moved the product to a new version to simplify the architecture and improve its capabilities. Refer to Figure 1 for the architecture changes in the new product.


Figure 1- Architecture Design

The Mailbox server role hosts all the components that process, render and store data, and contains the logic to route requests to the correct target endpoint. Even with the elimination of the CAS role, communication between servers still occurs at the protocol level.

Improvements
  • Search Improvements: Search is improved for Outlook clients running in online mode, and the network bandwidth required between active and passive database copies is reduced. 
  • Document Collaboration: Integration with Office Web Apps Server adds the ability to edit documents in Outlook Web Access. 
  • Extensibility: REST APIs are now available in Exchange Server 2016, allowing developers to connect from any platform and simplifying programming against Exchange (see the request sketch after this list).
  • Outlook Connectivity: MAPI/HTTP is the default protocol enabled for users in Exchange Server 2016. 
  • Coexistence with Exchange Server 2013: It is comparatively easier to migrate from, or coexist with, an existing Exchange Server 2013 deployment. 
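To give a feel for the REST APIs, the sketch below lists the newest messages in a mailbox. The endpoint shown is the Office 365 form of the Outlook/Exchange REST API (https://outlook.office365.com/api/v2.0/...); an on-premises or hybrid Exchange 2016 deployment may expose a different base URL, and the access token is assumed to have been obtained through the usual OAuth flow, so treat this as a hedged illustration.

```python
# Hedged sketch: listing recent mailbox messages via the Outlook/Exchange
# REST API. BASE_URL and the OAuth token handling are assumptions; adjust
# them to match the actual deployment.
import requests

ACCESS_TOKEN = "<oauth-access-token>"        # hypothetical placeholder
BASE_URL = "https://outlook.office365.com/api/v2.0"

resp = requests.get(
    f"{BASE_URL}/me/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Accept": "application/json"},
    params={"$top": "5", "$select": "Subject,From,ReceivedDateTime"},
)
resp.raise_for_status()

for message in resp.json().get("value", []):
    print(message["ReceivedDateTime"], message["Subject"])
```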
Key Benefits to Customers
  • Simpler Deployments: With the Exchange Server 2016 architecture, Exchange servers are identical in terms of hardware, configuration and so on, which makes deployment simpler. 
  • Reduced Infrastructure/Software Cost: The new architecture of Exchange Server 2016 reduces the number of physical Exchange servers compared with previous versions, which lowers the investment in infrastructure, operations and software licensing. 
  • Faster Recovery: The new version provides around 33% faster database failover/recovery, further reduces disk IOPS, and its automated database repair detects and fixes divergent database copies. 
Conclusion

Exchange Server 2016 is built on a proven architecture and a flexible, future-ready foundation, continuing the vision of reducing architectural complexity. Customers running the previous version of the product can easily introduce the new version into their existing deployments. The new version helps simplify the messaging environment and increases the availability and resiliency of the deployment.

Thursday, 13 August 2015

Affinity Management Services

By Harpreet Singh Dhillon

Network services facilitate communication between different infrastructure elements such as virtual machines, storage and end users; the conversations between these elements are called ‘affinities’. In today's world, applications change dynamically, but the network still runs as a silo without any intelligence. Protocols such as OpenFlow, NETCONF and SNMP, which are used in software-defined networks, are concerned only with network device performance and do not focus on application performance.

The affinity management service is an API in an SDN network that maps application needs to the infrastructure. The applications' characteristics, such as workloads, communication patterns and bandwidth needs, can be formally described to a centralized SDN controller. The controller then determines what sort of path each application should take: for example, a latency-sensitive application is provisioned over the path with the lowest hop count, while a bandwidth-intensive application gets guaranteed bandwidth along its path (a conceptual sketch of this path selection follows).
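As a conceptual illustration only, and not the OpenDaylight affinity API itself, the sketch below shows how a controller might turn an affinity descriptor into a path choice: latency-sensitive traffic takes the fewest-hop path, while bandwidth-intensive traffic is restricted to links that can guarantee the requested bandwidth. The topology, link capacities and descriptor format are hypothetical.

```python
# Conceptual sketch: affinity-driven path selection in an SDN controller.
# Topology, capacities and the affinity descriptor format are hypothetical.
from collections import deque

# links: (node_a, node_b) -> available bandwidth in Mbps (made-up values)
LINKS = {("h1", "s1"): 1000, ("s1", "s2"): 100, ("s2", "h2"): 1000,
         ("s1", "s3"): 1000, ("s3", "s4"): 1000, ("s4", "s2"): 1000}

def neighbors(node, min_bw=0):
    """Adjacent nodes reachable over links with at least min_bw Mbps free."""
    for (a, b), bw in LINKS.items():
        if bw >= min_bw:
            if a == node:
                yield b
            elif b == node:
                yield a

def fewest_hops(src, dst, min_bw=0):
    """Breadth-first search: returns the path with the fewest hops."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1], min_bw):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def select_path(src, dst, affinity):
    """Map an affinity descriptor to a path-selection policy."""
    if affinity.get("type") == "bandwidth":
        # Only use links that can guarantee the requested bandwidth.
        return fewest_hops(src, dst, min_bw=affinity["min_mbps"])
    # Latency-sensitive (default): simply minimize hop count.
    return fewest_hops(src, dst)

print(select_path("h1", "h2", {"type": "latency"}))                     # h1-s1-s2-h2
print(select_path("h1", "h2", {"type": "bandwidth", "min_mbps": 500}))  # detours around the 100 Mbps link
```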
                     
Fig. - Architectural Concept of Affinity Management Services

Challenges in the Current Infrastructure
  • Lack of intelligence and flexibility: The current network infrastructure does not provide any flexibility or intelligence in network devices. Protocols such as OpenFlow, NETCONF and SNMP are concerned only with network device performance and do not support application performance.
  • Huge business loss: The current network infrastructure faces issues such as traffic congestion and latency, which degrade the performance of business applications and result in large business losses for organizations.
  • Manually intensive management: The traditional network in use today requires manual configuration effort to improve application performance, which is very time-consuming and adds complexity to the network environment. 
Benefits of Affinity Management Services
  • Improved application performance: The SDN controller uses affinity information to optimize the network and improve application performance to a great extent. An affinity describes network resource needs in terms of service level rather than per-device configuration.
  • Easy customization: The affinity management service provides an interface directly to end users, who can customize the network to their application's demands without any administrator intervention.
  • Scalability: Network resources scale dynamically based on application requirements.
  • Automation: Affinity management services bring automation to the network, as the SDN controller gains access to workload communication patterns and can easily automate the optimization of network resource usage.
Conclusion

Enterprises are moving towards a highly dynamic infrastructure that maps to their business applications, generates a strong return on investment and lowers costs. Affinity management services are well suited to this shift, allowing enterprises to utilize the entire infrastructure optimally and achieve high network performance. They map application needs to the infrastructure and provision network resources according to application requirements. The affinity management service refines the way network resources are deployed and dynamically changes the entire architecture of traditional datacenters.

Thursday, 6 August 2015

Driverless Cars and Insurance

By Harshith Ail

Technological innovation rules the present-day world, and individuals encounter new technology in every walk of life. Just when we think innovation has reached its saturation point, we come across a technology that was never thought of before. Driverless cars are one such innovation.

Equipped with a self-driving mechanism, driverless cars have become some of the most awaited vehicles in the present-day world. These cars are currently under road testing, and it may take around a decade before they are available to the general public. However, it is already being forecast that this innovation could be a game changer for many industries.

This new technology could bring challenges for insurance companies and regulatory bodies in the countries where it is put into practice. The California Department of Motor Vehicles has raised the concern that "The technology is ahead of the law in many areas" and states that, as per the law, a human being should operate the vehicle. Policy makers and regulators are therefore debating whether a new law should be introduced for such a breakthrough technology.

Impact on Insurance Companies

Driverless cars could also have an adverse effect on insurance companies, as these cars are known for their low collision rate. Fewer collisions mean fewer insurance claims. This will prompt insurance companies to lower their premiums, which will in turn create a more challenging and competitive insurance market. The blend of driverless cars and Usage-Based Insurance (UBI) could change the way automobile insurance operates today. Moreover, these cars could also affect insurers' ROI, as the companies will now have to concentrate more on research.

Driverless cars work using the internet and built-in cameras, so insurance companies could tie up with these device manufacturers to gather data. Insurers with legacy applications could face more challenges, as it may become necessary for them to upgrade their applications to support these devices; this need translates into an opportunity for the IT industry.

If these cars come into use, the companies that make the driverless systems, rather than the drivers, will have to bear the blame for accidents. On the other hand, the general public is under the impression that this technology can eliminate driver errors such as drinking and driving or not following lane discipline.

Conclusion

This new technology will force insurance companies to adopt different approaches to designing their products. It is too early to decide whether it will be a boon or a bane for insurers, but it can certainly be said that automobile insurance will never be the same again.