Internet2 Global Summit – The Future of the Internet

Larry Peterson, Princeton

Cloud to the network edge: the cloud is scalable and elastic – shouldn’t we do that in the network? Bringing scalability and elasticity into the network itself: if you take the cloud to the edge of the network, you scale the bandwidth of the network by taking commodity processors out of the data center and embedding them in the network. What applications drive this? Content Distribution Networks, delivering streaming data. Network Function Virtualization – taking functions out of purpose-built boxes and putting them in VMs: firewalls, IDS, etc. Software Defined Networking – separating the data plane from the control plane, and moving the control plane out into the cloud.
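
As a rough illustration of that last split (a toy sketch with invented names, not any real controller API): the data plane keeps only a match-action table, while the control plane lives elsewhere – e.g., in a cloud VM – and programs it.

```python
# Toy model of SDN's control/data-plane separation. All names are
# hypothetical; no real controller framework is used here.

class DataPlane:
    """Forwards packets by table lookup only; holds no routing logic."""
    def __init__(self):
        self.flow_table = {}  # match key -> action

    def forward(self, packet):
        action = self.flow_table.get(packet["dst_ip"])
        # Unknown flow: punt to the (remote) control plane.
        return action if action is not None else "punt_to_controller"

class ControlPlane:
    """Runs outside the switch (e.g., as a cloud VM) and programs it."""
    def __init__(self, switches):
        self.switches = switches

    def install_route(self, dst_ip, out_port):
        # A logically centralized decision, pushed to every data plane.
        for switch in self.switches:
            switch.flow_table[dst_ip] = f"output:{out_port}"

edge = DataPlane()
controller = ControlPlane([edge])
print(edge.forward({"dst_ip": "10.0.0.7"}))   # punt_to_controller
controller.install_route("10.0.0.7", out_port=3)
print(edge.forward({"dst_ip": "10.0.0.7"}))   # output:3
```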

First Principles:

Cloud – demonstrates best practices in scaling a function

SDN – treats the control plane as a programmable function

NFV – treats the data plane as a programmable function

The key is to scale functions – databases, object stores, SDN controllers, proxies, firewalls, etc. – as “Network Services”.

A service exports a logically centralized interface to network-wide functionality, while having many points of implementation distributed across the network – e.g., an SDN controller distinct from packet switches, an access control service distinct from firewalls, a CDN service distinct from caches.
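
A minimal sketch of that service pattern, with hypothetical names (not code from the talk): one logically centralized access-control interface, many distributed firewall enforcement points.

```python
# Hypothetical sketch: a logically centralized service interface
# backed by many points of implementation across the network.

class Firewall:
    """One enforcement point somewhere in the network."""
    def __init__(self, site):
        self.site = site
        self.blocked = set()

    def check(self, src_ip):
        return "drop" if src_ip in self.blocked else "allow"

class AccessControlService:
    """The single network-wide interface clients actually talk to."""
    def __init__(self, firewalls):
        self.firewalls = firewalls

    def block(self, src_ip):
        # One logical call fans out to every enforcement point.
        for fw in self.firewalls:
            fw.blocked.add(src_ip)

sites = [Firewall("campus-a"), Firewall("campus-b")]
acl = AccessControlService(sites)       # the centralized interface
acl.block("192.0.2.99")
print([fw.check("192.0.2.99") for fw in sites])  # ['drop', 'drop']
```

The same shape fits the other pairs mentioned: controller/switches, CDN service/caches.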

Future of Programming – building a service means both software and an operating spec, implemented as a network of VMs mapped onto cloud physical infrastructure. Operationalizing software is the hard part, not writing it. Services can themselves be composed into composite services.

Syndicate Storage Service – caches and request routers (CDN) in the network; persistent stores in S3, Box, GenBank, etc. If you can harness both sides, you don’t have to build separate proxies for distribution. It uses a metadata service built on Google App Engine – applying the idea of Unix pipes to the network.
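
A toy rendering of the pipes idea (illustrative names only, not Syndicate’s actual API): cache stages compose in front of a persistent store the way stages compose in a shell pipeline.

```python
# Hypothetical sketch: read-through cache stages stacked like pipes
# in front of a backing store (standing in for S3, Box, GenBank...).

class PersistentStore:
    """Stands in for a persistent backend such as S3 or GenBank."""
    def __init__(self, data):
        self.data = data

    def read(self, key):
        return self.data[key]

class ReadThroughCache:
    """A CDN-style cache stage that can be stacked like a pipe."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}

    def read(self, key):
        if key not in self.cache:                 # miss: pull upstream
            self.cache[key] = self.upstream.read(key)
        return self.cache[key]                    # hit: served locally

# store | regional cache | campus cache
store = PersistentStore({"chr1": "ACGT..."})
pipeline = ReadThroughCache(ReadThroughCache(store))
print(pipeline.read("chr1"))  # first read fills both cache stages
```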

Mike Hluchyj, CTO Carrier Products, Akamai – Network Functions Virtualization – Network Operator Perspective

NFV – today you build networks by putting purpose-built boxes into racks and connecting them. The better way is to virtualize those functions: the rack then holds off-the-shelf servers and switches, and the network functions exist in software.

Why? Save money and make money; reduce equipment costs and power consumption by exploiting economies of scale; reduce time-to-market by minimizing deployment cycles; gain greater flexibility to scale services up or down or evolve them; open up to a virtual appliance market and pure software entrants; get opportunities to trial and deploy innovative new services at lower risk.

Fields of NFV Application: switching elements; mobile network nodes; home routers and set-top boxes; tunneling gateway elements; traffic analysis; service assurance; NGN signaling; converged and network-wide functions; application-level optimization; security functions.

NFV Enablers: leverage technologies developed for cloud computing; cloud-based orchestration; open APIs (e.g., OpenFlow).

The ETSI NFV Industry Specification Group has been formed, with over 150 network operators participating.

Looking Forward: the initial focus of operator NFV deployments will benefit their own network infrastructure and services; down the road, operators plan to offer VMs at the network edge to third-party service providers in an IaaS model.

Glenn Dasmalchi – Juniper Networks – Enterprise IT Transformation

The network is crucially important to what enterprises want to build. Enabling the distributed use case, i.e., hybrid cloud. Not everyone will run to the public cloud, as it’s not always appropriate. Think a private, dynamic information framework controlled on premises, coupled with the ability to reach out to external services.

For an effective cloud infrastructure to deliver value, the network is critical. It’s not just a single technology: a lot of attention within data centers is being paid to “fabrics”, and in transport layers to new technologies like NFV and PCE. End systems and servers also need to participate in the architecture.

Role of the network – key enabler of non-functional requirements: Cost, Agility, Performance, Reliability, Scalability, Extensibility, Supportability, Interoperability, Visibility, Security.  

A lot of what we see happening in the network today is about making the network an agile platform. The value going forward is in the network providing the underpinnings for cloud computing. 

Internet2 Global Summit – I2 Network Update

Rob Vietzke, VP I2 Network Services

Abundant bandwidth: 100G+ nationwide backbone

Programmable: native OpenFlow with virtual slices

Support for data-intensive science

Interconnected with the public Internet

A full 18 months of solid operating experience

45 petabytes per month on the network, expecting to hit 50 PB/month this month.
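
A quick back-of-the-envelope check (my arithmetic, not from the talk) of what that volume means as a sustained rate:

```python
# 45 PB/month expressed as an average sustained bit rate.
petabytes_per_month = 45
bits = petabytes_per_month * 1e15 * 8      # bytes -> bits
seconds = 30 * 24 * 3600                   # ~one month
print(f"{bits / seconds / 1e9:.0f} Gbps")  # ~139 Gbps average
```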

Upgraded the first couple of links from 100 Gb/s to 200 Gb/s.

New Network Initiatives: NEREN/NOX for ME, VT, NH; Arizona Sun Corridor; CAAREN (DC area); NIH.

19 connections upgraded to 100G this year, 18 new AL2S nodes

Innovation Platform Success Stories – 31 campuses, 10 regionals, 76 NSF CC-NIE awardees; I2 has run five “Operating Innovative Networks” workshops.

Big Data, Cloud, SDN, and HPC collaborations

Network analytics – I2 Deepfield network analytics allows designated member admins to analyze I2 use; an enhanced perfSONAR deployment assures network and cloud resource performance.

TransitRail Peering Service – Strategic relationship with key content and network peers enables innovation. 

Global Partnerships – Global Content Delivery Pilot out of Singapore, with Duke, Chicago, Florida International, and NYU.

Global Partnerships – Arabian Global Educational Open Exchange (AGE-OX) to open in Fujairah, UAE, led by the national network Ankabut. 10G connectivity to I2 in Singapore and future links to I2 in Europe/Americas.

Internet2 Global Summit – Net+ General Update

Jerry Grochow, Internet2

Key point – a community-started program, which grows as the community has the capacity to make it grow.

Campus Members: identify challenges they need solved and propose offerings (Inquiry); initiate and support research; evaluate solutions and offerings; sponsor service validation; support broad adoption.

Range of portfolio: Trust & Identity; Infrastructure; SaaS; Voice & Video; Digital Content

Introducing the QuickStart program, with the goal of a 45-60 day pipeline.

QuickStart Requirements:

  • An identified sponsoring CIO
  • Membership in I2 and InCommon
  • Shib/SAML (within 6 months)
  • Connection to the R&E network (within 6 months)
  • Completion of the Net+ Cloud Control Matrix
  • Commitment to enterprise-wide offerings and best pricing
  • Commitment to establish a service advisory group within the first 6 months and formal Service Validation
  • Acceptance of I2 Net+ template business and customer agreement terms and the community BAA (for HIPAA) – with minimal negotiation
  • Offerings limited to a 2-year renewable term, with customer agreements between provider and consuming institution

The goal is to have service providers accept the Net+ agreements with as little modification as possible. They’d like service validations to fit within 3-4 months; some go longer, and some might go quicker. I2 is shifting things a bit to try to get as much work done before involving the universities, asking providers to prepare the matrices beforehand.

Formalizing regional and global partner programs. 

Net+ Governance:

  • Program Advisory Group – CIOs, regional network leaders, providing overall direction to the program.
  • Service Advisory Groups (per service) – service provider and university working councils; product and service review.
  • [NEW] Procurement Advisory Board – university chief procurement officers, linking Net+ to the procurement community’s needs. Internet2 is joining the National Association of Procurement Officers.

 

Internet2 Global Summit – Tuesday Keynote: Shirley Ann Jackson (RPI), Delivering on the Promise of Big Data and Big Discovery

Bill Hagan – Microsoft – Windows Azure – No egress charges and special pricing (Capex). Support for SAML 2.0. New Office API for O365 and Azure.

Shel Waggener – Net+ status

Net+ – 86 campuses have participated in at least one service validation. 300 campuses subscribe to services, 620 subscriptions. Tracking about $200 million in benefit.

Less than 50% of initial incubator, inquiry, and evaluation efforts reach Service Validation.

7.8 million identities in InCommon institutions. 425 Academic participants, 160 sponsored partners, 2000 registered service providers.

Dave Lambert 

I2 President’s Leadership Awards go to Tracy Futhey (Duke), Kevin Morooney (Penn State), and Brent Sweeney (Indiana). The Richard Rose Award goes to Pat Burns (Colorado State).

Shirley Ann Jackson, President, RPI – Delivering on the Promise of Big Data and Big Discovery

We are generating enormous amounts of data. We expect networks to move this forward. In addition to scientific insights, we need to generate knowledge from the tremendous amount of web data – this may be the biggest challenge. Unstructured data generated by people, plus the data generated by sensors. There is work going on to create standards for sensor data. We analyze less than one percent of the data we capture, even though the answers to many challenges live within that data.

Jefferson Project, instrumenting and monitoring Lake George. 40 sensing platforms monitoring 25 variables, 188 million operations per second will take place when the supercomputer models the lake. 

Configurable networks and safe harbors for activity are increasingly important.

Volume and Velocity – because of the mismatch in capacity between networks and supercomputers, researchers are still mailing data on disks. The mismatch is likely to grow as we examine important research questions at increasing scale. These challenges demand exascale computing. Greater intelligence upstream is part of the answer: RPI is using IBM Watson to analyze Lake George’s topographical features. The data challenge is also about data in motion. Can we embed more artificial intelligence in the network itself to decide what data should be moved? Can we extend software-defined networking to include cognitive abilities? Data is a realm where one investigator’s trash is another’s treasure – junk DNA is a prime example, now seen as carrying significant information about disease.
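
Some illustrative arithmetic (mine, not the speaker’s) on why disks still travel by mail: at realistic sustained rates, a petabyte takes days to months to move over the network, while overnight shipping takes about a day regardless of size.

```python
# Time to move a 1 PB dataset end to end at various sustained rates.
dataset_bits = 1e15 * 8  # 1 PB in bits

for gbps in (1, 10, 100):
    days = dataset_bits / (gbps * 1e9) / 86400
    print(f"{gbps:>3} Gbps sustained: {days:5.1f} days")

# 1 Gbps ~92.6 days, 10 Gbps ~9.3 days, 100 Gbps ~0.9 days: only
# near 100 Gbps end-to-end does the network clearly beat a courier.
```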

Variety – different groups of researchers work together in far-flung collaborations, and international collaborations will become even more prevalent. Researchers can’t identify which of their peers have which data, leading to duplication of effort and slowing discovery. It should be easier to search metadata about datasets, tools, and the researchers who created them. We need a yellow pages for data.

Peter Fox (RPI) – the Deep Carbon Observatory is creating a portal to find information about data and tools. Down the road, it’s possible to imagine Watson finding data relevant to specific research efforts.

Veracity – what is the provenance of the data? Who is permitted to use it, correct it, add to it? Transparency about the tools will help. Can cognitive and semantic tools be embedded in the network to help with this? We are connected by our exposures and exposed by our connections. It is important that greater resiliency be built into our networks.

Walls between disciplines are crumbling – we are challenged in how we teach and research. In this era of big data and big science, universities must serve as the crossroads for this research – the new polytechnic. The most important networks in discovery and innovation are human. The greatest challenge all of us in academia face is fostering the right connections. 

Panel discussion with Shirley Jackson, Michael McRobbie (President, Indiana), and Philip DiStefano (Chancellor, Colorado, Boulder).

McRobbie – The days of Internet2 activities being unique are over: pretty much every area of scientific research is now digital to a significant degree. Classicists are using vast amounts of data as well as physicists – e.g., the reconstruction of Hadrian’s Villa in Rome. We have to be a little humble in our understanding of researchers: they want to do their work and they want things to work, which has implications. There’s an enormous imperative for Internet2 to keep things working at the cutting edge. There’s no area of research that doesn’t take place in an international context – that’s been driven by the Internet. He feels particularly strongly about the preservation of knowledge, which hasn’t been given enough thought. Universities are the only institutions that have a hope of making that happen. We have an enormous responsibility to think about preserving research data for centuries and millennia.

DiStefano – To do interdisciplinary research, one of the keys is technology at the core, with faculty from various disciplines working together. At CU they are trying to break down barriers and form interdisciplinary institutes.

Jackson – Solving problems in one discipline requires input from other disciplines. How do we encourage and incent people to come together, out of their comfort zones? Most researchers don’t want infrastructure that becomes a burden. We need intelligent data management and culling as part of computational modeling of complex systems.

McRobbie – It’s essential to ask researchers what they want. They may not be able to describe it at a technical level, but at least conceptually they can. At IU, a breakthrough moment came during disputes at I2 about what researchers want: they got together a group of 15 researchers and asked them. What came out was the ability to store and preserve data. HPC and networking were there, but so was being able to store data and extract things from it in the future. We need to build up the capacity to store and manage tens of petabytes of data.

Internet2 Global Summit – Globus Online

Steve Tuecke, University of Chicago

Big data transfer and sharing … directly from your own storage systems. 

The idea: connect your own storage systems to the Globus service, which orchestrates transfers among them.

Use cases:

  • “I need a good place to store or back up my big research data at a reasonable price.” Campuses are struggling with this – installing storage systems, putting the Globus UI on top, and placing them in the Science DMZ.
  • “I need to quickly move or mirror portions of my data to other places, including my campus HPC system, lab server, desktop, laptop, XSEDE, or the cloud.”
  • “I want to publish my data so that it’s available and discoverable long-term.”
  • “I want to archive my data in case it’s needed sometime in the future.”
  • “I need a way to easily and securely share my data with my colleagues at other institutions.”
Globus is SaaS, with web, command-line, and REST interfaces. Globus never sees the data – it flows directly between the systems, with Globus orchestrating.
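
For flavor, here is roughly what driving that orchestration looks like through Globus’s later Python SDK (globus_sdk, which postdates these notes); the token, endpoint UUIDs, and paths are placeholders.

```python
# Sketch using the real globus_sdk library; identifiers are fake.
import globus_sdk

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer("TRANSFER_TOKEN")
)

# Describe a transfer between two Globus endpoints by UUID.
task = globus_sdk.TransferData(
    tc, "SRC-ENDPOINT-UUID", "DST-ENDPOINT-UUID", label="mirror run"
)
task.add_item("/project/genomes/", "/scratch/genomes/", recursive=True)

# Globus schedules and retries the task, but the bytes move directly
# between the two endpoints and never pass through the service.
result = tc.submit_transfer(task)
print("task id:", result["task_id"])
```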

Globus-connected resources on campus: research computing center; department/lab storage; campus-wide home/project file systems; mass storage systems; science instruments; desktops and laptops; custom web apps; Amazon S3.

To a first approximation, all non-classified national labs and facilities are connected to Globus.

Globus is a non-profit service provider to the non-profit research community.

Globus provider subscriptions:

  • Managed endpoints: priority support, management console, usage reports, mass storage system optimization, hosted shared endpoints, integration support.
  • Plus subscriptions: create and manage shared endpoints, personal transfers.
  • Branded web site skinning.
  • Alternate identity provider (InCommon is standard).

 

Net+ Software as a Service Portfolio

Dana Voss, Net+ Program Manager

Blackboard – in Service Validation, with UMBC and UT Arlington as sponsors; Cornell, Nebraska Lincoln, and VCU have joined. Looking to have the first phase completed by the end of June, in time for Blackboard World.

Canvas – General Availability, sponsored by the University of Washington. The advisory board meets monthly and will meet in June at InstructureCon.

Kuali Coeus – offered by rSmart, sponsored by Portland State. Grant and research management. In Evaluation; looking for 2-3 more interested universities.

Desire2Learn – Early Adopter. Kicked off Service Validation in February and signed the contract last week. Sponsored by the University of Akron and the University of Arizona. Validated two services: the learning platform and lecture-capture software.

ICE Health Systems (electronic health record system for dental schools) – in Service Validation, sponsored by Michigan. The community helped provide requirements to enhance the software to meet the needs of dental schools. Hope to have it finalized by the end of April. UNC Chapel Hill goes live in September; Michigan and Pittsburgh by the end of 2014.

LabArchives – Lab notebook software, sponsored by Cornell. In Service Validation, kicked off in early April. Open to early adopters in Q3. 

One – Offered by Indiana University – their enterprise portal. Have kicked off service validation, looking for additional participation. 

Merit – Infrastructure platform and security as a service. Mail collaboration suite, virtual data centers, list manager, etc. In General Availability. 

ServiceNow – Service Validation, with Washington as sponsor. Things are on hold, but hope to resume soon.

Starfish – Student Retention/Advising – Evaluation, sponsored by Nebraska Lincoln. Automated student tracking, early alert, online appointment scheduling and assessment. 

Internet2 Global Summit – Chris Vein keynote

Chris is Chief Innovation Officer for Global Information and Communications Technology Development at the World Bank

The World Bank is focused on two specific goals: ending extreme poverty in the world by 2030 (people living on $1.25 a day or less – several billion people), and boosting the shared prosperity of the bottom 40%.

A 4-degree-centigrade rise in world temperature would cause mass upheaval and migrations to cities. If that happens in the next 5-10 years, we will see conflicts emerge, if not war. The Bank is putting $28 billion each year into lending to governments to build infrastructure, connectivity, and innovation.

The world is at a tipping point. It will reach a population of 9 billion people this century. You have to feed them – we will have to become 60% more efficient in how we produce food. 4.5 billion people don’t have access to proper sanitation, 1.3 billion don’t have electricity, and 2.5 billion don’t have clean fuel to cook with. We’re experiencing rapid urbanization, causing massive shifts, mostly in developing countries. A glimmer of hope: most people moving into cities are moving to cities of 500,000 or less.

Disruption – we cannot do things the same old way. How do you innovate with no resources? If you believe Cisco that there will be 50 billion objects connected to the Internet, and look at the 6.8 billion cell phones in the world (more than toilets), you see that over the past 20 years we have moved to the idea of individual empowerment. Individuals are increasingly empowered to take greater control of how they live, work, and play.

Institutions as the platform – governments open up data and let other people create value around it. It’s huge for governments to let go and let people see.

Community is the capacity – let the community solve problems

Customer is the approach – governments have lost sight of who they serve, forgetting that it’s all about the individual. We need to care about user-centered design. If you invite users in at the beginning, things go better, cheaper, and faster.

We get so caught up in procurement rules that we fail to connect the people who identify a problem with the people who can create solutions. The Entrepreneurship in Residence program in San Francisco develops products and services for the public-sector market.

Creation – “In an analogue world, policy dictates delivery. In a digital world, delivery informs policy” – Mike Bracken. RFPs take a year or two to solve a problem that may no longer exist, or at a price you can no longer afford. If we iteratively build solutions, we can take them to the policy makers and say, “here, this is what works.” The UK government is leading the world in this. He shows a drawing of a new model for World Bank innovation involving academia, citizens, and the private sector.

Case study – a project in Tasmania called Sense-T, a project of 39 research and educational organizations in Australia and New Zealand. Sensor data has been gathered for 20 years, kept by the University of Tasmania. The project is intended to provide decision-support tools for people in all sectors of the Australian economy. Example: oysters. They put sensors on oysters, tracking respiration rate, to understand what warming and pollution are doing in the water. This is saving oyster farmers up to $150,000 per day, because they’re no longer being shut down just in case. Imagine if we could track that oyster all through the supply chain – it could help provide safety and security of the food supply.

How do we scale it?

Connect the next billion – do we create open networks? Do we work with Google on their fiber? Do we use balloons? Satellites? R&E networks are part of the solution.

Transform bureaucracy.

Innovate innovation itself – we think we know best, and 9 times out of 10 we are wrong.

