
Global Virtual Organizations for Data Intensive Science: Creating a Sustainable Cycle of Innovation
Harvey B. Newman, Caltech
WSIS Pan-European Regional Ministerial Conference, Bucharest, November 7-9, 2002

Challenges of Data Intensive Science and Global VOs
Geographical dispersion: of people and resources
Scale: Tens of Petabytes per year of data
Complexity: Scientific instruments and information
5000+ Physicists, 250+ Institutes, 60+ Countries
Major challenges associated with:
  Communication and collaboration at a distance
  Managing globally distributed computing and data resources
  Cooperative software development and physics analysis
New Forms of Distributed Systems: Data Grids

Emerging Data Grid User Communities
Grid Physics Projects (GriPhyN/iVDGL/EDG): ATLAS, CMS, LIGO, SDSS; BaBar/D0/CDF
NSF Network for Earthquake Engineering Simulation (NEES): integrated instrumentation, collaboration, simulation
Access Grid; VRVS: supporting new modes of group-based collaboration
And Genomics, Proteomics, ...; the Earth System Grid and EOSDIS; Federating Brain Data; Computed MicroTomography; Virtual Observatories
Grids are Having a Global Impact on Research in Science & Engineering

Global Networks for HENP and Data Intensive Science
National and international networks, with sufficient capacity and capability, are essential today for:
  The daily conduct of collaborative work in both experiment and theory
  Data analysis by physicists from all world regions
  The conception, design and implementation of next generation facilities, as global (Grid) networks
"Collaborations on this scale would never have been attempted, if they could not rely on excellent networks" - L. Price, ANL

Grids Require Seamless Network Systems with Known, High Performance: High Speed Bulk Throughput - BaBar Example [and LHC]
Driven by HENP data rates, e.g. BaBar ~500 TB/year; data rate from the experiment >20 MBytes/s [5-75 times more at LHC]
Grid of multiple regional computer centers (e.g. Lyon-FR, RAL-UK, INFN-IT, CA: LBNL, LLNL, Caltech) need copies of data
Data volume growth exceeds Moore's law
Need high-speed networks and the ability to utilize them fully
  High speed today = 1 TB/day (~100 Mbps full time)
  Develop 10-100 TB/day capability (several Gbps full time) within the next 1-2 years
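As a rough check on the figures just quoted (a back-of-the-envelope sketch of ours, not from the talk; 1 TB taken as 10^12 bytes):

```python
# Average line rate needed to sustain a given daily data volume.
def sustained_rate_mbps(terabytes_per_day: float) -> float:
    """Average rate in Mbps to move terabytes_per_day over a full 86,400 s day."""
    bits_per_day = terabytes_per_day * 1e12 * 8      # TB/day -> bits/day
    return bits_per_day / 86_400 / 1e6               # bits/day -> bits/s -> Mbps

print(f"  1 TB/day ~ {sustained_rate_mbps(1):.0f} Mbps full time")          # ~93 Mbps (~100 Mbps)
print(f" 10 TB/day ~ {sustained_rate_mbps(10) / 1e3:.1f} Gbps full time")   # ~0.9 Gbps
print(f"100 TB/day ~ {sustained_rate_mbps(100) / 1e3:.1f} Gbps full time")  # ~9.3 Gbps
```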

Data Volumes More than Doubling Each Year; Driving Grid, Network Needs

HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps

  Year   Production              Experimental             Remarks
  2001   0.155                   0.622-2.5                SONET/SDH
  2002   0.622                   2.5                      SONET/SDH; DWDM; GigE Integ.
  2003   2.5                     10                       DWDM; 1 + 10 GigE Integration
  2005   10                      2-4 X 10                 λ Switch; λ Provisioning
  2007   2-4 X 10                ~10 X 10; 40 Gbps        1st Gen. λ Grids
  2009   ~10 X 10 or 1-2 X 40    ~5 X 40 or ~20-50 X 10   40 Gbps λ Switching
  2011   ~5 X 40 or ~20 X 10     ~25 X 40 or ~100 X 10    2nd Gen. λ Grids; Terabit Networks
  2013   ~Terabit                ~MultiTbps               ~Fill One Fiber

Continuing the Trend: ~1000 Times Bandwidth Growth Per Decade; We are Rapidly Learning to Use and Share Multi-Gbps Networks

AMS-IX Internet Exchange Throughput: Accelerating Growth in Europe (NL)
[Charts: monthly traffic grew 4X in 14 months (8/01 to 10/02), approaching 8 Gbps; hourly traffic on 11/02/02 reached ~5 Gbps.]
HENP & World BW Growth: 3-4 Times Per Year; 2 to 3 Times Moore's Law

National Light Rail Footprint

[Map: NLR footprint - 15808 terminal, regen or OADM sites and fiber routes linking SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OGD, DEN, KAN, DAL, CHI, STR, NAS, ATL, CLE, PIT, WDC, RAL, NYC, BOS.]

Buildout starts November 2002; initially 4 10Gb wavelengths; to 40 10Gb waves

NREN Backbones reached 2.5-10 Gbps in 2002 in Europe, Japan and US

Distributed System Services Architecture (DSSA): CIT/Romania/Pakistan

Agents: autonomous, auto-discovering, self-organizing, collaborative
Station Servers (static) host mobile Dynamic Services
Servers interconnect dynamically; form a robust fabric in which mobile agents travel, with a payload of (analysis) tasks
Adaptable to Web services: OGSA; and many platforms
Adaptable to ubiquitous, mobile working environments
[Diagram: Station Servers and mobile Dynamic Services registering with Lookup (Discovery) Services; registration, listen, remote notification and proxy exchange among Station Servers.]
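The register/discover pattern in the diagram can be sketched in a few lines of Python (an illustration only - the real DSSA/station-server system is agent-based and built on Java/Jini; every name below is invented):

```python
# Minimal sketch of the register/discover pattern used by station servers and
# dynamic services. Illustrative only; all hosts and service names are invented.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class LookupService:
    """Registry that station servers register services with, and agents query."""
    registry: Dict[str, str] = field(default_factory=dict)   # service name -> endpoint

    def register(self, name: str, endpoint: str) -> None:
        self.registry[name] = endpoint

    def discover(self, name: str) -> Optional[str]:
        return self.registry.get(name)

@dataclass
class StationServer:
    """Static host that advertises the (mobile) dynamic services it carries."""
    host: str

    def advertise(self, lookup: LookupService, services: List[str]) -> None:
        for svc in services:
            lookup.register(svc, f"{self.host}/{svc}")

lookup = LookupService()
StationServer("station-a.example.org").advertise(lookup, ["monitor", "analysis"])
StationServer("station-b.example.ro").advertise(lookup, ["analysis"])

# A mobile agent carrying an analysis task discovers where it can be delivered.
print(lookup.discover("analysis"))   # -> station-b.example.ro/analysis (last registrant wins)
```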

Managing Global Systems of Increasing Scope and Complexity, in the Service of Science and Society, Requires a New Generation of Scalable, Autonomous, Artificially Intelligent Software Systems

MonALISA: A Globally Scalable Grid Monitoring System, by I. Legrand (Caltech)
Deployed on the US CMS Grid
Agent-based; dynamic information / resource discovery mechanism
Implemented in Java/Jini; SNMP; WSDL / SOAP with UDDI
Part of a Global Grid Control Room Service
http://cil.cern.ch:8080/MONALISA/

History - Throughput Quality Improvements from US to World
Bandwidth of TCP < MSS / (RTT * sqrt(Loss))
~80% annual improvement: a factor of ~100 over 8 years
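This is the familiar macroscopic TCP throughput bound (Mathis et al.); as a quick illustration of why low packet loss matters on long-RTT paths (our own sketch, with example numbers not taken from the talk):

```python
# Illustration of the bound quoted above: BW < MSS / (RTT * sqrt(loss)).
# The MSS, RTT and loss rates below are example values, not from the talk.
from math import sqrt

def tcp_bound_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Upper bound on single-stream TCP throughput, in Mbps."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

# A transatlantic path: 1460-byte MSS, 120 ms RTT, 0.1% packet loss
print(f"{tcp_bound_mbps(1460, 0.120, 1e-3):.1f} Mbps")   # ~3.1 Mbps
# The same path with loss reduced to 1e-6
print(f"{tcp_bound_mbps(1460, 0.120, 1e-6):.0f} Mbps")   # ~97 Mbps
```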

Progress, but the Digital Divide is Maintained: Action is Required
NREN Core Network Size (Mbps-km): http://www.terena.nl/compendium/2002
[Chart, logarithmic scale from 100 to 100M Mbps-km: NRENs grouped as Leading, Advanced, In Transition, and Lagging (Ro, Ukr near the bottom); countries shown include It, Hu, Fi, Nl, Cz, Es, Ch, Pl, Ir, Gr.]

Perspectives on the Digital Divide: Intl, Local, Regional, Political

Building Petascale Global Grids: Implications for Society
Meeting the challenges of Petabyte-to-Exabyte Grids, and Gigabit-to-Terabit Networks, will transform research in science and engineering
These developments could create the first truly global virtual organizations (GVO)
If these developments are successful, and deployed widely as standards, this could lead to profound advances in industry, commerce and society at large
  By changing the relationship between people and persistent information in their daily lives
  Within the next five to ten years

Realizing the benefits of these developments for society, and creating a sustainable cycle of innovation, compels us TO CLOSE the DIGITAL DIVIDE

Recommendations
To realize the vision of Global Grids, governments, international institutions and funding agencies should:
  Define international IT policies (for instance AAA)
  Support the establishment of international standards
  Provide adequate funding to continue R&D in Grid and Network technologies
  Deploy international production Grid and Advanced Network testbeds on a global scale
  Support education and training in Grid & Network technologies for new communities of users
  Create open policies, and encourage joint development programs, to help Close the Digital Divide
The WSIS RO meeting, starting today, is an important step in the right direction

Some Extra Slides Follow

IEEAF: Internet Educational Equal Access Foundation - Bandwidth Donations for Research and Education

Next Generation Requirements for Physics Experiments
Rapid access to event samples and analyzed results drawn from massive data stores: from Petabytes in 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012
Coordinating and managing the large but LIMITED computing, data and network resources effectively
Persistent access for physicists throughout the world, for collaborative work

Grid Reliance on Networks
Advanced applications such as Data Grids rely on the seamless operation of Local and Wide Area Networks, with reliable, quantifiable high performance

Networks, Grids and HENP
Grids are changing the way we do science and engineering
Next generation 10 Gbps network backbones are here: in the US, Europe and Japan; across oceans
Optical nets with many 10 Gbps wavelengths will follow
Removing regional and last mile bottlenecks, and compromises in network quality, are now all on the critical path

Network improvements are especially needed in SE Europe, South America, and many other regions: Romania; India, Pakistan, China; Brazil, Chile; Africa
Realizing the promise of Network & Grid technologies means:
  Building a new generation of high performance network tools, and artificially intelligent scalable software systems
  Strong regional and inter-regional funding initiatives to support these ground-breaking developments
  Closing the Digital Divide

What HENP and the World Community Can Do
Spread the message: ICFA SCIC, IEEAF et al. can help
Help identify and highlight specific needs (to work on): policy problems; last mile problems; etc.
Encourage joint programs [Virtual Silk Road project; Japanese links to SE Asia and China; AMPATH to So. America]
  NSF & LIS proposals: US and EU to South America
Make direct contacts, arrange discussions with government officials; ICFA SCIC is prepared to participate where appropriate
Help start, and get support for, Workshops on Networks & Grids

Encourage, and help form, funded programs
Help form regional support & training groups [requires funding]

LHC Data Grid Hierarchy
CERN/Outside Resource Ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1
[Diagram:
  Experiment -> Online System at ~PByte/sec
  Online System -> Tier 0 +1 (CERN Center: 700k SI95; ~1 PB disk; tape robot) at ~100-400 MBytes/sec
  Tier 0 -> Tier 1 centers (IN2P3, INFN, RAL, FNAL: 200k SI95; 600 TB) at ~2.5-10 Gbps
  Tier 1 -> several Tier 2 centers at 2.5-10 Gbps
  Tier 2 -> Tier 3 institutes (~0.25 TIPS; physics data cache) at ~2.5 Gbps
  Tier 3 -> Tier 4 workstations at 0.1-10 Gbps]
Physicists work on analysis channels; each institute has ~10 physicists working on one or more channels
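For a rough feel of what these link rates imply for moving data down the hierarchy, a small sketch (the 100 TB dataset size and 60% utilization factor are invented for the example; only the per-link rates come from the figure above):

```python
# How long moving a dataset takes over the tiered links quoted in the figure.
def transfer_days(dataset_tb: float, link_gbps: float, utilization: float = 0.6) -> float:
    """Days to move dataset_tb terabytes over a link_gbps link at a given utilization."""
    seconds = dataset_tb * 8e3 / (link_gbps * utilization)   # TB -> gigabits, then / Gbps
    return seconds / 86_400

print(f"100 TB, Tier0 -> Tier1 at 2.5 Gbps: {transfer_days(100, 2.5):.1f} days")   # ~6.2 days
print(f"100 TB, Tier0 -> Tier1 at 10 Gbps:  {transfer_days(100, 10):.1f} days")    # ~1.5 days
```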

Why Grids?
1,000 physicists worldwide pool resources for petaop analyses of petabytes of data
A biochemist exploits 10,000 computers to screen 100,000 compounds in an hour
Civil engineers collaborate to design, execute, & analyze shake table experiments
Climate scientists visualize, annotate, & analyze terabyte simulation datasets
An emergency response team couples real-time data, weather model, population data

Why Grids? (contd)
Scientists at a multinational company collaborate on the design of a new product
A multidisciplinary analysis in aerospace couples code and data in four companies
An HMO mines data from its member hospitals for fraud detection
An application service provider offloads excess load to a compute cycle provider
An enterprise configures internal & external resources to support e-business workload

Grids: Why Now?
Moore's law improvements in computing produce highly functional end systems
The Internet and burgeoning wired and wireless networks provide universal connectivity
Changing modes of working and problem solving emphasize teamwork and computation
Network exponentials produce dramatic changes in geometry and geography
  9-month doubling: double Moore's law! 1986-2001: x340,000; 2001-2010: x4000?

A Short List: Revolutions in Information Technology (2002-7)
Scalable Data-Intensive Metro and Long Haul Network Technologies
  DWDM: 10 Gbps then 40 Gbps per λ; 1 to 10 Terabits/sec per fiber
  10 Gigabit Ethernet (see www.10gea.org); 10GbE / 10 Gbps LAN/WAN integration
  Metro buildout and Optical Cross Connects
  Dynamic Provisioning - Dynamic Path Building - Lambda Grids
Defeating the Last Mile Problem (Wireless; or Ethernet in the First Mile)
  3G and 4G Wireless Broadband (from ca. 2003); and/or Fixed Wireless Hotspots
  Fiber to the Home
  Community-Owned Networks

Grid Architecture
Coordinating multiple resources: ubiquitous infrastructure services, app-specific distributed services (Collective layer)
Sharing single resources: negotiating access, controlling use (Resource layer)
Talking to things: communication (Internet protocols) & security (Connectivity layer)
Controlling things locally: access to, & control of, resources (Fabric layer)
[Diagram: the Grid layers - Application, Collective, Resource, Connectivity, Fabric - set alongside the Internet Protocol Architecture: Application, Transport, Internet, Link.]
More info: www.globus.org/research/papers/anatomy.pdf

LHC Distributed CM: HENP Data Grids Versus Classical Grids
Grid projects have been a step forward for HEP and LHC: a path to meet the LHC computing challenges

But: the differences between HENP Grids and classical Grids are not yet fully appreciated
The original Computational and Data Grid concepts are largely stateless, open systems, known to be scalable - analogous to the Web
The classical Grid architecture has a number of implicit assumptions:
  The ability to locate and schedule suitable resources within a tolerably short time (i.e. resource richness)
  Short transactions; relatively simple failure modes
HEP Grids are data-intensive and resource constrained:
  Long transactions; some long queues
  Schedule conflicts; [policy decisions]; task redirection
  A lot of global system state to be monitored and tracked

Upcoming Grid Challenges: Building a Globally Managed Distributed System
Maintaining a Global View of Resources and System State
  End-to-end system monitoring
  Adaptive learning: new paradigms for execution optimization (eventually automated)
Workflow Management: Balancing Policy Versus Moment-to-moment Capability to Complete Tasks

  Balance high levels of usage of limited resources against better turnaround times for priority jobs
  Goal-oriented; steering requests according to (yet to be developed) metrics
Robust Grid Transactions in a Multi-User Environment
  Realtime error detection and recovery
  Handling user-Grid interactions: guidelines; agents
Building Higher Level Services, and an Integrated User Environment for the Above

Interfacing to the Grid: Above the Collective Layer
(Physicists') Application Codes
Experiments' Software Framework Layer
  Needs to be modular and Grid-aware: architecture able to interact effectively with the Grid layers
Grid Applications Layer (parameters and algorithms that govern system operations)
  Policy and priority metrics
  Workflow evaluation metrics
  Task-site coupling proximity metrics
Global End-to-End System Services Layer
  Monitoring and tracking component performance
  Workflow monitoring and evaluation mechanisms
  Error recovery and redirection mechanisms
  System self-monitoring, evaluation and optimization mechanisms

DataTAG Project

[Map: DataTAG transatlantic testbed - Geneva (CERN) to STARLIGHT/STAR-TAP Chicago and New York, interconnecting GEANT, UK SuperJANET4, IT GARR-B, NL SURFnet, Fr Renater, Abilene, ESnet and CALREN; 2.5 Gbps Wavelength Triangle.]

EU-Solicited Project: CERN, PPARC (UK), Amsterdam (NL), and INFN (IT); and US (DOE/NSF: UIC, NWU and Caltech) partners
Main aims:
  Ensure maximum interoperability between US and EU Grid projects
  Transatlantic testbed for advanced network research
2.5 Gbps Wavelength Triangle 7/02 (10 Gbps Triangle in 2003)

TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech
A Preview of the Grid Hierarchy and Networks of the LHC Era

[Map: TeraGrid/DTF backplane, 4 X 10 Gbps, linking Caltech, San Diego (SDSC), ANL, UIC, Starlight / NW Univ, Ill Inst of Tech, Univ of Chicago, NCSA/UIUC, Urbana and Indianapolis (Abilene NOC); legend: OC-48 (2.5 Gb/s, Abilene), multiple 10 GbE (Qwest), multiple 10 GbE (I-WIRE dark fiber), multiple carrier hubs. Source: Charlie Catlett, Argonne.]

Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in US, Europe, Asia, South America
Baseline evolution typical of major HENP links 2001-2006
DataTAG 2.5 Gbps research link in Summer 2002; 10 Gbps research link by approx. mid-2003

HENP As a Driver of Networks: Petascale Grids with TB Transactions

Problem: extract small data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
Survivability of the HENP Global Grid System, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
Example: take 800 seconds to complete the transaction. Then:

  Transaction Size (TB)   Net Throughput (Gbps)
  1                       10
  10                      100
  100                     1000  (capacity of a fiber today)

Summary: providing switching of 10 Gbps wavelengths within ~3 years, and Terabit switching within 5-8 years, would enable Petascale Grids with Terabyte transactions, as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields
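The table is just throughput = 8 x size / time; a quick reproduction of its rows (our own snippet; 1 TB taken as 10^12 bytes):

```python
# Reproduce the table above: throughput needed to move S terabytes in 800 s.
for size_tb in (1, 10, 100):
    gbps = size_tb * 8e12 / 800 / 1e9      # TB -> bits, / 800 s, -> Gbps
    print(f"{size_tb:>4} TB in 800 s -> {gbps:,.0f} Gbps")
# -> 10 Gbps, 100 Gbps, 1,000 Gbps
```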

National Research Networks in Japan: SuperSINET
Started operation January 4, 2002
Support for 5 important areas: HEP, Genetics, Nano-Technology, Space/Astronomy, GRIDs
Provides 10 λs: 10 Gbps IP connection; 7 direct intersite GbE links; some connections to 10 GbE in JFY2002
HEPnet-J: will be re-constructed with MPLS-VPN in SuperSINET
Proposal: two TransPacific 2.5 Gbps wavelengths, and Japan-CERN Grid Testbed by ~2003

[Map: SuperSINET WDM paths, IP routers and OXCs linking Tohoku U, KEK, NII Tokyo/Chiba/Hitotsubashi, ISAS, U Tokyo, NAO, IMS, U-Tokyo, Nagoya U, NIFS, Kyoto U, ICR, Osaka U and NIG, with an Internet connection.]

National R&E Network Example - Germany: DFN
TransAtlantic connectivity Q1 2002: 2 X 2.5G now, NY-Hamburg and NY-Frankfurt
ESnet peering at 34 Mbps; direct peering to Abilene and CANARIE expected
UCAID will add another 2 OC48s; proposing a Global Terabit Research Network (GTRN)
[Map legend: STM-4 and STM-16 links]
FSU connections via satellite: Yerevan, Minsk, Almaty, Baikal - speeds of 32-512 kbps
SILK Project (2002): NATO funding; links to Caucasus and Central Asia (8 countries)
  Currently 64-512 kbps; propose VSAT for 10-50 X BW: NATO + state funding

Modeling and Simulation: the MONARC System
The simulation program developed within MONARC (Models Of Networked Analysis At Regional Centers) uses a process-oriented approach to discrete event simulation, and provides a realistic modelling tool for large scale distributed systems
SIMULATION of Complex Distributed Systems for LHC
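To make "process-oriented discrete event simulation" concrete, a minimal event-driven sketch in Python (an illustration of the general technique only, not MONARC itself, which is a much richer Java toolkit; all names and numbers below are invented):

```python
# Minimal discrete event sketch in the spirit of MONARC-style modelling: jobs
# arrive at a regional centre with a fixed number of CPUs, and we measure how
# long they wait. Every number here is invented for illustration.
import heapq

def simulate(n_cpus: int, jobs: list) -> float:
    """jobs = [(arrival_time_s, cpu_time_s), ...]; returns mean wait time in seconds."""
    free_at = [0.0] * n_cpus                 # min-heap: when each CPU next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrival, cpu_time in sorted(jobs):
        cpu_free = heapq.heappop(free_at)
        start = max(arrival, cpu_free)       # job waits if all CPUs are busy
        total_wait += start - arrival
        heapq.heappush(free_at, start + cpu_time)
    return total_wait / len(jobs)

# 200 jobs arriving every 60 s, each needing 900 s of CPU
jobs = [(i * 60.0, 900.0) for i in range(200)]
print(f"mean wait with 20 CPUs: {simulate(20, jobs):7.1f} s")   # 0.0 s: capacity is sufficient
print(f"mean wait with 10 CPUs: {simulate(10, jobs):7.1f} s")   # 2850.0 s: the queue builds up
```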

Globally Scalable Monitoring Service (I. Legrand)
[Diagram: Farm Monitors register with Lookup Services (registration / discovery); an RC Monitor Service and Proxy serve clients (other services) via a Component Factory, GUI marshaling, code transport and RMI data access; data gathered by push & pull, rsh & ssh scripts, and SNMP.]

MONARC SONN: 3 Regional Centres Learning to Export Jobs

[Simulation snapshot (by I. Legrand): CERN (30 CPUs), CALTECH (25 CPUs) and NUST (20 CPUs), connected by ~0.8-1.2 MB/s links with 150 ms RTT, learning to export jobs; per-centre values of 0.83, 0.73 and 0.66 shown at day 9 of the simulation.]

COJAC: CMS ORCA Java Analysis Component

Java3D; Objectivity; JNI; Web Services
Demonstrated Caltech-Rio de Janeiro (Feb.) and Chile

Internet2 HENP WG [*]
Mission: to help ensure that the required
  national and international network infrastructures (end-to-end),
  standardized tools and facilities for high performance and end-to-end monitoring and tracking [GridFTP; bbcp], and
  collaborative systems
are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community
To carry out these developments in a way that is broadly applicable across many fields
Formed an Internet2 WG as a suitable framework: October 2001
[*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Secy: J. Williams (Indiana)
Website: http://www.internet2.edu/henp; also see the Internet2 End-to-end Initiative: http://www.internet2.edu/e2e

Bucharest MAN for Ro-Grid
[Diagram: metropolitan ring linking Victoriei, Romana, Gara de Nord, Universitate, Unirii, Eroilor and Izvor nodes (Cat3550-24 L3 and Cat4000 L3 switches; C7206 and C7513 routers with Gigabit), connecting ICI, Palat Telefoane, the NOC and IFIN at 100 Mbps and 10/100/1000 Mbps, with 1G primary and 1G backup links.]

RoEdu Network (December 1, 2002)
[Map: national topology with 2 Mbps POPs (plus 2 Mbps backups) in the county seats - Satu Mare, Botoșani, Baia Mare, Suceava, Oradea, Iași, Bistrița, Zalău, Piatra Neamț, Tg-Mureș, Cluj, Arad, Bacău, Vaslui, Miercurea Ciuc, Timișoara, Alba Iulia, Hunedoara, Focșani, Sf. Gheorghe, Sibiu, Galați, Brașov, Buzău, Târgu Jiu, Rm. Vâlcea, Reșița, Ploiești, Pitești, Brăila, Târgoviște, Slobozia, Slatina, Tr. Severin, Tulcea, Craiova, Alexandria, Giurgiu, Călărași, Constanța - backbone links of 8, 34 and 155 Mbps toward București, and a GEANT connection.]
