Performance measurement of data transfer services in MAP

IEEE Network ◽  
1988 ◽  
Vol 2 (3) ◽  
pp. 75-81 ◽  
Author(s):  
W.T. Strayer ◽  
A.C. Weaver


2018 ◽  
Vol 6 (1) ◽  
pp. 37-43 ◽  
Author(s):  
Deniss Brodņevs

Remotely piloted operations of lightweight Unmanned Air Vehicles (UAVs) are limited by transmitter power consumption and are always restricted to Line-of-Sight (LOS) distance. The use of mobile cellular network data transfer services (e.g. 3G HSPA and LTE), as well as long-range terrestrial links (e.g. LoRaWAN), makes it possible to significantly extend the operating range of a remotely piloted UAV. This paper describes the development of a long-range communication solution for a UAV telemetry system. The proposed solution is based on (but not restricted to) cellular data transfer services and is implemented on a Raspberry Pi under Gentoo Linux. The goal of the project is to develop a flexible system for implementing optimized redundant network solutions for non-LOS remote control of the UAV.
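The core of such a redundant network solution is selecting, at any moment, the best healthy link among the cellular and terrestrial candidates. A minimal sketch of priority-based failover is shown below; the interface names, the priority scheme, and the probe callback are illustrative assumptions, not details from the paper:

```python
class Link:
    """A telemetry link candidate (e.g. LTE, 3G HSPA, LoRaWAN)."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # lower number = preferred when healthy
        self.healthy = True

def pick_active_link(links, probe):
    """Return the highest-priority link whose probe succeeds.

    `probe` is a callable returning True if the link currently passes
    a keep-alive check (e.g. a ping to the ground station).
    """
    for link in sorted(links, key=lambda l: l.priority):
        link.healthy = probe(link)
        if link.healthy:
            return link
    return None  # no usable link: the UAV should enter a fail-safe mode

links = [Link("lte0", 0), Link("hspa0", 1), Link("lora0", 2)]
# Simulate the LTE probe failing; the next candidate takes over.
active = pick_active_link(links, probe=lambda l: l.name != "lte0")
print(active.name)  # hspa0
```

In practice the probe would be a periodic round-trip check per interface, and switching would also update routing tables rather than just returning an object.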


2021 ◽  
Vol 251 ◽  
pp. 02041
Author(s):  
Ishank Arora ◽  
Samuel Alfageme Sainz ◽  
Pedro Ferreira ◽  
Hugo Gonzalez Labrador ◽  
Jakub Moscicki

In recent years, cloud sync & share storage services, provided by academic and research institutions, have become a daily workplace environment for many local user groups in the High Energy Physics (HEP) community. These, however, are primarily disconnected and deployed in isolation from one another, even though new technologies have been developed and integrated to further increase the value of data. The EU-funded CS3MESH4EOSC project is connecting locally and individually provided sync and share services, and scaling them up to the European level and beyond. It aims to deliver the ScienceMesh service, an interoperable platform to easily sync and share data across institutions and extend functionalities by connecting to other research services using streamlined sets of interoperable protocols, APIs and deployment methodologies. This supports multiple distributed application workflows: data science environments, collaborative editing and data transfer services. In this paper, we present the architecture of ScienceMesh and the technical design of its reference implementation, a platform that allows organizations to join the federated service infrastructure easily and to access application services out-of-the-box. We discuss the challenges faced during the process, which include diversity of sync & share platforms (Nextcloud, Owncloud, Seafile and others), absence of global user identities and user discovery, lack of interoperable protocols and APIs, and access control and protection of data endpoints. We present the rationale for the design decisions adopted to tackle these challenges and describe our deployment architecture based on Kubernetes, which enabled us to utilize monitoring and tracing functionalities. We conclude by reporting on the early user experience with ScienceMesh.
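Federating heterogeneous sync & share platforms ultimately comes down to exchanging share notifications in a common schema over REST. The sketch below builds such a cross-site share payload; the field names and the `/ocm/shares` endpoint path only loosely follow the general shape of Open Cloud Mesh style exchanges and are assumptions here, not the ScienceMesh schema:

```python
import json
import uuid

def build_ocm_share(resource, sender, recipient, provider_url):
    """Build a cross-institution share notification payload.

    The receiving site would accept this via its federation endpoint
    and map `shareWith` to a local user after discovery.
    """
    return {
        "shareWith": recipient,            # e.g. "alice@site-b.example"
        "name": resource,                  # the shared file or folder
        "providerId": str(uuid.uuid4()),   # unique id for this share
        "owner": sender,
        "protocol": {"name": "webdav", "options": {"permissions": "read"}},
        "endpoint": provider_url.rstrip("/") + "/ocm/shares",
    }

share = build_ocm_share("results.csv", "bob@site-a.example",
                        "alice@site-b.example", "https://site-b.example/")
print(json.dumps(share, indent=2))
```

The interesting design point is that the payload names a *protocol* (WebDAV here) rather than a platform, which is what lets Nextcloud, Owncloud and Seafile instances interoperate without knowing each other's internals.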


2021 ◽  
Vol 251 ◽  
pp. 02023
Author(s):  
Maria Arsuaga-Rios ◽  
Vladimír Bahyl ◽  
Manuel Batalha ◽  
Cédric Caffy ◽  
Eric Cano ◽  
...  

The CERN IT Storage Group ensures the symbiotic development and operations of storage and data transfer services for all CERN physics data, in particular the data generated by the four LHC experiments (ALICE, ATLAS, CMS and LHCb). In order to accomplish the objectives of the next run of the LHC (Run-3), the Storage Group has undertaken a thorough analysis of the experiments’ requirements, matching them to the appropriate storage and data transfer solutions, and undergoing a rigorous programme of testing to identify and solve any issues before the start of Run-3. In this paper, we present the main challenges presented by each of the four LHC experiments. We describe their workflows, in particular how they communicate with and use the key components provided by the Storage Group: the EOS disk storage system; its archival back-end, the CERN Tape Archive (CTA); and the File Transfer Service (FTS). We also describe the validation and commissioning tests that have been undertaken and challenges overcome: the ATLAS stress tests to push their DAQ system to its limits; the CMS migration from PhEDEx to Rucio, followed by large-scale tests between EOS and CTA with the new FTS “archive monitoring” feature; the LHCb Tier-0 to Tier-1 staging tests and XRootD Third Party Copy (TPC) validation; and the erasure coding performance in ALICE.
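A transfer orchestrated by FTS is described as a job: source/destination file pairs plus job-level parameters such as the archive-monitoring behaviour mentioned above. The sketch below shows the general shape of such a job description; the field names are illustrative assumptions and do not reproduce the real FTS REST schema:

```python
def build_fts_job(source, destination, archive_monitoring=False):
    """Construct a file-transfer job description.

    `archive_monitoring` mirrors the idea of the FTS "archive
    monitoring" feature: track the file until the tape back-end
    (e.g. CTA) reports it safely archived, not merely copied to
    the disk buffer in front of the tape system.
    """
    job = {
        "files": [{"sources": [source], "destinations": [destination]}],
        "params": {"verify_checksum": True},
    }
    if archive_monitoring:
        # Give the tape system up to a day to confirm the archive copy.
        job["params"]["archive_timeout"] = 86400
    return job

job = build_fts_job("root://eos.example//lhc/run3/file.root",
                    "root://cta.example//lhc/run3/file.root",
                    archive_monitoring=True)
```

For disk-to-tape workflows the distinction matters: without archive monitoring, a "successful" transfer only proves the file reached the disk buffer, not that it is safe on tape.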


2020 ◽  
Vol 245 ◽  
pp. 07053
Author(s):  
Marian Babik ◽  
Shawn McKee ◽  
Pedro Andrade ◽  
Brian Paul Bockelman ◽  
Robert Gardner ◽  
...  

WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area, in partnership with WLCG, is focused on being the primary source of networking information for its partners and constituents. It was established to ensure sites and experiments can better understand and fix networking issues, while providing an analytics platform that aggregates network monitoring data with higher-level workload and data transfer services. This has been facilitated by the global network of perfSONAR instances that have been commissioned and are operated in collaboration with the WLCG Network Throughput Working Group. An additional important update is the inclusion of the newly funded NSF project SAND (Service Analytics and Network Diagnosis), which focuses on network analytics. This paper describes the current state of the network measurement and analytics platform and summarises the activities taken by the working group and our collaborators. This includes the progress being made in providing higher-level analytics, alerting and alarming from the rich set of network metrics we are gathering.
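The alerting and alarming mentioned above reduces, in its simplest form, to comparing recent measurements against a longer-term baseline per network path. A toy version of such a check on throughput samples is sketched below; the window size and drop threshold are assumptions, not the working group's actual tuning:

```python
from statistics import mean

def detect_throughput_drop(history, window=3, drop_ratio=0.5):
    """Flag a path whose recent throughput fell below a fraction of
    its longer-term baseline -- a simplified stand-in for alarming
    on perfSONAR throughput measurements.

    `history` is a chronological list of throughput samples (Mbit/s).
    """
    if len(history) <= window:
        return False  # not enough data to separate baseline and recent
    baseline = mean(history[:-window])
    recent = mean(history[-window:])
    return recent < drop_ratio * baseline

samples = [900, 880, 910, 895, 400, 380, 390]  # sudden drop at the end
print(detect_throughput_drop(samples))  # True
```

Production alarming would additionally correlate latency, packet loss and routing-change metrics across paths before raising an alert, to distinguish a genuine network problem from a busy endpoint.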


Author(s):  
M.F. Schmid ◽  
R. Dargahi ◽  
M. W. Tam

Electron crystallography is an emerging field for structure determination, as evidenced by a number of membrane proteins that have been solved to near-atomic resolution. Advances in specimen preparation and in data acquisition with a 400 kV microscope by computer-controlled spot scanning mean that our ability to record electron image data will outstrip our capacity to analyze it. The computed Fourier transform of these images must be processed in order to provide a direct measurement of the amplitudes and phases needed for 3-D reconstruction.

In anticipation of this processing bottleneck, we have written a program that incorporates a menu- and mouse-driven procedure for auto-indexing and refining the reciprocal lattice parameters in the computed transform from an image of a crystal. It is linked to subsequent steps of image processing by a system of databases and spawned child processes; data transfer between different program modules no longer requires manual data entry. The progress of the reciprocal lattice refinement is monitored visually and quantitatively. If desired, the processing is carried through the lattice distortion correction (unbending) steps automatically.
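The refinement step described above amounts to assigning integer (h, k) indices to measured diffraction spots and least-squares fitting the two reciprocal-lattice basis vectors to them. A toy 2-D version is sketched below; it is a minimal illustration under that assumption, not the program's actual algorithm:

```python
import numpy as np

def refine_lattice(peaks, a, b):
    """Refine 2-D reciprocal-lattice basis vectors against measured
    spot positions.

    peaks : (N, 2) array of spot positions in the computed transform
    a, b  : initial guesses for the two lattice basis vectors
    """
    basis = np.array([a, b], dtype=float)        # rows are a*, b*
    # Index each peak with its nearest integer (h, k) under the guess,
    # since a lattice point sits at h*a + k*b, i.e. [h, k] @ basis.
    hk = np.rint(peaks @ np.linalg.pinv(basis))
    # Least-squares refit: the basis minimising |peaks - hk @ basis|.
    refined, *_ = np.linalg.lstsq(hk, peaks, rcond=None)
    return hk.astype(int), refined
```

In the real program this loop would iterate, re-indexing spots with the refined basis until the parameters converge, and the residuals provide the quantitative progress monitor the abstract mentions.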

