Argonne National Laboratory

GM/CA @ APS

Computing & Tools

Department of Energy Office of Science
GM/CA @ APS Sponsors:
National Institute of General Medical Sciences (NIGMS) and National Cancer Institute (NCI) of the National Institutes of Health (NIH)
 

Hardware

The GM/CA @ APS computing environment has a similar structure at all three beamlines: 23-ID-D, 23-ID-B, and 23-BM. The two ID beamlines, which are equipped with fast Pilatus3-6M and Eiger-16M detectors, share an ultrafast 576 TB storage array running the BeeGFS distributed file system. The BM beamline, equipped with a Rayonix-300 CCD detector, provides 57 TB of shared storage served over NFS. The computing infrastructure of the ID beamline clusters is shown in the diagram below.


 

Users who log on to the beamline workstations have their home directories on the storage array, so all workstations access the same home directory for a given user account. The storage capacity allows us to keep users' data for two weeks after the experiment.

Computers at the ID beamlines are connected to an internal 56 Gbps fiber network, and the workstations accessible from outside the lab (ws2, ws5, and ws6) are connected to a 10 Gbps uplink. The BM subnet is on a 1 Gbps fiber network.

Users are provided with two groups of workstations. One group is allocated for collecting and processing data on the day of the experiment (day-1 workstations). These are blXws1, blXws2, blXws3, blXws6, and blXkeithley, where "X" stands for the beamline number ('1' for IDD, '2' for IDB, and '3' for BM). The other group (day-2 workstations), consisting of ws4, ws5, and ws7, is offered to users who wish to continue processing or backing up their data after the experiment is over. The ID beamlines have one more group of workstations (ws8, ws9, ws10, ws11) that have no monitors or keyboards for user access; these are used for automatic data processing from JBluIce.

All computers run CentOS 7, a free clone of the Red Hat Enterprise Linux operating system, with the MATE graphical desktop environment. All of them have a number of crystallographic data-processing packages installed, including HKL-3000/HKL-2000, Phenix, and PyMOL.

The following computing policies are implemented:

  • Account management is centralized, and all workstations access the same home directories, which reside on the storage array.
  • No disk quotas are enforced on user accounts.
  • All workstations are provided with USB3 connectivity (USB2 devices are accepted as well, but the latter is strongly discouraged because of its low speed). Some workstations also have eSATA ports. Users are encouraged to bring their own external drives for making data backups. More information about backups to external drives is provided on the data management webpage.
  • Users can remotely download collected data to their institutions using the GMCA Globus servers.
  • Users can also transfer their data out via SFTP. The transfer rate varies with the route to the user's institution; the best expected rate is about 7-8 MB/s. Due to tight ANL security restrictions, the option to SFTP in, i.e. to access data from the user's home institution, is available only on ws5, and only for the day of the experiment plus one day. Please ask your host if you need such access.
  • Users' laptops on ANL WiFi can connect to ws2, ws5, and ws6 via the SSH, SFTP, or NoMachine protocols. The WiFi connection also provides access to outside Internet resources such as web pages and e-mail.
  • GM/CA @ APS stores users' data for two weeks from the experiment start date. During this period users are expected to verify that their backup was successful and that the data were safely delivered to their home institution. After two weeks an e-mail is sent to remind users of the scheduled deletion of their data, and two days later the data are automatically deleted from the GM/CA storage array.
  • User accounts are automatically disabled one day after the experiment start date. If you need to extend your access to the day-2 workstations or to the Globus servers, please send a request to your host, who will arrange a temporary exception. Permanent exceptions are not possible.
  • Remote access using NoMachine technology is possible; see the remote-access page for additional details.

 


GM/CA @ APS is an Office of Science User Facility operated for the U.S. Department of Energy Office of Science by Argonne National Laboratory
