A2C2 systems have a great deal of capacity, but CPU and Storage resources are limited.
Our full policy on CPU Time can be viewed here.
Our full policy on Storage allocation can be viewed here.
Allocations of CPU time on A2C2 computer systems are made by the A2C2 Faculty Allocation Committee. Users may request time for up to one year. Committee allocations are made above and beyond the requestor's base allocation. Approximately 2 million CPU-hours will be available for allocation. Refer to A2C2's current allocation policy for more information.
To submit a proposal for CPU Hours, click here.
There are three tiers of allocations for CPU time:
Base Allocations are given to all ASU researchers who apply. For the 2007 fiscal year, the base allocation is 10,000 CPU-hours for Fulton School researchers (and other participating units) and 5,000 CPU-hours for all other ASU researchers. Allocations are tracked by principal investigator (i.e., a faculty member's usage includes that of all of their graduate students and postdocs). A base allocation includes 10GB of home directory storage space.
Committee Allocations are granted through a proposal process to a faculty committee. For directions on the proposal process, click here. In FY08, the committee will allocate approximately 2 million CPU-hours. Requests can be for up to 200,000 additional hours beyond the base allocation. The committee judges allocation requests based on research merit, the sponsored research supported, the potential for new research to be developed, and results from past allocations.
Purchased Allocations are additional CPU time a faculty member acquires through grants or charges. Purchases can take the form of direct charges for cycles or through contributions of hardware (there is flexibility in the type of charge to support most agency grant programs). A2C2 staff are happy to work with faculty to incorporate the most appropriate mechanism in research proposals. The rate for direct cycle charge for FY10 is $0.025 per CPU-hour.
To check the CPU-hour allocation allotted to you, run the allocation reporting command. The allocation will be listed by project.
CPU Hours are the number of CPUs used by a job multiplied by the amount of time the job takes to run. All times are rounded to the nearest second. For example, a job that takes 3 hours to run on 64 processors uses 3 × 64 = 192 CPU-hours.
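The arithmetic above can be checked directly in the shell:

```shell
# CPU-hours = processors x walltime hours.
# The 64-processor, 3-hour job from the example above:
procs=64
hours=3
echo $((procs * hours))   # prints 192
```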
When you submit a job you must specify the amount of walltime you expect the job to take. The system checks whether you have enough hours, then reserves that number of hours (192 CPU-hours in our example). When the run completes, the actual amount of time used is deducted and the reservation is released. It is a good idea to leave a little extra time in your estimate. Note the word little, because any time that is reserved cannot be used by another job. For example, if you have 100 hours in your account and two jobs that each request 100 CPU-hours (number of processors × walltime), only the first job submitted will run, even if each job uses only 1 hour.
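On a PBS-style scheduler (a sketch only; the directive names and resource syntax here are assumptions to check against Saguaro's own scheduler documentation), the walltime estimate from the example above might be requested with jobscript directives like:

```
#PBS -l nodes=8:ppn=8        # 64 processors total (node layout is assumed)
#PBS -l walltime=03:00:00    # 3 hours of walltime -> reserves 192 CPU-hours
```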
Jobs can run slightly over their walltime estimates before being killed.
If you have a job whose walltime is hard to estimate (say, a convergence problem), checkpoint the job so it can be restarted. Since the job can be restarted from about where it left off, you can control how much time is reserved and not worry about a particular sequence taking longer than expected.
If you are running code for a different project than your default, add the project accounting directive to your jobscript.
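On PBS-style schedulers the accounting option is commonly -A; assuming Saguaro follows that convention (both the directive name and the project name below are assumptions), the jobscript line would look like:

```
#PBS -A myotherproject    # charge this job to a non-default project (name assumed)
```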
Storage is also available on a leased basis. Archive storage is available for a one-time charge of $1,500 per terabyte for a 5-year period. High-speed scratch storage and additional home directory space are also available.
Access to larger memory: Some jobs require more than 1GB of memory per processor. Larger memory amounts are available, but are deducted at a higher rate against the user's allocation. Jobs requesting 2GB of memory per processor are charged at a 50% premium (1.5 CPU-hours charged per actual hour used). A limited subset of nodes can support even larger memory; when these nodes are available, jobs may request up to 8GB per CPU at an additional 10% premium per GB above 2 (e.g., 4GB per CPU is charged at 1.7× the normal rate, 6GB per CPU at 1.9×).
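The premium schedule above can be expressed as a small helper (the function name is hypothetical; the rates are those listed in the policy):

```shell
# Hypothetical helper: compute the CPU-hour charge multiplier for a
# requested memory size in GB per processor, per the premiums above.
charge_multiplier() {
  awk -v gb="$1" 'BEGIN {
    if (gb <= 1)      m = 1.0                   # base rate
    else if (gb <= 2) m = 1.5                   # 50% premium at 2GB/CPU
    else              m = 1.5 + 0.1 * (gb - 2)  # +10% per GB above 2
    printf "%.1f\n", m
  }'
}
charge_multiplier 4   # prints 1.7
charge_multiplier 6   # prints 1.9
```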
Increasing Allocations: Allocations will be increased for jobs that allow additional flexibility. Starting January 1st, 2007, a Condor service will be available from A2C2. Jobs submitted to the Condor pool must be compiled against the Condor libraries, and may be preempted or moved to other clusters. Users will receive a 50% allocation bonus for submitting to this queue.
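For orientation, a Condor submission might look like the sketch below. Condor's standard universe relinks a program against the Condor libraries (via condor_compile), which is what enables the preemption and migration described above; the file names here are assumptions.

```
# Relink the program against the Condor libraries (standard universe):
#   condor_compile gcc -o my_sim my_sim.c
# Submit description file (my_sim.sub), submitted with: condor_submit my_sim.sub
universe   = standard
executable = my_sim
output     = my_sim.out
error      = my_sim.err
log        = my_sim.log
queue
```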
Your account gives you access to a limited amount of persistent disk space in /home/yourusername (your home space) that can be accessed from all the nodes. This space should be used for your jobs and data. The amount of disk space you are allotted is called your quota.
The quota command tells you how much disk space you are using (the blocks column) and how much you have left (the quota column). When you run out of space you will no longer be allowed to write to the disk, and you will need to remove some files before you can write again. Running out of quota can kill your jobs, so keep an eye on it.
To avoid killing jobs, we allow you to exceed your quota slightly for a short period of time. Once that period is over, writes will not be allowed until you are back under your quota.
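A quick way to keep an eye on usage from the command line (the fallback command is a rough sketch; it reports total home-directory size rather than quota limits):

```shell
# If disk quotas are in use, the standard Unix quota command reports
# the blocks and quota columns described above:
command -v quota >/dev/null && quota -v
# A rough fallback: total size of your home directory
du -sh "$HOME"
```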
The University Technology Office (UTO) and ASU Advanced Computing Center (A2C2) are proud to announce the ASU research
storage service. This service is designed to provide affordable,
reliable, secure, large-scale tiered storage to all ASU researchers for
day-to-day performance of research work or large-scale archives.
Windows, Unix/Linux, and Macintosh clients will be supported.
What is this for? This system is designed to support storage for research data. The service can be used for archives of large datasets, digital library applications, or day-to-day use for students and researchers.
How do I access the storage? On Windows-based systems, you can access your storage as a Windows shared drive by mapping the share address we give you as a network drive. Windows will then map the storage space as a drive, and the space is accessible like any other files on the system. On Mac OS X systems you can access the storage through Finder.
On Linux or Unix-based systems, you can access your space through Samba, or, for servers in managed machine rooms, via NFS. NFS mounts work normally, but some extra configuration will be needed for storage space with both CIFS and NFS exports.
Samba mounts are done with the CIFS filesystem. The share will have the permissions of whoever’s credentials were used to mount it. This does not work like a traditional NFS mounted system. Each user will need their own mount point and will need to supply their password at the time of the mount. The mount command is:
mount -t cifs -o username=USERNAME,workgroup=asuad //SERVERNAME/SHARENAME MOUNTPOINT
Replace USERNAME with the user's ASUAD username and change the share address we give you from \\SERVERNAME\SHARENAME to
//SERVERNAME/SHARENAME. Because the mount only acts as one user, each user will need their own, self-mountable mount point. These can be set up in /etc/fstab with a line like the following for each user:
//SERVERNAME/SHARENAME /home/UNIXUSER/storage cifs username=ASUADUSERNAME,workgroup=asuad,user,noauto 0 0
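A small sketch that generates such an fstab line for a given user (the function name is hypothetical, and //SERVERNAME/SHARENAME is the placeholder share address from above):

```shell
# Hypothetical helper: print the /etc/fstab entry for one user's CIFS mount.
# Arguments: UNIXUSER (local account), then ASUAD username.
fstab_line() {
  printf '//SERVERNAME/SHARENAME /home/%s/storage cifs username=%s,workgroup=asuad,user,noauto 0 0\n' \
    "$1" "$2"
}
fstab_line jdoe jdoe
```

With the entry in place, each user can run `mount /home/UNIXUSER/storage` themselves and supply their password at mount time.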
Coming soon: We are looking into NFSv4-based exports for Unix or Linux systems outside of managed machine rooms. We are also looking into OpenAFS access for Windows, Mac, or Unix clients.
What makes this reliable? All of your files will be stored on redundant disks as protection against hardware failures. Once a day, "snapshots" of the filesystem are made to protect against files being accidentally deleted. Finally, a full copy of the filesystem is made to a second complete storage system in another building several times a week, to protect against disasters.
What is tiered storage? The storage system is divided into two tiers: the top tier is made up of high-speed Fibre Channel disk drives, and the second tier consists of high-capacity SATA disk drives. A storage virtualization system makes the tiers appear as a single space to the user, but transparently moves the most-used files to the fastest disks and balances loads across the many fileservers that comprise the full system.
How do I get access? Contact the ASU Advanced Computing Center at email@example.com.
How do CIFS/NFS shares work? The CIFS share works like any other CIFS share; permissions are handled in the usual way. The NFS view of the share uses the CIFS permissions to grant or deny access, though the Unix permissions do not look like they should. DO NOT change permissions of files through the NFS export (i.e., do not use chown, chgrp, or chmod); only do that through the CIFS export. Besides permissions, NFS shares work as expected. Permission mapping is done by Unix ID number, so we need to set up this mapping before you can use an NFS export of your share.
Saguaro is accessed using ssh, a secure system that provides interactive shell (command-line) access, file transfers, and X-Windows tunneling. Linux, UNIX, and Apple OS X systems include an ssh client named 'ssh'. Windows users must install a client. Suggested clients are the ASU site-licensed client (search for "ssh" at http://myapps.asu.edu/), PuTTY, or OpenSSH.
Shell Access is accomplished by connecting to the login node from your system with a command like ssh username@<login node address>, or by using a graphical client.
File Transfers can be done using scp or sftp. scp works like the cp command, but allows you to copy files to a remote system. To copy a file to Saguaro using scp, use a command like

scp filename username@<login node address>:

replacing filename with the name of the file to copy, and username with your username. Please notice the ':' at the end of the command; this is what tells scp that you are copying to a remote system.
With this command, files will be copied to your home directory on Saguaro. To copy to other directories, append the directory path to the command line after the ':'. For example, ending the command with ':mydir/' will copy files to the mydir directory inside your home directory, while ending it with ':/mydir/' would copy the file to the top-level /mydir directory on Saguaro. The latter is probably NOT what you want, so be careful.
sftp is used like ftp, but over the secure ssh connection. Many people prefer graphical sftp clients for transferring files.
Graphical Access with X-Windows can be done by using ssh to tunnel the X-Windows session back to your workstation. Linux and UNIX systems use X-Windows for their graphical environment. Apple OS X users will need to install and run the X11 program that comes with OS X. Windows users will need to install an X-Windows server such as Exceed, the ASU site-licensed one available through http://myapps.asu.edu/. Alternatively, there is a free server called Xming.
Once the X-Windows server is running, connect with X forwarding enabled (for example, ssh -X username@<login node address>), then run the application you need. If your workstation is properly configured, the application will then appear on your desktop. A good application to test with is xterm, which provides a shell in a window.