RATS: The Big Picture

RATS was designed with a few basic goals and intentions. The most fundamental goal was that it let people create accounts on systems, as makeacct and classact had in the past. Another major goal was to replace all the aging account creation tools, and anything dependent on the old whitepages, before January 1, 2000. Given the time constraints, it became apparent that we were not going to get an accurate list of all the tasks the previous tools had been performing before time ran out. This gave rise to our foremost intention: RATS must be flexible. Unfortunately, this means that except at the most basic end-user level, RATS is not very intuitive. Hopefully taking a look at the big picture will help clear this up.

First let's look at what R.A.T.S. stands for. It is an acronym for Rutgers Account Tools and Services, which means it is a conglomeration of tools and services for dealing with user accounts at Rutgers. Since the majority of the user population has accounts on ICI or RCI machines, it was developed under Solaris. The tools of RATS currently focus on providing a means of creating and maintaining user accounts from both the end-user's and the administrator's perspectives. The services RATS provides are essentially sharing UID space between clusters and maintaining a unified username space at Rutgers.

Unfortunately, if you think about it, RATS is very much like the description above. It is fairly easy to understand in an abstract manner from a very high level, but if you think about all the details it entails and try to understand them all at once, it will drive you crazy. So, to brace you for the rest of the documents, let's start with the simplest explanation of RATS. Whenever RATS is used, there must be at least three functioning components. First, you must be using a tool that works with the RATS API. Second, you must have a client daemon for RATS running on the client-side system. Third, you must have a RATS server daemon running on the back end, because the majority of tasks need access to centralized data such as PDB information or Kerberos services. The client tool talks to both daemons, and they do all the work. The tool basically just holds the algorithm for doing a job: it is a skeleton of API calls to the daemons to get the job done, with a little logic to hold the skeleton together.
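
To make that concrete, here is a minimal sketch of what a tool looks like, written in Perl. The subroutine names are invented for illustration (the real calls are in the API documentation), and their bodies are stubs so the example stands on its own.

    #!/usr/bin/perl -w
    # A sketch only: these subroutine names stand in for real RATS API
    # calls, and their bodies are stubs so the example runs by itself.
    use strict;

    # Stand-ins for "talk to the RATS server" and "talk to the client daemon".
    sub rats_server_lookup { my ($who) = @_; print "server: look $who up in the PDB\n"; return 1; }
    sub rats_client_create { my ($who) = @_; print "client: build an account for $who\n"; return 1; }

    # The "tool" itself is little more than glue logic around calls like these.
    my $user = shift @ARGV or die "usage: $0 username\n";

    rats_server_lookup($user)
        or die "no such person as far as the PDB is concerned\n";
    rats_client_create($user)
        or die "account creation failed\n";

    print "done.\n";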

First let's look at how the conversation between any two points works. It is a standard TCP/IP connection between the two components. The connection is encrypted using triple cbc_des encryption. It uses two strings of 56 characters or less as keys; internally these strings are converted into actual 56-bit keys. The key pair used to encrypt any given conversation belongs to whoever initiated the conversation. This means that you have to get your keys into the config file of any machine you want to talk to: for example, the RATS back end server and whoever you are synchronizing UID space with.
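
To illustrate the idea (and only the idea; the real config file format is whatever ships with RATS), here is a hypothetical Perl rendering of the per-machine key pairs and how the listening side picks the initiator's pair. The hostnames and key strings are made up.

    # Hypothetical illustration only; hostnames and key strings are made up
    # and this is not the real config file format.
    use strict;

    # Each machine you expect to talk to gets a pair of key strings, each 56
    # characters or less, which get turned into real 56-bit keys internally.
    my %peer_keys = (
        'ratsserver.rutgers.edu'   => [ 'first key string for the back end',
                                        'second key string for the back end' ],
        'othercluster.rutgers.edu' => [ 'first key string for the UID peer',
                                        'second key string for the UID peer' ],
    );

    # The pair used to encrypt a conversation belongs to whoever initiated it.
    my $initiator = 'ratsserver.rutgers.edu';
    die "no keys on file for $initiator\n" unless exists $peer_keys{$initiator};
    my ($key_one, $key_two) = @{ $peer_keys{$initiator} };
    print "would encrypt with the key pair on file for $initiator\n";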

Next let's examine the RATS back end (this is an unfortunate sentence, and worthy of only the cheapest kind of sarcasm. You're better than that.). Throughout the documentation it will be referred to as the RATS server or RATS back end. It is, in theory, always up. Since 90% of the usefulness of the RATS server comes from its access to the PDB (that's the People Data Base, repository of all that is good in the way of student and employee demographic information), it is only useful if the PDB is up too. This also means that as far as RATS is concerned, the PDB is always right. To see what the PDB thinks, just point your browser to http://nicto, where you will find links to all sorts of useful documentation and centralized tools. The RATS server also wraps a few key Kerberos functions, so if nicto doesn't have permission to deal with the realm you use, you are going to have some problems. Basically, that's all you really need to know. If you want, you can read the API when it is published to get the nitty-gritty details about what it can do specifically.
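
If it helps, here is another invented sketch of the pattern: the tool never touches the PDB or the Kerberos realm itself, it asks the RATS server, and whatever the PDB says goes. The call names are stand-ins, not the real API.

    # Invented sketch of the division of labor; the call names are not the
    # real API, and the username is made up.
    use strict;

    sub server_pdb_lookup   { my ($who) = @_; print "server: check the PDB for $who\n"; return 1; }
    sub server_krb_addprinc { my ($who) = @_; print "server: create a Kerberos principal for $who\n"; return 1; }

    my $who = 'jqpublic';    # made-up username
    server_pdb_lookup($who)   or die "the PDB has never heard of $who\n";
    server_krb_addprinc($who) or die "the Kerberos work failed on the server side\n";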

Now you come to the client daemons. The only client daemon a tool ever talks to directly is the master client daemon (also referred to as the primary client daemon). The master client daemon directs traffic amongst all the client daemons on the cluster to get all the account creation and deletion work done. If you only run one client daemon, then it just directs all the traffic to itself and takes care of all the tasks. It contains no logic about account creation beyond any given API call's atomic operation. All the daemons should use the same configuration file, and thus they should all know about each other's keys. Client daemons that are not primary may do peer-to-peer traffic direction, though.
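
Here is a rough sketch of the traffic-directing idea, with made-up hostnames; it is not code from the daemons themselves.

    # Sketch only: hostnames and the task/host pairing are invented.
    use strict;

    # Every daemon on the cluster reads the same config file, so each one
    # knows about the others and about their keys.
    my @cluster_daemons = ('primary.rutgers.edu', 'node1.rutgers.edu', 'node2.rutgers.edu');
    my $primary         = $cluster_daemons[0];   # the only daemon a tool ever talks to directly

    sub dispatch {
        my ($task, $host) = @_;
        # With a single-daemon cluster, $host is always $primary and the
        # primary simply does the work itself.
        print "$primary: hand atomic task '$task' to $host\n";
    }

    dispatch('make a home directory', 'node1.rutgers.edu');
    dispatch('add a passwd entry',    $primary);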

Then there is the tool itself. There isn't a whole lot to say about the tools without getting specific. They are mostly modular, and if you know what you are doing, you can customize them. They also do everything through the API. What this means is that if you really want a custom tool, you can probably check out the API and write your own. Keep in mind, though, that some of the API calls lend themselves to abuse and will be unavailable to you without permission from the RATS administrator. The RATS administrator will more than likely not grant you permission to those particular calls without at the very least looking at your code. What this also means is that once the feature requests get overly specific and specialized, the RATS admin will more than likely hand you the API and a URL to some good web sites on Perl. If you REALLY know what you are doing, you can fiddle with the code behind the API calls modularly too. But then you have to maintain it across releases yourself.

Now that you have all three parts roughly defined, we will outline what happens in a session of "doing something". First you crank up the tool. You enter some data that has to be authenticated against the PDB data, so the tool contacts the RATS server. The server receives a connect from the tool, checks what machine is connecting to it, and selects the key for that machine from its config files. If the machine is actually who the RATS server thinks it is, the keys match and the conversation works out well. Now let's say authentication went well. We now need to enact some change locally within the cluster. The tool talks to the master client daemon. If it is on the same machine, the keys work out since they are both pulled from the same file. If it is on another machine, there is the same type of process as with the RATS server (check the initiator's identity, pull the appropriate key, and see if it matches what the initiator thinks it is). The daemon then carries out an atomic action on the system. The tool basically repeats one of these two events over and over within the framework of the tool's algorithm until the task is complete. Every transaction occurs in this manner, and the way traffic is directed is governed by a number of settings in the config file.
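
To pull it together, here is one last invented sketch of the shape of a session: the tool just repeats the two kinds of conversations, in whatever order its algorithm calls for, until the job is done. The step names are made up.

    # Invented sketch of a session's shape; the steps are placeholders.
    use strict;

    my @algorithm = (
        [ 'server', 'authenticate the entered data against the PDB' ],
        [ 'server', 'do the Kerberos work for the new account'      ],
        [ 'client', 'add the passwd entry'                          ],
        [ 'client', 'make the home directory'                       ],
    );

    for my $step (@algorithm) {
        my ($where, $what) = @$step;
        # "server" steps are encrypted conversations with the RATS back end;
        # "client" steps go to the master client daemon, which may hand the
        # atomic action off to another daemon in the cluster.
        print "talk to the $where side: $what\n";
    }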

That is the really big picture. It probably seems a bit too vague, but hopefully it gives you a framework in which to understand the rest of the documentation.