The addition of the client-based DB makes it possible to address another sore spot of Solaris system administration.
The quota system is generally unstable and prone to destroying existing quota
tables or, in extreme cases, crashing or severely impairing NFS servers.
This is a particular problem on large systems, where the number of users
places a heavy burden on the quota system.
Quota Batching
RATS only makes this problem
worse by adding a large number of quota operations to a system with many
file systems. Each quota set for each new user on each file system is
self-contained, so every user and file system combination generates a
separate request. The quota system
calls allow these operations to be "batched" by grouping the set-quota calls
for one file system and following them with a single quota table flush.
RATS does not currently take advantage of this.
For example, the following pseudocode illustrates the two behaviors when setting quotas for three users on the same file system.
In the current system we would do:
setquota(user1,quota1,fs);
flushquota(fs);
setquota(user2,quota2,fs);
flushquota(fs);
setquota(user3,quota3,fs);
flushquota(fs);
A better setup would do:
setquota(user1,quota1,fs);
setquota(user2,quota2,fs);
setquota(user3,quota3,fs);
flushquota(fs);
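To make the pseudocode concrete, here is a minimal sketch of how the batched form might map onto the Solaris UFS quota interface (quotactl(7I)), assuming quotas live in the "quotas" file at the root of the file system. The path, uids, and limit values are purely illustrative and are not taken from RATS itself.

/*
 * Illustrative sketch only: batching quota sets on one file system via the
 * Solaris UFS quota ioctl, then flushing once.  Path, uids and limits are
 * made up for the example.
 */
#include <sys/types.h>
#include <sys/fs/ufs_quota.h>
#include <fcntl.h>
#include <stropts.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct dqblk   dq;
    struct quotctl qc;
    uid_t          uids[] = { 1001, 1002, 1003 };   /* user1 .. user3 */
    int            fd, i;

    /* The quotas file at the root of the target file system ("fs"). */
    fd = open("/export/home1/quotas", O_RDWR);
    if (fd < 0)
        return 1;

    /* Batch the individual set operations ... */
    for (i = 0; i < 3; i++) {
        memset(&dq, 0, sizeof (dq));
        dq.dqb_bsoftlimit = 900000;     /* illustrative block limits */
        dq.dqb_bhardlimit = 1000000;
        dq.dqb_fsoftlimit = 9000;       /* illustrative file limits */
        dq.dqb_fhardlimit = 10000;

        qc.op   = Q_SETQLIM;            /* set limits without altering usage */
        qc.uid  = uids[i];
        qc.addr = (caddr_t)&dq;
        (void) ioctl(fd, Q_QUOTACTL, &qc);   /* setquota(userN, quotaN, fs) */
    }

    /* ... then flush the quota table once for the whole file system. */
    qc.op   = Q_SYNC;
    qc.uid  = 0;
    qc.addr = NULL;
    (void) ioctl(fd, Q_QUOTACTL, &qc);       /* flushquota(fs) */

    (void) close(fd);
    return 0;
}

The key point is that the Q_SYNC flush is issued once per file system rather than once per user.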
Unfortunately, each RATS process only handles the operations required for a single user and has no knowledge of what other processes are doing.
We intend to retain the current system for hosts or clusters that do not perform a large number of quota operations, and for which batching may create unnecessary delays or even potential loss of quota information. For larger systems, however, RATS V2 will implement an alternative method: the creation of a client DB provides an easy way to batch quota setting.
This new method, transparent to the client tools, would not perform the quota set as soon as the request is received, but would instead create a new entry in a dedicated table in the client DB. This entry would contain all the information needed to set the quota at a later time. At regular intervals (the frequency of which needs to be determined based on host characteristics), a process running on each NFS server within the cluster will poll the database, obtain the list of outstanding quota set requests for the file systems it manages, and perform a batch quota set for each file system. To avoid accidentally overloading the kernel's internal quota table, a flush will be performed every N rows during the process (the value of N will vary from system to system). For example, with N = 50, setting 130 quotas on the same file system might take place like this:
setquota(user1,quota1,fs);
.
.
.
setquota(user50,quota50,fs);
flushquota(fs);
setquota(user51,quota51,fs);
.
.
.
setquota(user100,quota100,fs);
flushquota(fs);
setquota(user101,quota101,fs);
.
.
.
setquota(user130,quota130,fs);
flushquota(fs);
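A rough sketch of how such a server-side batch pass could be structured is shown below. The quota_request structure, apply_batch() routine, and FLUSH_INTERVAL value are hypothetical names for this illustration, and the static array stands in for the rows a real implementation would fetch from the client DB for one of its file systems.

#include <stdio.h>

#define FLUSH_INTERVAL 50   /* N: rows between flushes; tuned per host */

struct quota_request {
    const char *user;   /* account the quota applies to */
    const char *fs;     /* file system managed by this NFS server */
    long        blocks; /* quota value to apply */
};

/* Placeholders for the real quota operations shown in the pseudocode. */
static void setquota(const char *user, long blocks, const char *fs)
{
    (void) printf("setquota(%s, %ld, %s)\n", user, blocks, fs);
}

static void flushquota(const char *fs)
{
    (void) printf("flushquota(%s)\n", fs);
}

/*
 * Apply one file system's outstanding requests, flushing every N rows
 * and once more at the end if the last group was short.
 */
static void apply_batch(const struct quota_request *reqs, int count)
{
    int i;

    for (i = 0; i < count; i++) {
        setquota(reqs[i].user, reqs[i].blocks, reqs[i].fs);
        if ((i + 1) % FLUSH_INTERVAL == 0)
            flushquota(reqs[i].fs);
    }
    if (count % FLUSH_INTERVAL != 0)
        flushquota(reqs[count - 1].fs);
}

int main(void)
{
    /* Stands in for the outstanding requests polled from the client DB
     * for one file system the server manages. */
    struct quota_request pending[] = {
        { "user1", "/export/home1", 512000 },
        { "user2", "/export/home1", 512000 },
        { "user3", "/export/home1", 1024000 },
    };

    apply_batch(pending, (int)(sizeof (pending) / sizeof (pending[0])));
    return 0;
}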
Other Quota Improvements
In addition to the above-mentioned improvement, one other quota-related
problem needs to be addressed. RATS V1 gives the system administrator a
fairly limited set of quota-setting options: only two specific file systems
can have quota values defined in the configuration file, plus a single
generic quota applied to all other file systems. RATS V2 will try to
rectify this by allowing any number of configurable quotas for any defined
file system. The development team is not yet ready to commit to this feature,
as sufficiently elegant methods for implementation and configuration
have not been determined, but some effort will be made towards improving
on the current system.