QL-USERS MAILING LIST EDITED DIGEST

The ql-users mailing list is a way for QLers to keep in contact, ask questions,
get them answered, and generally shoot-the-sh*t.  Most of the traffic on the
mailing list is short and ages rather quickly.  Occasionally there are some
more useful bits of information and longer discussions.  This file contains
the more useful bits from the mailing list, as seen by me, Tim Swenson.

The discussions have been edited to avoid printing the same information twice
and for clarity.  Where possible the name of the author has been kept.

The items are in no particular order, but like discussions have been kept 
together.

(Last Update - 4 Aug 98)

TABLE OF CONTENTS

1.  Threads, Multiprocessing, & Deadlocks
2.  GoldFile Hardware Specifications 
3.  Aurora/SGC/GC Electrical Issues
4.  Aurora Serial Ports
5.  Emulator Speed Comparisons
6.  Benchmarks
7.  Power Saving Monitors
8.  Drivers & New Hardware
9.  Prowess
10. Directories
11. 
12. QBIDE Partitions
13. Hash Tables
14. IPC 8049
15. Parameter Passing
16. QDOS File Headers
17. Semaphores
18. Processes & Threads
19. Interrupts
20. Deadlocks
21. Number of Device Names
22. Device Drivers
23. Screen Drivers and Loading Screens
24. New Filing System
25. New Facilities
26. File Name Parser
27. More File System Stuff
28. Aurora Colors


1. THREADS, MULTIPROCESSING & DEADLOCKS

Date: 	Thu, 30 Apr 1998 17:35:25 +0200
From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] Remember your QL?

[Threads, multiprocessing and deadlocks]
>
>  I think the original poster would like the question considered as if SMSQ
> was working on a multi-processor system. In such a case, this situation
> might occur. Presumably what would happen is that the device would send a
> 'free' reply, meaning that
> both jobs would receive a positive answer. However, the jobs would respond
> to this reply depending on the work-load on their particular processor.
> Therefore, only if all tasks were completed at exactly the same time would
> there be a problem. If one processor is doing slightly more (or less) work
> than the other, then the jobs would end up sending information at
> different times, by which time, the other job should have allocated the
> device to itself.
> 
Just to put in my two pence worth...

Any multi(insert tasking/threading/processing as appropriate) system is
prone to deadlock. The big question is how dangerous it is.
Consider the following example:

Job 1:
Open channel to port 1
Repeat loop
  if port 2 not in use open channel to port 2, exit loop
End repeat loop

Job 2:
Open channel to port 2
Repeat loop
  if port 1 not in use open channel to port 1, exit loop
End repeat loop

This is a classic example of deadlock.
Depending on how the jobs get to execute, either possibility is open -
one job opens both channels, or both open one channel and end up waiting
for each other in the respective loops.
On multitasking/multithreading machines with a single CPU the result can
be made highly predictable, depending on how the port allocation is
handled - for instance, a job might be denied allocation of a port for a
certain length of time after an unsuccessful attempt. This delay is usually
pseudorandom, with lower and upper limits which can be configured. I
won't go into the maths of it, but this significantly reduces the
possibility of deadlock, because it is highly unlikely that both jobs will
wait the same random amount of time and end up in a sort of 'direct
competition'. For the interested, looking up how collisions are handled
on Ethernet networks makes for interesting, if at first glance unrelated,
reading :-))

The above is of course much more difficult to predict in a multi-CPU
machine, the reason being different things are happening on different
CPU 'domains' (what this is depends on the construction of the machine,
for instance in a networked multiple CPU environment the hardware local
to a networked box with one CPU might be the sole responsibility of that
particular CPU, i.e. it's a somewhat separated domain), and the other
CPUs might not have any way of knowing about them.

How does this apply to SMSQ/E, even in an as yet hypothetical multi-CPU
version?
First of all, under SMSQ/E the various jobs/threads (there are books on
the distinction between those that don't really end up explaining much
:-)) ) rely on the OS core to handle the allocation of ANY resource. If you
want to go around TAS-ing some nice-looking addresses, you will get
what's coming to you, and that's an eventual crash.
Even in multiprocessor machines it is VERY easy to build a
'semaphore' type resource lockout that is impossible to fool by things
happening simultaneously - the answer is, they don't happen simultaneously,
not unless you are willing to spend a LOT of money to implement true
multiport memory. In practice, the 'negotiation' over a resource is handled
either over shared RAM which is shared on a cycle-by-cycle basis (say, one
cycle for one CPU, the next for the other, then back to the first, etc.), in
which case it is really easy to set up a deadlock-free lockout mechanism, or
it is handled by some communications interface, in which case the CPU has
to check for data that arrives at it, and hence decide which request
gets granted in software.
On the 'regular' SMSQ/E a request for a resource involves finding out
whether it is in use, or how much of it (like RAM) can be found that is
free to be allocated, which can involve parsing through various system
tables and linked lists. SMSQ/E is not a separate 'thread' or 'job' but
rather is called as a part of the calling thread/job, which is why it
has to be re-entrant (or at least most of it). Obviously, if several
jobs/threads contend for a resource, what will happen will be
unpredictable unless the 'resource allocating' call is atomic. This
means that it cannot be interrupted by another job, i.e. while the call
is performed and the tables searched, job scheduling is suspended
and only the job that has managed to execute the call as a part of its
thread continues operating. In this manner, resource allocation is
strictly ordered in a first-come, first-served fashion and is definitely
purely sequential. If this way of operating is maintained in a multi-CPU
environment (by communication between CPUs to ensure controlled access
to shared tables), it also retains its property of a 'controlled'
deadlock - one that, in occurring, does not stop the machine - you could
easily do JOBS and remove the offending job(s), or even have a background
job monitor for long job-suspended times and prompt you for job removal.

Problems tend to surface with jobs trying to set up their own scheme of
resource allocation between themselves, by means that are unchecked by
the OS. The proper way to handle this and still retain control is to
set up a 'guardian' job to handle the decision making, in which case
that is also in a way atomic from the standpoint of all the other jobs
that use the 'arbiter' job. This is what the client-server stuff does in
PBOX.
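The 'guardian' idea can be sketched like this (a hypothetical Python illustration - the class and method names are mine, not a PBOX API):

```python
class PortArbiter:
    # A single decision point standing in for the 'guardian' job: every
    # request for the port pair funnels through it, so from the clients'
    # standpoint allocation is atomic, and the hold-and-wait pattern that
    # causes deadlock can never arise.
    def __init__(self):
        self.owner = None

    def request_both(self, job_name):
        # Grant both ports together or neither - all-or-nothing grants
        # remove the partial-allocation state needed for deadlock.
        if self.owner is None:
            self.owner = job_name
            return True
        return False

    def release_both(self, job_name):
        if self.owner == job_name:
            self.owner = None

arbiter = PortArbiter()
ok1 = arbiter.request_both("job 1")   # granted: both ports at once
ok2 = arbiter.request_both("job 2")   # refused: job 2 must retry later
arbiter.release_both("job 1")
ok3 = arbiter.request_both("job 2")   # now granted
```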

Just a word on MMU hardware - it would be possible to use this on SMSQ/E
and in fact, providing the basic 'ask the OS' stuff is maintained, would
be simpler for SMSQ/E than some other OSs I know of, meaning even a very
primitive MMU would do fine. In actuality, memory management and
protection are a very complex matter in multi-CPU machines with shared
memory as parts of it can belong to multiple threads. But MMU stuff is
definitely for another message, this is very long as it is...

Nasta

Date: 	Fri, 01 May 1998 14:49:00 +0200

Joachim Van der Auwera wrote:
> 
> Some things aren't quite 100% as it should be either. As you mention some OS
> calls are atomic, so they always execute immediately. However, this is not
> always correct
...
> Note, I think ALL IO traps are NOT atomic. All routines which require you to
> set a timeout value are not atomic. However, you can make them atomic by
> using a zero timeout.

Hmm, not really, depends on how you define atomic. From the standpoint
of the application they are - the execution of a particular job does not
cross the called routine boundaries as long as the timeout has not
expired. The routine itself is not atomic, but, it's action is as for
it's deadlock preventing properties.
In retrospect I have to correct myself as to how the atomic part is
insured - my explanation in one of the previous messages was not
precise. The surest way is of course to disable interrupts. The job
scheduler is invoked by the polling interrupt so it will not be invoked.
In order to do this, you have to go to supervisor mode. Of course, other
things might be ging on that are interrupt invoked and you might end up
stopping them too, causing some unpredictable behaviour. There are
warnings in the manual that talk about this! (I should include an
explanation of the difference between job and task in SMSQ terminology,
and there is a signifficant difference...)

The other way is to rely on the scheduler. Timeouts under SMSQ are not
job-priority related. The timeout value is actually a minimum, not a set
value. While the called routine is executing the scheduler usually does
run, and in fact other jobs might end up working. However, some routines,
or parts of them, literally are atomic and do suspend the scheduler. The
timeout will just cause the call to return not_complete if the timeout
expires before the operation is complete, but in practice this means the
atomic part of the operation hasn't started; it wasn't interrupted mid-way.
Only jobs that use atomic routines that can contend over a resource, and
set a timeout of 0, can end in a deadlock which is uncontrollable by the
jobs themselves (NOT by the system). The testing-for-resource-availability
part is indeed atomic, but the waiting part for the timeout
isn't: it does not end up in a loop doing TAS/BNE instructions. If it
finds the resource is in use, it does the next test on the scheduler
loop call, decrements the timeout on the next poll interrupt, and
returns with either operation complete or not_complete (timeout
expired) - at least that's how I understand it from my limited experience
and the literature.
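A sketch of that behaviour, assuming the reading above is right (illustrative Python, not the actual SMSQ scheduler code):

```python
def open_with_timeout(is_free, timeout_ticks):
    # Sketch of the behaviour described above: the availability TEST is
    # atomic, but the WAITING is not - the job simply re-tests on each
    # scheduler pass, and the timeout is counted down one poll interrupt
    # at a time.  A timeout of 0 degenerates into a single atomic test.
    for tick in range(timeout_ticks + 1):
        if is_free(tick):
            return "complete"
    return "not_complete"

# The resource becomes free on the third poll: a generous timeout succeeds,
# a zero timeout (one atomic test, no waiting) does not.
free_at_3 = lambda tick: tick >= 3
r1 = open_with_timeout(free_at_3, 10)
r2 = open_with_timeout(free_at_3, 0)
r3 = open_with_timeout(lambda tick: False, 5)   # never free: times out
```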

Now as to the difference between jobs and tasks. (I must confess that my
own usage of the terms isn't very consistent...)

Jobs are programs which are executed in time slices governed by the
operation of the scheduler. The scheduler is invoked by poll interrupt
(normally 50 times a second) OR by jobs being suspended by waiting for
IO. For instance, in a system where say, 2 jobs are running, and both
are waiting for keyboard input and none is forthcoming, the scheduler
gets called after each input request from a job where no input is
coming, hence more frequently than just every 1/50th of a second. This
in effect puts jobs waiting for input out of the priority calculation to
give other jobs time to execute - why would the waiting job's time slot
be used for executing waiting loops? Believe it or not, some OSs don't
have this.
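A deliberately simplified sketch of that rule (real SMSQ scheduling uses accumulated priorities and time slices, and Joachim's correction below notes the CPU simply idles when everything waits - so treat this as illustration only, with invented job names):

```python
def pick_next(jobs):
    # Jobs suspended waiting for input take no time slice at all; the
    # remaining runnable jobs are chosen by priority.  If everything is
    # waiting, the processor idles.
    runnable = [j for j in jobs if not j["waiting_io"]]
    if not runnable:
        return None                       # processor idle
    return max(runnable, key=lambda j: j["priority"])["name"]

jobs = [
    {"name": "editor",  "priority": 8, "waiting_io": True},   # at a keyboard read
    {"name": "spooler", "priority": 2, "waiting_io": False},
    {"name": "clock",   "priority": 1, "waiting_io": False},
]
nxt = pick_next(jobs)   # the waiting editor is skipped despite its priority
```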
Tasks are NOT jobs. They are pieces of code invoked by exceptional
occurrences in the system, such as polled interrupt, external interrupt,
or re-entry into the scheduler. They do not have a priority - they are
executed every time the relevant exception occurs (which also means it's
enabled), in an order determined by when they were linked into the
system. Tasks are in themselves NOT atomic unless they enforce it
somehow (by disabling further exceptions). Because of this, they may not
allocate system resources because the various allocating routines will
end up being non-atomic (they are called re-entrantly as parts of the
calling code). All allocations have to be set up for them by a job
before the tasks are put into operation (this putting into operation is
of course atomic). SMSQ is much more lenient in tracking tasks than
jobs, and its self-cleaning properties are not as pronounced here, this
is because tasks usually cater for various IO related things, which are
not parts of jobs, but usually of system extensions. Interestingly
enough, they can be set up by any job.

Nasta

Date: 	Fri, 01 May 1998 14:49:06 +0200

Arnould Nazarian wrote:
> 
> Joachim Van der Auwera wrote:
> 
> > Some things aren't quite 100% as it should be either. As you mention some OS
> > calls are atomic, so they always execute immediately. However, this is not
> > always correct.
> > When you want to allocate memory, you can not pass a timeout value. However,
> > contraty to what you might think, these calls can take a very long time.
> > What happens if you want to allocate a large block of memory which is used
> > as write buffers for the slaving system. In that case, your request to get
> > some memory can, even though the "free" memory exists, take a long time
> > because the data which is there has to be written to a device first...
>
Which has to be handled correctly and intelligently. Slaving and memory
allocation are not really very compatible with one another as far as
allocation strategy. In fact, a clever enough slaving routine might
remove the slave block somewhere else in case it cannot be written to a
device, to create a contiguous memory block which is the preferred stuff
for memory allocation. But of course, this may generate more problems
than it's worth - like a hot potato, the offending slave block can end
up being tossed around in memory for a VERY long time. But there are
mechanisms to prevent or minimise this. The nice thing about SMSQ is
that while it might not implement them as it stands now, it does not
close itself to future implementations of this kind.

> Now this seems to be a major drawback of the QDOS/SMSQ philosophy of
> providing atomic routines to access shared resources (eg system variable
> tables...)

As I said, if you want to access system variables by multiple jobs (and
let me point out that system variables are SYSTEM, i.e. not shared
variables or resources, in fact they are not resources at all and if
there was an MMU you wouldn't be able to see them at all), then you get
what you deserve.
Poking around system variables is VERY bad practice.
 
> Let us suppose a system with many applications, one of them relying very
> heavily on memory allocation/deallocation, for example in a loop.

Yet another example of bad programming. SMSQ has a perfectly good
mechanism for catering for this - you allocate a larger lump of memory
and then institute a user heap in it. If you allocate small chunks of
memory all the time only to deallocate them, no system in this world will
save you from the consequence, which is memory fragmentation. At best,
fragmentation will be low but allocating would take a lot of time
because after a few allocations the OS itself will defragment what you
have allocated on each successive allocation. Most MMU using OSs will do
this because an MMU doesn't have an infinite number of page entries to
cater for a million bits and pieces of memory of wildly differing size.
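The lump-plus-user-heap idea looks something like this in Python (the names are mine, not the QDOS user heap calls):

```python
class UserHeap:
    # Take ONE large lump from the OS, then satisfy the program's many
    # small, short-lived allocations from a private first-fit free list,
    # so the system heap itself never fragments.
    def __init__(self, size):
        self.free = [(0, size)]            # list of (offset, length)

    def alloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:                # first fit
                if length == n:
                    del self.free[i]
                else:
                    self.free[i] = (off + n, length - n)
                return off
        return None                        # lump exhausted

    def dealloc(self, off, n):
        self.free.append((off, n))         # no coalescing in this sketch

heap = UserHeap(1024)       # one OS allocation for the job's lifetime
a = heap.alloc(100)         # offset 0
b = heap.alloc(200)         # offset 100
heap.dealloc(a, 100)        # hole at 0 - the fragmentation stays private
big = heap.alloc(2048)      # larger than the lump: refused
```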

> As the
> atomic routines to do this are very long, even with a low priority, this
> particular job will take too much CPU time, and the overall response of
> the complete system will degrade very much, possibly under usability
> level...

No, SMSQ still has the priority mechanism. If the atomic operations
become long, and the time slot intended for the job is overdrawn, it
will automatically get suspended for a while on the next scheduler loop.
Yes, it might get slower than usual, but that's what you get for
inefficient programming. You cannot expect the OS to correct people's
code.

> Any solution to this problem?
 
It's already there. But, as I said, the best way is to program for such
a specific use of memory allocation. Otherwise we might get an OS which
dabbles in all trades but is a master of none.
Incidentally, the exact allocation strategy outlined above is what I
have seen during my (long gone) C programming days - most C programmers
that I have met have no idea of what really happens to the memory when
they call malloc()...

Nasta

Date: 	Fri, 01 May 1998 14:49:18 +0200

Arnould Nazarian wrote:
> 
> ZN wrote:
> 
>> Just to put in my two pence worth...
>>
>> Any multi(insert tasking/threading/processing as appropriate) system
>> prone to deadlock. The big question is how dangerous it is.
>> Consider the following example:
>>
>> Job 1:
>> Open channel to port 1
>> Repeat loop
>>   if port 2 not in use open channel to port 2, exit loop
>> End repeat loop
>>
>> Job 2:
>> Open channel to port 2
>> Repeat loop
>>   if port 1 not in use open channel to port 1, exit loop
>> End repeat loop
>>
>> This is a classic example of deadlock.
> 
> No, not in QDOS / SMSQ...

Try it and see :-)

> Job 1:
> (ask if channel to port 1 available
>    and if OK open it)[atomic]
> if port 1 not available
>    foresee smthg else to do
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
But this part is not atomic hence a context switch can occur and this
job might be suspended by the scheduler for job 2 to start executing. By
the time job 1 gets to execute again, job 2 might reserve port 1.
Notice, I said MIGHT.
>
> (ask if channel to port 2 available
>     and if OK open it)[atomic]
> if port 2 not available
>    foresee smthg else to do
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
And again this is not atomic so Job 2 might execute to the point where
port 2 has been taken.
As I said, the key word is might - depending on a lot of things, how the
jobs were started, what their priority was, what timeouts have been used
and what other jobs are executed, one of 3 possibilities might occur:
1. Job 1 opens both channels; job 2 waits indefinitely (or does
something else, which raises the point of what happens if it doesn't have
anything else to do) trying to open port 2.
2. Job 2 opens both channels; job 1 waits just like job 2 did in the
previous case.
3. Job 1 opens port 1, job 2 opens port 2, and they are in deadlock waiting
for port 2 and 1 respectively.
Even cases 1 and 2 are in a sense deadlock.

> Job 2:
> (ask if channel to port 2 available
>     and if OK open it)[atomic]
> if port 2 not available
>     foresee smthg else to do
> (ask if channel to port 1 available
>     and if OK open it)[atomic]
> if port 1 not available
>     foresee smthg else to do
 
> I have never really programmed, but that is what every programmer does
> in SMSQ to open channels (I have always read the Simon Goodwin series in
> QL User/World).
> Where do you see a deadlock?

I trust I have explained better this time.
 
> And in fact, if I understand well the end of your posting, it is also
> your conclusion that SMSQ does not allow deadlocks...
> ... which is maybe why this subject was not raised during the last 15
> years in the QL world?

Yes it does, but the deadlock does not involve SMSQ in it, so the
other, non-deadlocked jobs are not stopped. In effect, the deadlocked jobs
can be removed or suspended. The OS deadlock talked about in multi-CPU
systems is the one that occurs when the [atomic] part ceases to be
atomic. In a single-CPU environment such as the current SMSQ, the only
way a context switch can occur is by interrupt or another type of
exception (although this is in fact limited under SMSQ). SMSQ ensures
interrupts do not occur by disabling them while deadlock-sensitive
things are going on, thus making those things atomic. In a multi-CPU
system, this is not enough - disabling an interrupt on one CPU means
nothing to the other(s). Therefore something more has to be done. In
practice, this is handled in several ways: either by delegating only one
CPU to handle the allocation of system resources (in effect, only one CPU is
able to write system tables; all others only read them), or the CPUs
employ a secure semaphore-type protocol to decide which one is to handle
the atomic operation. This is normally done in systems with shared
memory and IO resources. In networked systems, the CPUs are separated by
a network which is in itself a resource, so no deadlock can occur with
CPUs at either end of the network performing atomic operations.

> Again, if you look at what happens in other environments, especially
> 'multithreaded Unices', this subject is very hot!

This is because one of the usual ways of migrating a
multitasking/threading system to a multiprocessing one is to distribute
tasks/threads on different CPUs. In this case, as I explained above, you
cannot rely on operations on one CPU being atomic any more.
 
> I do not really want to follow you into the field of multiprocessing,
> even if I wonder why the above strategy of (question the OS and allocate
> resource if free)[atomic] would not be valid in a multiprocessing
> environment? But here there is maybe a problem of having the right
> hardware architecture?

The strategy of SMSQ is valid even in a multiprocessing system. However,
ensuring that the resource allocation is atomic is infinitely more
complex than just disabling the scheduler or interrupts. In some hardware
structures this may be simpler than in others, but it is possible. The
nice thing about the SMSQ philosophy is that it's an already established
one from the standpoint of an SMSQ programmer, so if you are used to
programming in SMSQ you won't have to do anything out of the ordinary.
What would happen, though, is that the atomic calls might get longer in
duration, since additional negotiation between CPUs has to be catered
for.

Nasta

Date: 	Sat, 2 May 1998 21:33:31 +0200

Nasta wrote:

>Joachim Van der Auwera wrote:
>> Note, I think ALL IO traps are NOT atomic. All routines which require you
>> to set a timeout value are not atomic. However, you can make them atomic by
>> using a zero timeout.
>
>Hmm, not really, depends on how you define atomic. From the standpoint
>of the application they are - the execution of a particular job does not
>cross the called routine boundaries as long as the timeout has not
>expired. The routine itself is not atomic, but its action is, as far as
>its deadlock-preventing properties are concerned.

ATOMIC routines are a QDOS term. A routine is atomic if it is impossible to
have a job switch during the OS call. When a zero timeout is used, the OS
tests immediately whether the call is possible and returns immediately if not.
In that case the scheduler is never entered !

>The surest way is of course to disable interrupts. The job
>scheduler is invoked by the polling interrupt so it will not be invoked.

Not completely true. The scheduler only works when the system was in user
mode when the interrupt occurred. The scheduler is never entered when the
computer was in supervisor mode.

>Now as to the difference between jobs and tasks. (I must confess that my
>own usage of the terms isn't very consistent...)
>
>Jobs are programs which are executed in time slices governed by the
>operation of the scheduler. The scheduler is invoked by poll interrupt
>(normally 50 times a second) OR by jobs being suspended by waiting for
>IO. For instance, in a system where say, 2 jobs are running, and both
>are waiting for keyboard input and none is forthcoming, the scheduler
>gets called after each input request from a job where no input is
>coming, hence more frequently than just every 1/50th of a second.

Not true. In that case the processor will be idle. Both of the jobs will be
suspended until either the timeout expires or a key is pressed.

>Tasks are NOT jobs. They are pieces of code invoked by exceptional
>occurrences in the system, such as polled interrupt, external interrupt,
>or re-entry into the scheduler. They do not have a priority - they are
>executed every time the relevant exception occurs (which also means it's
>enabled), in an order determined by when they were linked into the
>system. Tasks are in themselves NOT atomic unless they enforce it
>somehow (by disabling further exceptions).

Careful here about the use of the word atomic. This is not the atomic as
discussed when talking about job handling and the scheduler. However, a task
can only be interrupted by a task which relies on a higher priority
exception, which means that you normally never have to worry about the
possibility of the task being interrupted. As all tasks are interrupt
driven, they run in supervisor mode, so the scheduler is never entered
meantime.

>Because of this, they may not
>allocate system resources because the various allocating routines will
>end up being non-atomic (they are called re-entrantly as parts of the
>calling code). All allocations have to be set up for them by a job
>before the tasks are put into operation (this putting into operation is
>of course atomic).

Not quite. Tasks have to finish quickly. Otherwise, the responsiveness of
QDOS is no longer guaranteed. Tasks are only supposed to push bytes into and
out of queues (this is what the docs say!)

>SMSQ is much more lenient in tracking tasks than
>jobs, and its self-cleaning properties are not as pronounced here, this
>is because tasks usually cater for various IO related things, which are
>not parts of jobs, but usually of system extensions. Interestingly
>enough, they can be set up by any job.

Not true either. Tasks should only be set up as part of device drivers,
either permanently or temporarily. When they are permanent, the system
doesn't need to clean them up (sounds normal). When a task is temporary, it
has to be part of a channel (either real or simulated). When the job which
owns the channel is released, the task is also removed ! Note that "dummy"
(or simulated) channels which are not in the channel table can be used for
this !

Joachim


Date: 	Sun, 03 May 1998 22:50:22 +0200

At 21:33 02/05/98 +0200, Joachim Van der Auwera wrote:

>ATOMIC routines are a QDOS term. A routine is atomic if it is impossible to
>have a job switch during the OS call.

Not exactly. Under QDOS "ATOMIC" means that the routine enters supervisor
mode and only goes back to user mode on completion. I admit that the difference
is subtle but it does exist (a routine could enter supervisor mode, disable
interrupts so that the scheduler is disabled, and then go back to user mode:
this is not an atomic routine, as it went back to user mode...).

>When a zero timeout is used, the OS test immediately whether the call is
>possible and return immediately if not.
>In that case the scheduler is never entered !

Beware, some QDOS calls always exit through the scheduler (even with zero
timeout if applicable).

>>The surest way is of course to disable interrupts. The job
>>scheduler is invoked by the polling interrupt so it will not be invoked.
>
>Not completely true. The scheduler only works when the system was in user
>mode when the interrupt occurs. The scheduler is never entered when the
>computer was in supervisor mode.

It depends on what exactly you call the "scheduler".

If it is the part of the IRQ2 handler which is responsible for job
switching (and only this), then this is true.

If you are speaking about the whole IRQ2 handler, then it is wrong:
the "scheduler" (IRQ2 handler) is always entered as long as IRQ2 is
not disabled.
Then the "polled interrupts tasks" are executed.
If entered while the interrupted job was in supervisor mode, then
the "scheduler" exits immediately (without doing any job switching).
If the scheduler was entered while the job was in user mode, then task
switching may happen (depending on cumulated job priority) and the
"scheduler tasks" are executed.

As you can see, ATOMIC routines are interrupted by the "polled
interrupts tasks" (and "external interrupts tasks" as well, i.e. IRQ5)
but NOT by the "scheduler tasks".

>>SMSQ is much more lenient in tracking tasks than
>>jobs, and its self-cleaning properties are not as pronounced here, this
>>is because tasks usually cater for various IO related things, which are
>>not parts of jobs, but usually of system extensions. Interestingly
>>enough, they can be set up by any job.
>
>Not true either. Tasks should only be set up as part of device drivers,
>either permanently or temporarily. When they are permanent, the system
>doesn't need to clean it up (sounds normal). When a task is temporary, it
>has to be part of a channel (either real or simulated). When the job which
>owns the channel is released, the task is also removed ! Note that "dummy"
>(or simulated) channels which are not in the channel table can be used for
>this !

You forgot about "things". Because of the particular mechanism implemented
into the thing system (when (force-)removing, (force-)freeing a thing or
when a job using a thing commits suicide), you may perfectly well use "tasks"
(either external interrupts, polled interrupts or scheduler tasks) related
to a thing (the thing then has to provide code to unlink the task when
necessary). This is probably (with device drivers) the safest way to add
tasks to QDOS/SMS.

Thierry.

Date: 	Mon, 04 May 1998 18:09:31 +0200

Thierry Godefroy wrote:
> 
> At 21:33 02/05/98 +0200, Joachim Van der Auwera wrote:
> 
>> When a zero timeout is used, the OS tests immediately whether the call is
>> possible and returns immediately if not.
>> In that case the scheduler is never entered !
> 
> Beware, some QDOS calls always exit through the scheduler (even with zero
> timeout if applicable).
>
That's what I thought too...
>
>>>The surest way is of course to disable interrupts. The job
>>>scheduler is invoked by the polling interrupt so it will not be invoked.
>
>>Not completely true. The scheduler only works when the system was in user
>>mode when the interrupt occurs. The scheduler is never entered when the
>>computer was in supervisor mode.
>
Of course, you need to be in Supervisor mode anyway to disable the
interrupts. If the scheduler could be entered while in Supervisor mode,
trying this would result in a big mess...
>
> It depends on what exactly you call the "scheduler".
> If it is the part of the IRQ2 handler which is responsible for job
> switching (and only this), then this is true. 
> If you are speaking about the whole IRQ2 handler, then it is wrong:
> the "scheduler" (IRQ2 handler) is always entered as long as IRQ2 is
> not disabled.
> Then the "polled interrupts tasks" are executed.
> If entered while the interrupted job was in supervisor mode, then
> the "scheduler" exits immediately (without doing any job switching).
> If the scheduler was entered while the job was in user mode, then task
> switching may happen (depending on cumulated job priority) and the
> "scheduler tasks" are executed.
> As you can see, ATOMIC routines are interrupted by the "polled
> interrupts tasks" (and "external interrupts tasks" as well, i.e. IRQ5)
> but NOT by the "scheduler tasks".
>
Which is why polled tasks and external int (will get to this later)
shouldn't allocate resources because to do this they would have to use
the same 'atomic' calls...
>
>>>SMSQ is much more lenient in tracking tasks than
>>>jobs, and its self-cleaning properties are not as pronounced here, this
>>>is because tasks usually cater for various IO related things, which are
>>>not parts of jobs, but usually of system extensions. Interestingly
>>>enough, they can be set up by any job.
>
>>Not true either. Tasks should only be set up as part of device drivers,
>>either permanently or temporarily. When they are permanent, the system
>>doesn't need to clean it up (sounds normal). When a task is temporary, it
>>has to be part of a channel (either real or simulated). When the job which
>>owns the channel is released, the task is also removed ! Note that "dummy"
>>(or simulated) channels which are not in the channel table can be used for
>>this !
>
> You forgot about "things". Because of the particular mechanism implemented
> into the thing system (when (force-)removing, (force-)freeing a thing or
> when a job using a thing commits suicide), you may perfectly use "tasks"
> (either external interrupts, polled interrupts or scheduler tasks) related
> to a thing (the thing then has to provide code to unlink the task when
> necessary). This is probably (with device drivers) the safest way to add
> tasks to QDOS/SMS.
>
Exactly what I meant; force removing can be dangerous. It is perfectly
possible for a job to allocate memory and set up, say, a polled task link
there, and then force remove the job, resulting in some interesting
effects... all it takes is calling the link-polled-task routine. But
then, that's just garbage in, garbage out, and of course anything but
good programming.

Nasta

>Joachim Van der Auwera wrote:
>
>>
>> I repeat, what would be best and in the SMSQ spirit would be to have
>> semaphores with a timeout mechanisms which is provided by the OS !
>>
>
>Let us discuss this again as I believe that this is THE fundamental
>feature of QDOS/SMSQ/Stella which makes it "better".
>
>... large bit snipped ...

Let me state again why I believe that semaphores are also needed.
For starters, let me say that I assume everybody is discussing QDOS here.
If you want this to be about Stella, that's fine, but I don't think many
people know what Stella is, and even fewer have read any docs about it. I
have seen very small parts, and I assume only Jochen and you (Arnould) have
seen more. But back to the point.

Atomic routines are in a way a simple semaphore. In principle you have a
resource "allow uninterrupted execution" (meaning no job switch), which is
used to access shared data structures. If an external interrupt occurs
during this call, the scheduler is not entered. I assume that if a job uses
a lot of atomic routines, this could probably increase its execution time
relative to other jobs at the same priority (btw, our priority system is
better than most!)
In practice, I have contention over the font cache in PROforma. There are
two possible solutions: atomic access or protecting it with a semaphore.

When using atomic access, I am sure that if my job reaches the start of the
font cache handling routine, it can always start immediately. However, as
these routines can take quite a while, all other jobs are halted. If my
above assumption is true, jobs can hog the processor this way. On the other
hand, this might allow slightly higher throughput in total!

When using semaphores, you have several different resources, each with its
own contention. Job switching will continue normally even
though someone is using the cache and maybe some other jobs are using some
other semaphore-controlled resource.
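
The "semaphore with an OS-provided timeout" idea proposed earlier in this thread can be sketched like this. Python is used purely as an illustration (PROforma is not written in Python); the cache contents, function names and timeout value are all invented for the example.

```python
import threading

# Illustration (NOT QDOS/PROforma code) of the trade-off discussed above:
# one per-resource semaphore with a timeout, instead of a global "atomic"
# no-job-switch section. All names here are invented for this sketch.

font_cache_sem = threading.Semaphore(1)  # one holder at a time
font_cache = {}

def render_glyph(job_name, glyph, timeout=0.5):
    # acquire() with a timeout is the "semaphore with a timeout mechanism
    # provided by the OS" idea: a job that cannot get the resource in time
    # gets an error back instead of deadlocking.
    if not font_cache_sem.acquire(timeout=timeout):
        return None  # caller can retry or report "resource in use"
    try:
        # Only access to the shared cache is serialised; all other jobs
        # keep being scheduled normally in the meantime.
        return font_cache.setdefault(glyph, "bitmap:" + glyph)
    finally:
        font_cache_sem.release()
```

Contrast this with the atomic approach, where the whole cache routine runs with job switching disabled: simpler, and possibly higher total throughput, but one job can hog the processor for the duration.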

>What do you think about this "semaphore forbidden" philosophy?

It is a choice, and I can live with it either way. I cannot say for sure which
choice is best. It might be a good idea to show this discussion to Tony.

>Would some of the remaining QL users help to try something?

You know I would.

Joachim


2. GOLDFIRE HARDWARE

Date : 16:19 06/08/97 +0200

Hi to all,

I have noticed several posts recently that speculate about the upcoming
GoldFire, and I'd like to clear things up on what the GoldFire will
be, when, and why.

First of all, the GoldFire isn't really an SGC successor - it would be
highly unusual for anyone but Miracle Systems to introduce such a board
- it's more like a replacement. Readers of the now perished magazine
IQLR will remember that I co-authored an article about a possible
QL replacement, titled 'Is it time for the next leap?', in which plans
were laid out for such a machine.

The key features were a faster CPU, much improved interfacing so that
the confinements of the QL's 8-bit bus can be overcome, and usage of
standard PC hardware whenever possible, but not if it proves
counterproductive.

However, market demand dictated a different approach, one which in the
long run actually turned out even better - it was high time to introduce
high resolution graphics and more colours to the QL, so the development
of the Aurora took precedence. Besides, the approach outlined in the
article was rather drastic, and it became obvious that QL users prefer
to take things step by step, if nothing else for financial
reasons.

However, the idea of new QL hardware didn't die. During the
development of the Aurora, it came to light that even bigger things
were possible, if only we could get more out of the QL's aged expansion
bus. At the time Motorola announced its MCF5102 ColdFire CPU, and with
it came the possibility of producing a QL expansion board with quite
decent performance. Of course, why not learn from history, look at
what users demanded most from the previous product of that kind, and
simply strive to give it to them. So, the goals were:
- Expandable memory, as used on PCs
- Full capability parallel port
- Faster
- Retain all the features of the previous product of its kind, i.e. the
SGC.

One of the problems with using a ColdFire was that it has a multiplexed
bus structure (the QL bus is not multiplexed) and it always operates on
a 32-bit wide bus - therefore, to address the existing QL peripherals,
converter circuits would be needed. I was explaining this problem to a
friend of mine, and mentioned the limited capacity of the QL's 8-bit bus,
when he said something that made lightning strike - why demultiplex the
bus at all? As I started explaining, I realised that it would be
possible to operate the QL's bus in a multiplexed fashion to have real
32-bit transfers on it, and still maintain compatibility with the
existing peripherals. At this moment the project went from a fun fantasy
to 'must be done'. This is how the GoldFire came to be, and we would
like it to be that next leap.

Many people will undoubtedly ask why this product isn't already on the
market. Well, aside from the fact that I have a daytime job, and usually
use the night time for sleeping :-), no one is more eager than me to
make it see the light of day. However, it is a very complex and quite
sophisticated piece of hardware, if you don't mind me saying so - this
is all the more apparent considering the list of features:

- MCF5102 operating at 33 MHz; according to Motorola this should be
about 5 times faster than the SGC.
- Expandable memory: two 32- or 36-bit SIMM sockets can accommodate PC
standard 72-pin SIMMs; either two single-sided or one double-sided SIMM
can be used, of any capacity from 4 to a maximum of 128 Mb.
- Full parallel port, ECP/EPP capable
- Universal floppy controller, can even accommodate 2 Mbit/s tape drives
and can operate 'in the background'
- Automatic 4 to 12V power supply input, works in 5 or 9V systems, even
in standard QLs
- Substantially improved bus (16Mb 8-bit addressing range, 128Mb 32-bit
addressing range, speed configurable for each peripheral so even Qubide
may be substantially faster with a modern hard disc, improves Aurora
graphics speed by a factor of 3, includes automatic bus termination).
This may be the most important enhancement in view of future
developments - the QL's 8-bit bus has become a dead end.
- Provisions for a second CPU that can work in parallel with the
on-board one, or an alternative CPU upgrade

As you can imagine, bringing all this into a rounded whole isn't easy.
I will be honest - the software to support all the added IO will
probably be the tricky part. Even making the ColdFire work implies a
bootstrap program that configures it to act more like a 68000, which is
what QDOS/SMSQ expects. IO addresses and areas have to move from the
addresses QDOS/SMSQ normally expects - they used to sit just beyond
RAM, and as the available amount of RAM has increased, the addresses have
changed. Even initialising the IO chip used is a chore - it is a
standard PC component which is Plug and Play compatible, and figuring
out how to usefully initialise it took the better part of a week, no
thanks to the fact that the PnP specs aren't really understandable by anyone
but Microsoft (so much for using standard PC components - you get more
problems than you bargained for). Many users are fooled into
thinking that having a full parallel port will automatically make it
possible to use parallel port oriented peripherals like scanners or hard
drives - wrong! Even on a PC you need a driver for each one, and
consequently the same is true for the QL - but obviously you don't get a
QL driver when you buy the peripheral. Because of this we are prepared
to give any details of the GoldFire hardware to prospective driver
writers. HOWEVER! This does not mean that programs should be written
to access the hardware directly - if this is done, once a new product
is introduced, we will have this problem all over again! In order to be
able to accommodate all the eventual drivers, we have provided a means to
fit an on-board EPROM onto the GF with up to 512kb capacity.

Most of the GoldFire hardware specs have been decided on, and I am
developing a prototype as fast as I can. However, at this point not
everything is carved in stone - I'm willing to discuss any aspect of
the GF, and in fact some added features are still being debated.
Obviously I am on this list, and if anyone wants to send a direct email,
my address is zeljko.nastasic@zg.tel.hr.

I will continue posting developments as they occur, but for now I will
not waste space and your time any more.

Regards,

Nasta

Date : 11:17 07/08/97 +0200

Spike wrote:
> 
>>...I look forward to its launch.
> 
> I think we've all been doing that since we heard about it last year....
>
Well, mea culpa, mea culpa, mea maxima culpa... Unfortunately it doesn't
help. One of the problems with getting the GoldFire out is that our PLD chip
supplier hasn't lived up to its promises. We intended to use a family of
PLDs that were supposed to be already available, made by Cypress. The
specs looked just like they were made for us :-) Unfortunately, aside
from learning another language to do the hardware description, no one
has yet seen a working chip. Because of this the prototype will use MACH
231 chips as found on the Aurora. However, this means more and larger chips,
and in the end it's virtually impossible to make a printed circuit
board (PCB) that will hold all the hardware at the size we would like -
it has to be larger than originally planned, and consequently more
expensive. In addition, the MACHs use more power. Simply put - the added
wait means about 50 pounds less cost for the customer. You decide if
it's worth it.

The GF is a board exactly the size of an Aurora, roughly 10x16 cm.
Unfortunately, some very unorthodox techniques are needed to be able to
fit two SIMMs on it and still be able to plug the board into a QL (We
know there will be users that will want this). Because of this the
available area is substantially reduced - in fact, the SIMMs and the
various connectors take up about 50% of the available area. In addition,
because there is only one way to put the SIMMs in, the board cannot have
a heatsink (like the big gold one on the SGC), which means it has to
have a highly efficient (= far less heat) switching power supply. It is
highly possible that this will be the very first 6-layer PCB in QL
history (GC, SGC, Qubide and Aurora are 4-layer), with components on
both sides of the board. Believe me, that is _not_ an easy board to
design. We had originally planned only 4 chips (the reason for this
should be apparent - there isn't space for more!), but if the delays
with Cypress continue, it will have to be 5 or even 6. Needless to say,
before that is decided and the logic fitted into the two or even three
programmable logic chips, PCB design cannot even be contemplated. Also,
all the chips are in surface mount packages. To give you an idea what
handling them means - the ColdFire is a tiny chip, square, about 1.6 cm
to the side, and has 144 pins. The distance between the pins is 0.2mm
and the pins themselves are 0.3mm wide. The same goes for the
programmable logic chips, which have 100 and 160 pins. We will need to
find someone with a facility to solder those chips, otherwise we'll have
to pay dearly for the necessary tools. And, of course, I'm only talking
about the hardware.

Before I come across as someone who constantly whines and complains,
let me just add that the delays have allowed time for numerous refinements
of the original ideas, which should ultimately make it a better product.

Nasta


3. AURORA/SGC/GC ELECTRICAL ISSUES

Date : 12:38 18/09/97 +0200

Phil Borman wrote:
> 
> Roy Wood wrote:
>
> > I have built a few Aurora / Qubide / SGC systems now and one
> > of the oddest things that I have come across is this sizeist
> > message 'FAT is wrong try rebooting'.
> > Some setups seem to give this message when the SGC is
> > plugged into the Qplane but not when it is plugged into the
> > end of the Qubide. Some work OK in the normal way. I have
> > so far not been able to find out what the difference between
> > them is, so if you consistently get this message try sticking
> > the SGC into the end of the Qubide and see if that solves the
> > problem. Steve Hall seems to think it is due to noise on the
> > lines and stuck an RF choke across the active lines of the
> > voltage regulator on the SGC.
>
> Phil Borman wrote:
> 
> Exactly right... If there are line noise problems, reading a large
> file (like the map) off the disc will show up these problems almost at
> random. If you are getting map corruption on power-up, change your
> hardware configuration. If the map  can't be read properly it's unlikely
> you can read files safely either.
> Qubide is _very_ sensitive to line noise problems. It really needs
> terminating resistors on the data lines... There was some talk of
> producing an add-on board for Qubide to put
> some terminating resistors on, but nothing ever came of it.
> 
The problem is caused more by the interaction of the hard drive,
cabling and the system than anything else. Noise is the root cause, but
this actually comes from the SGC, and _especially_ from the GC if one is
used. Both use very fast chips to buffer all the data and address lines,
but there is no real termination. Qplane only solves some of the problem
because it has parallel terminators; in fact, series termination
would be needed. This would have involved a resistor in series with all
the GC/SGC data and address lines, and most control signals - very
impractical. However, I do learn from mistakes (preferably other
people's :-) ), so the GoldFire will have those series resistances
built in.

The reason why a particular drive influences the noise problem is that
some of them actually have the IDE bus (as opposed to QL bus!)
termination resistors built in. About a year after the Qubide was built
the documents outlining the IDE/ATA standard suddenly sprouted an
appendix where it was said that they simply forgot to take the
termination issue into account, and that new designs should include
termination. Rest assured that if ever there is a Qubide II it will have
them.

There are several solutions to this. Many users have noted that if they
remove the 5V operation jumpers on the Qubide regardless of the fact
that it does operate on 5V in their system, the problem goes away.  In a
noiseless system this would mean that the Qubide power supply is
actually around 4V; considering that the operating margins are 5V +- 5%,
this obviously shouldn't work. However, in 98% of all cases it does. The
reason is that the protection circuits in the chips on the Qubide
actually shunt the noise spikes from the IDE and QL bus into the Qubide
power supply. This has two consequences - first, the spikes are
dissipated, because the Qubide uses them to supply part of its power,
and secondly the power on the Qubide is closer to the operating margins.
Be advised that this is actually a very precarious situation and if the
GAL1 on your Qubide burns out, that's your problem. This will happen
almost as a rule if the system is powered from several power supplies
(say, the QL from one and the disc from another, or the QL and discs
from one and the SGC from another). If you have put everything into a PC
case, and it's powered from the same power supply, in all chances it
will work for ages.

BUT! The floppies, hard drives etc. all make their own power supply
noise. In the original QL this was not an issue, since the QL was powered
from a different power supply, so there was no way for noise from one to
get into the other and the other way around. In a tower case there is,
because the 5V line that powers the drives and the backplane and
whatever else is literally one and the same. Hence the problems with various
systems - some will behave and some will not. Add to that the various
versions of cabling which users put in themselves, and the number of
combinations rises further.
The remedy for chronic misbehavers is:
1. A ferrite ring with the power lines (ONLY the power, not ground!) to
the floppies and hard drives wound a couple of times through it.
2. Change the system by moving the Qubide around. If you have an
Aurora and Qplane, you can try plugging the SGC into the Qubide, and
even the Qubide into the Aurora and then onto the Qplane.
3. A small toroid choke across the 7805 regulator on the SGC instead of
a wire to convert SGC to 5V operation. 22uH rating for at least 0.5A
current should be OK, but it's not critical - avoid bigger than 47uH.
The same can be done for the Qubide too. Remember that 5V converted
boards must NEVER be used in a 9V system or they will suffer a very
horrible death.
4. Series termination in the IDE lines. Some lines have to be cut and
resistors put in to bridge the cuts. Lines D0 to D15, A0 to A2, /IOR,
/IOW, /CS1 and /CS3 should have this modification done (23 lines), the
resistor values should be 33 or 47 ohms. All other lines should be left
as is. Look for a schematic in the next issue of QL Today. If you are
planning to do this, think it over - a short circuit on the IDE cable
could be disastrous for the drive and/or Qubide.

A number of users have had problems with GC/Qplane and Qubide - the
Qubide produces the copyright message and then stops. The Qubide driver
relies on a fully operational clock. A GC converted to 5V operation by
shorting the 7805 regulator (on the golden heatsink) is NOT enough. Read
the Qplane manual!!! To make the GC clock work a wire needs to be
soldered across two components on it. If the clock doesn't work, the
Qubide driver will continue waiting for the initialisation of the drive
forever, and no Initialising message and dots will appear.

Nasta

> Nasta, does this also apply to the Super Gold Card, Or is it just the Gold
> Card? Frank
>
For most of the above reasons, yes. I don't really know why (I only have
ideas about it) the GC can be much more problematic than an SGC. Things
are occasionally made worse by the fact that some GCs will work only
with a 'fast' ingot in some cases. It has never been explained exactly
why. The clock mod is not necessary on an SGC, because the SGC clock
already has it built in, since it was designed with a 5V operation
feature. BTW I strongly advise against using it, because it makes the
power and ground routing even more wrong than it already is. This will
be especially true if it's powered from a different power supply than
the rest of the system - the various voltages should then come up in a
proper sequence, and that is impossible to guarantee.
I have seen drives that just misbehave and that's it - then you put them
in another system and they work a treat. Unfortunately, when I developed
the Qubide, drives were different and much slower, and even today
(and I have used tens of different drives) I never have this problem on
my system, so it is very difficult to test for them. I must confess that
developing the Qplane and Aurora was to a large extent an attempt to
somewhat standardise the QLs out there, because it would have become
impossible to figure out all the interactions.
I have forgotten to mention that good and NEAT power and ground routing
is very important. I have seen systems that look like an Australian bush
impersonation made out of wire. That is NOT the way to do it.
Unfortunately, some of those work and you just can't get their
users/creators to do it properly. Then they add a piece of hardware and
it turns out to be the last straw - suddenly the whole thing won't work.
Even though it seems obvious that the most recent add-on is to blame,
you'll have to take my word for it - in most cases it is not.

Nasta

Yes - and it is _not_ good practice to get 5V on the QL motherboard by
connecting a jumper in the original 7805 regulator socket to short +5v
and +9v.  That increases the 5v rail considerably.  Best is to solder
the back pin of the expansion connector to the nearest point of the 5v
rail on the QL motherboard.  Easy to find as it is the wide track
running beside the ROM port (and outside it - _not_ the wide track
running underneath the rom port pins) and there is a convenient via near
the J1 expansion connector.

Tony

>What was the A0 problem, exactly? I didn't know there was one on GC/SGC?
Well, Stuart knows the _right_ answer.  GC (and SGC) generate A0 for the
rom port, and apparently the gap between ROMEO (as Stuart calls it) going
low and address line activity (Thold) is too short.  The solution was to
get the PLD to generate A0 for the memory, and waste a pin.  Anyway,
that is how I interpreted Stuart's argument.

Mind you, we still have one I/O and two IN pins available - any
suggestions on what to do with them?  I thought of feedback to identify
whether memory was write protected (we have a WP jumper) but 'write
error' would cover that (and a _real_ write error).

Tony

4. AURORA SERIAL PORTS

In article , Roy Wood
 writes
>    Having just typed a message about the serial ports for the
>Aurora, something struck me. Tony's and Nasta's explanation
>of the pins which are common on the two serial ports led
>me to try swapping the ports over. This is the answer to the
>problem. The two serial ports are labelled wrongly in the
>manual. If you change them over it works !! The thing that
>leads you to the conclusion that you are using the right port
>and it is wired wrong is that you can still type on the wrong
>port and it communicates.  
I have checked the Aurora board with a continuity meter and the ports are:

        -------------                    SER1
       |             |          
       |             |        3  RX   ->   1489 p13 ->   8049 p6/21
       |             |        4  RTS  <-   1488 p11 <-   8049 p35
       |             |        5  TX   <-   1488 p8  <-   8302 p13
       |             |        6  CTS  ->   1489 p10 ->   8302 p6
       |             |
       |             |                    SER2
       |             |
       | IPC         |        3   RX   ->   1489 p1  ->  8049  p6/21
       |             |        4   RTS  <-   1488 p3  <-  8049  p36
       | 1 2         |        5   TX   <-   1488 p6  <-  8302  p14
        -------------         6   CTS  ->   1489 p4  ->  8302  p7
       /    \
     SER1    SER2

This actually agrees with the manual I have in Appendix A (contrary to
what I emailed you Roy) and with the QL motherboard (8049/8302).  Note
that DTR is NOT as labelled in the QL manual, but simply the standard
constant DC as per the rest of the world.

RTS on Aurora is what the QL describes as DTR - ie what the original QLs
signal was (it was never a real DTR).

From the wiring of the pins, it looks as if one uses 9 way ribbon cable
with IDC 9D plug at one end and IDC 10 way DIL at the other (leaving pin
10 unconnected at Aurora end), then this should give standard IBM AT
pinouts.  ie configured as MODEM (ser2 type) for both ser1 and ser2.

Note for a modem, any standard 'straight through' cable will work.  Do
NOT rely on 9D to 25D moulded convertors.  They are designed for the IBM
environment and the ones I have tested do not connect RTS.  Note that
modems often do not enable RTS by default.  This MUST be enabled in the
modem's configuration - put it in NVR.  The rest of the world thinks
computers/modems do not need RTS - the QL does.

If wiring a printer lead, connect DTR to the real DTR (pin 4 on 9 way D)
- do not use RTS.  

Note the bump polarising and pin 1 mark go towards the 8049 IPC socket
(production Auroras are NOT fitted with bump polarised locating plugs).

I haven't got the connectors myself and haven't tried this.

Tony


As I said, the electrical connections are 100% - although 'RTS' on
Aurora is the QL 'DTR' of course.  How is your cable wired?  (It should
be RTS-RTS, CTS-CTS TX-TX RX-RX)

As Nasta is using both the original 8302 and 8049 connections, the rom
must be accessing the hardware in exactly the same way as the std QL
motherboard surely.

Is that right Nasta?

Dilwyn Jones has also experienced Aurora handshake problems using the
Miracle dual port ser-centronics convertor.

Tony

Yes. There isn't really a thing you can do differently with 8302/8049, so
on the Aurora it is completely the same as on the QL. Well, with one
tiny exception - the 'combining' of ser1/ser2 RX inputs is done by a
wired AND, but that doesn't make any difference to handshake anyway.

The only real difference is that both ser1 and ser2 are wired the same
(no DCE/DTE versions as on the QL) AND the QL's 'DTR' has been renamed to
'RTS', which is the proper signal to be used anyway. DTR is, per
RS232/V.24, defined as a handshake signal that will be high when the
_D_ata _T_erminal is _R_eady, meaning initialised and MAY (if RTS has
the correct level) receive data.

The handshake versions are:
1) none - only RX/TX and GND are used in a cable. In fact, software
handshake can be implemented (XON/XOFF).
2) RTS/CTS - RX,TX,RTS,CTS,GND used in a cable.
3) RTS/CTS DTR/DSR, also known as full handshake,
RX,TX,RTS,CTS,DTR,DSR,GND used in a cable.
Note that DTR/DSR-only handshake is not defined in RS232/V.24, although
when you look at the logic of how it works, it can work, which is
probably responsible for half of this confusion. To make things worse,
the QL 'invented' yet another non-standard version: CTS/DTR handshake. I
don't know if the remaining one, RTS/DSR, is around too, but I wouldn't be
surprised.

To turn a device with type 3 handshake into type 2, the best way is to
tie DTR and DSR on that device together. To turn a type 2 into type 1,
tie RTS and CTS on that device together. And, finally, to turn type 3 into
type 1, connect DTR to DSR and RTS to CTS on that device.

Another small note: on modems DTR is used to automatically terminate a
connection if the equipment connected to the modem is turned off. To
give this signal the proper level, it has been wired to +12V over a
resistor. If the modem you use doesn't want to send data to the
computer, it is possible that it has DTR tracking turned on, in which
case you need to:
   1) either connect the DTR line from Aurora to the modem
   2) Or connect DTR and DSR on the modem together
   3) Or disable DTR tracking on the modem by the appropriate command (for
      which you need a working connection, though :-) ).

There will soon be an article in QL Today about the mysteries of serial
comms, which should explain how to make any connection work.

> Dilwyn Jones has too experienced Aurora handshake problems using the
> Miracle dual port ser-centronics convertor.

This is unusual - but it is certain the plug would need rewiring as, if
I remember correctly, the converter gets power from the handshake lines.
If the incorrect one is used (RTS vs DTR), it will not handshake.
      
     Nasta

>This is unusual - but it is certain the plug would need rewiring as, if
>I remember correctly, the converter gets power from the handshake lines.
>If the incorrect one is used (RTS vs DTR), it will not handshake.
I made this one and it was wired correctly!
ie RTS - RTS
and DTR for power - ie in place of the QLs +12V orange lead.
I will have to build another ser-centronics for Aurora and try again...

     Tony

In my previous message I said that if 10-way IDC DIL and 9-way IDC D
connectors are connected with ribbon cable, then this will give a
standard 9D AT pinout. This is the cabling that Ron Dunnett of Qubbesoft
provides (and fits when he makes up tower cases).

What is wrong is Nasta describing the 10-way DIL as a standard IBM PC
pinout.

IT IS NOT.  Standard IBM 10-way IDC to 9D and 25D adaptors will not
work, as they follow the PIN NUMBERS on the AT 9D connector (ie 1<->1,
2<->2, etc).  It is quite a nice error though, as IDC connectors are easy
to make up, and the off-the-shelf convertors usually come with a parallel
connector on a card mounting bracket.

Maybe this is the key to some people's problems.

Remember that the serial port hardware (8302/8049/1488/1489) is
identical in function to the std QL.  Only the pinouts on the 1488/1489 are
different.  Nasta connects them in a much more logical way.

I find with QL motherboards that some 8302s do not drive the 1488
consistently.  Some 8302s give a low output voltage.  Note the essential
addition of resistors to the 8302 to tie 2 8302 pins (RAW1 & RAW2) to 
-12V is catered for on the Aurora pcb. 

The best way of testing the ports is to construct a loopback lead. Remember
that this cannot be straight through as on the QL - the two Aurora ports
have the same pinouts.  Swap TX/RX (2/3) and RTS/CTS (7/8), connect to ser1
and ser2, and use the following function:

  5 DEFine FuNction sertest
  10 t$='Test':tag=0:CLS
  20 OPEN#3,'ser1':OPEN#4,'ser2'
  30 FOR j=75,300,600,1200,2400,4800,9600:REMark and 19200 for Hermes
  35   BAUD j
  40   PRINT#3,t$:PRINT#4,t$
  50   INPUT#3,a$:IF a$<>t$:PRINT j,'Fail 1':tag=1:ELSE:PRINT j,'Pass 1'
  60   INPUT#4,a$:IF a$<>t$:PRINT j,'Fail 2':tag=1:ELSE:PRINT j,'Pass 2'
  70 END FOR j
  80 CLOSE#3:CLOSE#4
  90 PRINT:IF tag:RETurn 1
  100 RETurn 0
  110 END DEFine sertest

This has been typed live here, so of course it may contain bugs :-)
(but I am sure you get the picture).  It is practically identical to
the test routine that Thorn-EMI used to test QLs in the factory.  I have
a test jig they used, with basic code built into a ROM.  This used the
same pcb as the infamous ROM dongle.

Often with a faulty port the function will get stuck at an input line.

     Tony


 Davide Santachiara writes:

>Yesterday I prepared a serial adapter for a QL Printer (with serial
>interface). What I did was simply to check the connections between the
>original QL to printer connectors and adapt them for a 25 to 25 pin
>connector as used with Aurora and sH.

>Unluckily the result is that the printer works with sH serial 3 but not with
>the Aurora port.
>
>Here are the original QL ser1 to QL printer serial port connections (QL
>ser1-QL printer):
>
>QL ser1-QL printer
>
>1-7  gnd
>2-2  rxd
>3-3  txd
>4-20 cts
>5-5  dtr (rts)
>6-6  12v
>
>So I made a 25-25 cable as follows (Aurora-QL printer):
>
>Aurora serial port - QL printer
>2-3  txd
>3-2  rxd
>4-5  rts
>5-20 cts
>7-7  gnd
>20-6 12v (DTR)
>
>Where's the problem ?
You haven't said how your Aurora to 25D connector is wired.
The issue is how the 25D to Aurora lead is wired.

Aurora is in fact designed for 9D connectors - pin to pin with IDC.
Nasta got the pinouts wrong and it will NOT work with standard IBM PC
connectors.  The IBM connector is designed for 1-1, 2-2, 3-3, and so on to a
9D AT connector.  Nasta assumed, wrongly, that they used IDC straight
through.  That is not so.

Wiring should be, if you follow the Aurora philosophy of 'modem' ports,
as per PC:

       Aurora        25D

         5    TX      2
         3    RX      3
         4    RTS     4
         6    CTS     5
         9    GND     7
         7    DTR     20

Pin 6 (DSR input) is not implemented on Aurora - it has standard QL
ports.  Other than that, your wiring from the Aurora 25D to printer is
OK, as long as you connect to the Aurora 10-way DIL as above.  Note pin
numbering as per the manual - the rows are 1/3/5/7/9 and 2/4/6/8/10.

     Tony


5.  EMULATOR SPEED COMPARISONS

I made some speed comparison tests on the following platforms:

1. QPC v1.40 - AMD 200 MHz - 32 Mb - Matrox Mystique 220 - VESA driver
   operating system SMSQ/E 2.84a
2. QXL 68040 20 MHz - Pentium 100 MHz - 40 Mb ram - Matrox Impression
Lite
   operating system SMSQ/E 2.85
3. SGC 68020 24 MHz - Aurora graphic card - 4 Mb
   operating system SMSQ/E 2.85
4. QemuLator running on AMD 200 MHz (like QPC) 
   operating system Minerva + Lightning (640 Kb memory)

I did two tests

Test 1: QSI speed index (it mainly measures CPU speed)
Test 2: LIST of a 640 line SuperBasic program (mainly measures graphic
speed)

Here are the results

Machine      Test 1         Test 2       Graphic factor
              QSI           Listing time  list time*QSI
1             200             7s           1400
2             410             5.5s         2255
3             270            23s           6210
4             180            16s           2880

The QSI speed index measures the relative speed compared to a Gold Card
QL: QSI 100 is the speed of a Gold Card QL.

The QSI test was performed with Archive238 with some QPac 2
buttons.
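
The "graphic factor" column above is simply the QSI index multiplied by
the listing time, so a lower figure means the graphics keep pace better
with the CPU. As a quick sketch (a Python illustration of my own, using
the figures from the table):

```python
# Graphic factor = QSI speed index * LIST time in seconds.
# Lower is better: graphics output keeps up with the CPU.
results = {
    "QPC v1.40":  (200, 7.0),
    "QXL":        (410, 5.5),
    "SGC+Aurora": (270, 23.0),
    "QemuLator":  (180, 16.0),
}

factors = {name: qsi * secs for name, (qsi, secs) in results.items()}
for name, factor in factors.items():
    print(name, factor)
```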

Here are my comments on the above results:

1. QemuLator, which is a Windows 95 program, compares very well with QPC
   in raw CPU speed (180 QemuLator, 200 QPC on the same machine).
   The graphic speed of QPC, however, is much better.
2. QPC graphical speed is the best of the four systems. QPC v1.40
   not only has the smoothest graphics but is also the fastest.
   This can be seen in the graphic factor: the greater the number, the
   worse the graphic speed compared with the CPU speed.
3. The QXL is still the fastest system in raw CPU speed. It also has
   an interesting graphic factor; pity that the graphic smoothness
   is really poor (the screen is refreshed slowly - this is the usual
   QXL limitation of poor I/O).
4. QemuLator graphic speed is in the medium range. Anyway, I consider
   both its CPU and graphic speed very good, because QemuLator is a
   native W95/NT C-compiled program and it compares very well with its
   "competitors" in both fields. Please note that QemuLator used
   Minerva + Lightning while the other systems use SMSQ/E.
5. SuperGoldCard(+Aurora) graphic speed is really poor. Maybe Nasta will
   explain better why. I suppose this has something to do with the old
   QL bus conception. I remember that Nasta said that with the Coldfire+
   Aurora combination the graphic speed will be improved a lot.

Bye to all

Davide Santachiara

> SuperGoldCard(+Aurora) graphic speed is really poor. Maybe NASTA will
> explain better why. I suppose this has something to do with the old
> QL bus conception. I remember that Nasta said that with the Coldfire+
> Aurora combination the graphic speed will be improved a lot.
> 
Yes, the reason for this is that the 8-bit QL bus which is used by the
Aurora to communicate with the SGC is very slow. The GC/SGC made this
problem a bit better by using memory shadowing of the original QL screen
RAM area. However, since the Aurora displays higher resolution, the
original screen RAM area isn't sufficient. Unfortunately, the GC/SGC is
unaware of the new addresses used so the data transfers to and from
Aurora go at the slow 8-bit speed. The GoldFire will improve this in two
ways - first, the Aurora is capable of performing accesses about 2.5
times faster than the 8-bit QL bus currently manages. The GF will be
able to operate the bus at this improved speed even in 8-bit mode. Also,
the GF will be aware of the Aurora screen RAM addresses and will shadow
them too, so the net speed improvement should be significantly more
than 2.5x.

Nasta

6. BENCHMARKS

Promise kept, here are the benchmark results...
(these numbers are proportional to the speed:
the higher the better)

      \ benchmark progs:
       \  PRINT       FLOAT     RECURS       STRING       STRING2
--------+---------------------------------------------------------
QL       :  960         860         13          1140     (untested)
UQLX     : 5240        6900         96          8140          63
Q-emuL   : 1780        7520        105        9240-9300       74
QLay     : 5160-5280  7640-7740    111       9700-10020       77
QLay-d6  :  880          "          "             "            "
QLay-Lnx : 4700        7980        115         10220          80 
QPC 1.41 : 9500       17500        136         39080         408 
QPC 1.15 : 9300       16640        134         37700         394

(For comparison, here are Jan Venema's (QLay's author) figures:
CPU          PRINT FLOAT STRING  Configuration
QL original    980   840  1100   128k, JS-ROM
SandyQL       1080  1040  1360   512k, Floppy, Par. Port
486/DX2-66     660   980  1300   QLAY081 -d2 (Win95)
486/DX2-66    1080  1360  1820   QLAY082 -d2 (Win95)
486/DX2-66     920  1440  1840   QLAY083 -d2 (Linux)
Amiga A1200   2500  1800  2000   QL emulator, with 68060 at 50MHz
PentiumPro200 6000  8000 11000   QLAY082 -f2900 ! [mine has -f2700])

The emulators and QL:
- QL: a brave old unexpanded QL with JM rom.
- Q-emuL: Q-emuLator for Windows,
          version alpha, fifth release (March 1998)
          rom JS (the one provided with QLay082)
- UQLX: release "12/17/97  19:33:38", rom JS (provided along with UQLX).
- QLay: version 082, -d2, rom JS
- QLay-d6: same version 082, -d6 
     (PC graphic mode 1024x768, each QL pixel is mapped to 2x3 PC pixels)
- QLay-Lnx: (Linux) version 083, -d2, rom JS (from UQLX)
  (I couldn't use -d6 flag with QLay083, I got "can't initialize graphics")
- QPC 1.41: version 1.41 (SMSQ/E 2.84) with 4MB RAM
- QPC 1.15: version 1.15 (SMSQ/E 2.81) with 4MB RAM

All emulators ran on a Pentium 166 MMX, 32MB SDRAM, Win95 OSR2
with an S3 Trio V64+ graphic card;
UQLX and QLay083 under Linux 2.0.29; 
UQLX was compiled by GCC using the provided makefile.

The benchmark programs:

- "PRINT","FLOAT","STRING" are the parts of the QLay SuperBasic benchmark
program (qsbb_bas), slightly revisited: I replaced the line t=DATE by
the lines
   t=DATE
   REP synch: IF t<>DATE: EXIT synch
   t=DATE
and as a result, each test lasts exactly 21 seconds (plus time to quit
the loop). Another change made is to add CLEAR before the tests: this
speeds up the "FLOAT" part. (...why ?)
The final results are the number of times some (sequence of)
instructions may be executed in 21 seconds. It is rounded up to 
the next multiple of 20.
"PRINT" prints integers from 1 on to the screen
         (MODE 4, csize 1,1, white on red)
"FLOAT" computes SIN(1), LN(4) and EXP(10)
"STRING" performs the rather strange operation
   a$="abcd...z": b$="ABCD...Z": c$=fill$(a$&b$,[number from 1 to 20])
   (replace the dots by the expected consecutive letters)

- "RECURS" is meant to measure the proc/fn calling speed. It counts
how many times we can call fibo(10) in 21 seconds, where
DEF FN fibo(n)
  IF n<2: RETurn n
  RETurn fibo(n-1)+fibo(n-2)
END DEFine
is a VERY inefficient way to compute Fibonacci numbers.
(oh yes, perhaps the speed of this program also depends on the
arithmetic stack handling)

- As I was surprised by the astonishing speed of QPC at string
handling, I devised a program "STRING2" to see if it was not due to 
the particular form of QLay's benchmark. "STRING2" plays with &,
INSTR$, and splicing, in order to achieve a quite natural task.
Well, QPC performs even better on my program...

(If anybody is interested, I can send the actual programs to the 
ql-users list)
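
To give a feel for what these tests measure, here is a minimal Python
sketch of the same ideas (all names are mine, not from the original
qsbb_bas): synchronise on a clock tick before timing, count iterations
in a fixed interval, and use the doubly recursive fibo as the
call-overhead workload.

```python
import time

def fibo(n):
    # Same doubly recursive shape as the SuperBASIC FN above:
    # fibo(10) returns 55 at the cost of 177 function calls in
    # total, which is why it exercises proc/fn calling speed.
    if n < 2:
        return n
    return fibo(n - 1) + fibo(n - 2)

def count_iterations(work, seconds):
    # Mimic the REPeat synch trick: a coarse 1-second clock only
    # gives a fair interval if timing starts just after a tick.
    t = int(time.time())
    while int(time.time()) == t:
        pass
    start = int(time.time())
    n = 0
    while int(time.time()) < start + seconds:
        work()
        n += 1
    return n
```

For example, count_iterations(lambda: fibo(10), 21) would give a
RECURS-style figure.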

Some remarks/questions:
- Figures are not as precise as they look, especially for emulators
  that have to multitask with other programs. My first tries with QLay
  did lead to figures like 5280-7740-10020 for the QLay benchmark trilogy,
  whereas I cannot presently achieve much more than 5180-7640-9740.
  I don't know what changed (not the program anyway, it's in both cases the
  revisited version.) 
- All tests are actually S(uper)Basic tests. Since I haven't any
  connection (yet !) from my old QL to my PC, I don't have access to
  my QL programs and so cannot devise pure 68K tests.
  Would they lead to significantly different results ?
- QPC performance is quite erratic compared to the others (particularly
  fast on STRING, comparatively poor on RECURS).
  In the QPC/others comparison, the SMSQ/QDOS differences must play
  a role, especially with string handling and "RECURS". Could SMSQ
  knowledgeable people explain why ? I'm quite curious about this !
- also, QPC and QLay work full screen (and QPC bypasses Win95),
  whereas Q-emuLator and UQLX live in their own windows. This could
  explain some differences (especially Q-emuLator print speed; overall
  speed as well ?)
- How well would text-printing accelerators perform ? They should
  improve the PRINT figures a bit for JM or JS roms.
- I am a bit disappointed by UQLX speed, compared with its
  reputation.
  At first I thought it could have something to do with my Linux, 
  X or GCC settings, but its performance
  on its own benchmarks B1 and B2 seem normal (2.0s for B1, 4.9s for
  B2), or at least not abnormally slow, when compared to figures
  given for a Pentium 133 (4s and 9s without some optimisation flag).
  According to Jan Venema's figures, QLay082 is much faster than QLay081,
  so perhaps was some time ago UQLX indeed the fastest emulator after QPC ?
  [question to native english speakers: in which order am I supposed to
  put the words in the above sentence ? :-)]
  Or does UQLX perform (comparatively) better on other hardware
  configurations ?

Frederic van der Plancke
vanderplancke@agel.ucl.ac.be

Some remarks about your benchmarks:

- First of all you cannot seriously benchmark an emulator using an
  S(uper)BASIC program. Why ?  Because QPC runs SMSQ/E with
  an "interpreter" which is almost as fast as QLiberator (i.e. an
  SBASIC program will run as fast under SMSQ/E as a QLiberated
  version of the same SuperBASIC program under QDOS/ARGOS/Minerva).

- Second, you can't get reliable results with QLAY as this emulator
  does not get true polled interrupts (you have to specify how many
  emulated 68000 instructions are to occur between two emulated
  "interrupts" when configuring QLAY; this is highly imprecise of
  course). The only solution is to use the real time clock, which is
  also very imprecise (+/- 1s on each result) and thus will oblige you
  to run the benchmark with a very high number of loops in order to
  do the measurements over 500s or more (be patient !).

- Third, UQLX speed depends greatly on the compilation options you used
  (particularly on Pentium processors; Cyrix processors are already
  very optimized and UQLX speed is therefore much less dependent
  on what optimisation options you used). Moreover my benchmarks proved
  that UQLX speed also depends on what Linux headers version you are
  using at compile time (headers bundled with 2.1.x Linux kernels result
  in a 10% speed increase over headers of Linux 2.0.x kernels).

I would therefore recommend using either pure machine code programs
(bogomips for a rough evaluation of the emulated CPU power in MIPS),
C compiled programs (dhrystone and the like) or compiled (either
QLiberated or Turbo-charged) SuperBasic programs (TEST909_obj is good
and evaluates the machine speed in many domains).

Thierry.


7.  POWER SAVING MONITORS

Davide Santachiara wrote:
> 
> This message is for Nasta ;-)
> 
> Many SVGA monitors have power saving features: standby, off etcetera.
> 
> Wouldn't it be possible to make a software control for Aurora to put
>in standby or in off status the monitor (like many screen savers on PC do) ?
> 
Well, not really, at least not at all easily. There are two ways to put
a SVGA monitor into standby - one is when it notices that there are no
synch signals coming to it for a certain period of time. If you have a
monitor which is capable of detecting this, it will go into low power
standby when you switch off the computer, which is in fact what I do
with my system. Of course, I imagine you are looking at it from the
viewpoint of a BBS operator, where the monitor is mostly off and the
computer is running. There is no way for the Aurora to switch off its
synch signals, unfortunately. I'm almost tempted to publish the
schematics and dare anyone to figure out a way to add that signal into
the logic :-))
I'll give you a hint - all pins on the logic chips are used up :-)
The second way is through the use of several previously unused lines on
VGA as a serial interface to pass VESA commands to the monitor. I don't
have any hard data on these, and for the same reasons as above it would
be difficult to incorporate them onto the Aurora, but that does not mean
it cannot be added from outside (hint: using Hermes or Superhermes spare
IO lines, for instance). If anyone has any specs on this, I'd be glad to
look into this.

Nasta

Ian wrote:
> 
>      About POWERSAVING in a Monitor
> 
>      My monitor has that facility but reading the manual I see
>      that if there are horizontal and vertical syncs present
>      there is no power saving. When my screen is blanked the power saving
>       does not operate so something extra must happen. I guess that is what
>      Davide was after. So there could be a general interest in a nice
>      simple solution, be it soft or hard but preferably no soldering.
>
OK, I have looked into this and it seems that it would be possible to
switch off the synch signals on Aurora when the blanking bit in the
display control register (same location as on the QL) is set, which
blanks the display. This would have to be done by reprogramming the
MACH211 chip adjacent to the monitor connector (small rectangular chip,
PLCC44 package). I have not tried to generate the modified program for
it and do not know if it is possible to fit the required modification
into the chip without a different pinout resulting from the
modification. If a different pinout is generated, it will not be
possible to modify the Aurora in this manner. Even if it ultimately is
possible, the fact that there is no synch when the display is blanked
may confuse some older monitors, so this is yet another thing where
incompatibility may result. I'll look into this some more as soon as
time allows - watch this space.

Nasta


8.  DRIVERS & NEW HARDWARE

Thierry Godefroy wrote:
> 
> At 18:56 08/01/98 +0100, Zeljko Nastasic wrote:
> 
> >Claude Mourier 00 wrote:
> >
> >> Scanners, CD ROM drives, Tape Drives, Video Capture, the usual.
> >> The only problem with SCSI is that you've got to choose WHICH SCSI.
> >> There are about 5 different variants, I think. Possibly more.
> >
> >Amazingly enough everyone forgets about drivers?
> 
> Well, before you can write drivers, you need the hardware...
>
Oh, not at all - if you are making hardware, you know a lot about it
well before you start actually doing a prototype - that's the moment to
start thinking about a driver, because whoever is writing it can still
influence the hardware design to make it mutually easier, before
anything is set in stone.
>
> >What will you do, use the ones provided on the diskettes you get with
> >your SCSI device, made for the PC????
> 
> Of course not on a QL compatible with just a 680x0 on it, but in the
> case of the QXL (or QPC, UQLX, etc...) YES!
>
Oh, OK, I do completely agree, but I was talking about a QL, not an
emulator, since that was what the thread was about - hardware for the
QL, namely a hypothetical sound or SCSI card. Although, there is one
point you have missed here but I'll get back to that further down.
>
> For other type of hardware (sound, ethernet, etc...) the only problem is
> to be able to extend the software interface part so that we can make the
> corresponding calls to the BIOS/DOS/Linux kernel.
>
This will not work anyway if you don't have anything to talk to on the
QDOS/SMSQ end of things, and that is in my view the main problem.
Consider any type of 'interface' device, which by itself really has no
function until it is defined by what is on the other end of the
interface. Examples: network and SCSI. Even on the PC all the
implications of such devices are avoided - did you know that there is no
reason why a SCSI card couldn't be configured as a SCSI target, or that
a SCSI bus can have more than one initiator? This means that two
computers could be networked over SCSI - yet no-one does it. That's only
the hardware end of things. Now, as to the QDOS/SMSQ end of things, you
still need to treat various specific devices in a specific way, no
matter through what 'interfacing' you go - so, in the end you still need
a driver. This can be simpler, as the interface and the other, non
QDOS/SMSQ end can handle a lot of the work, but it still has to be
there.
As it stands, we can't even do device mounting on things as simple as
multiple IDE drives, let alone handle devices whose main purpose is not
data storage. If you think WIN_USE is made in the spirit of SMSQ, or the
N1_xxxx way of setting up network destinations is made in the spirit of
SMSQ, then you are dead wrong - what it does is very dirty. Imagine you
couldn't make any assumptions as to what it is you are accessing at the
other end of an interface, as in the case of networks, or ATAPI or
SCSI used by non-storage devices, and as far as QDOS/SMSQ goes, you have
hit a wall, no matter what you use to get to the hardware. Any kind of
'layered' device is all but impossible to set up under QDOS/SMSQ, unless
it is suitably expanded. I am aware that most people will not see the
connection between what I'm talking about and what the thread is about,
unless they are, say, familiar with UNIX. If they come from a PC, they
can't see the problem, and that is precisely why SCSI on a PC can be
extremely tricky. We don't want the same thing to happen on the QL.
>
> >Same thing about fully bidirectional ports - people seem to expect that
> >if you provide a EPP parallel port that all EPP devices will somehow
> >magically work on the QL - and they are to the last of them completely
> >different and all need to have their own drivers! So who is going to
> >write them, assuming they will ever be able to get the data on how to do
> >that for a specific device???
> 
> I think you can make a more intelligent approach. Think about Linux: it
> already got ALL drivers a QLer could dream off. The only question is then
> "how to take benefit of them". Here is my answer: write a Linux to QXL
> interface...
>
To tell you the truth, I have been wondering why no-one has already done
it, considering that there are UNIX emulators about.
>
>... and then the only thing you
> will have to write is a small SMSQ/E thing that will be responsible for
> passing the requests to this software interface together with parameters.
> The software interface will then issue the corresponding Linux call (just
> a call to a C library function) and will pass back the result through the
> interface to the SMSQ/E thing. You can then write a small C library so that
> the SMSQ/E thing can be accessed with standard C library calls. Voila!
> You can then port ANY C programs to SMSQ/E without any problem because the
> C library functions will EXACTLY correspond to the Linux ones (which are
> now POSIX compliant as of Redhat v5.0 release).
>
In effect, rendering SMSQ into Linux? But then why bother - don't port
the programs, run them under Linux - they won't run without Linux being
interfaced to a QXL anyway! Or, better still, if you can't live without
QL programs, port them to Linux, that's much easier than the other way
around. I always try to look at it this way. If you have a QXL which
cannot work on its own without the operating system on top of which it
sits, then it isn't a QL any more. It's just a question of what you
consider the essential part. Why am I saying this - simply, because
there are far more things to do under SMSQ than just device access, and
even if it was only that, I for one believe that the way SMSQ handles
that is about 60% approaching the _right_ way, which is to say, simple,
efficient and elegant. Which is not what I can say about W95! I can say
that about Linux, too; in fact Linux, like most UNIX OSes, is even
closer to the ideal in this respect. To me it is obvious that there is
no use even trying to make SMSQ more W95-like, except maybe in how it
appears (graphically thought out), but if we would do things in SMSQ the
way they were done in W95, then we wouldn't be here having this
discussion - the QL would have died 8 years ago. What Micro$oft does
cannot be supported by a handful of people. M$ has 5000 programmers who
are paid to do the job and yet they don't. I'm sorry, but anyone who has
the faintest idea of what an OS is supposed to do can see that, provided
that person is honest about it.
Linux is completely different altogether, and much closer to SMSQ. But
do we want SMSQ to become Linux? At the cost of being able to run
applications on it? Isn't it easier then to just use Linux, and forget
about all the complications?
Or, isn't the point in being able, as few of us as there are, to glean
the best of all worlds, and think, _THINK_ about how to best use that
knowledge, and how to make it work for SMSQ, just as the rest of it
works, simply, efficiently, and elegantly? So that in the end it can,
because mere humans can still grasp its workings, be better than the
peers it used to look up to?
Yes, call me a sick idealist, but if we all weren't, why would we be
bothering to do anything for the QL - after all, it's been dead for over
a decade? Let's face it, we are all doing it because we think it has
Potential, yes, with a capital P. Here you have, in a nutshell, why I
still stick with the original hardware - because with all the other
hardware, I need to have workarounds.
> 
> Of course this CANNOT apply to Aurora/SGC/UGC/GF, this is ONLY possible for
> QXL and UQLX... And then is a question for you Nasta: WHY THE HELL DID YOU
> NOT DESIGN AN IMPROVED QXL INSTEAD OF THE GOLD FIRE ???  (well, this question
> could as well be directed to Miracle, just replace Gold Fire with UGC...).
>
I think we have both combined given an answer to that:
1) Miracle was supposed to do that, and I'm not Miracle. No, I'm not
looking to place any blame, in fact, just the other way around - I have
nothing but the highest regard for Stuart, and the fact is if it wasn't
for his ample help, none of you out there would know about me today, nor
would there have been any of my designs around. The harsh fact is, we
can't do everything, and there are priorities - all this takes money and
tons of time. Some of us (not me, fortunately) have to make a living out
of it too, or if we make a living in some different manner, the QL ends
up on the backburner.
2) Because I think that the GF will in the end outperform any improved
QXL I could ever design. Not by the raw data of how many MIPS (and you
know the translation of that - Meaningless Info about Processor Speed)
it does, but by the feel the user will get. It works on its own turf,
and does not have to think about interfaces of any sort, nor ways to get
around silly limitations of the 'native' OS running on the computer we
plug this QXL into. Again, I will explain this some more below.
3) The GF opens possibilities that have frankly just been impossible on
the QL before, without the need to cater for 99 things to get that
desirable 1 out of 100. No, you do not need PCI, or 1Gb/sec high speed
memory channels, like the PC world would like you to believe (favorite
quote from a PC magazine: 1.6G disc space is not enough for office use -
NOOO, surely 1 million printed pages is far less than I would write
in a lifetime!!!!). What you need is to use the resources efficiently.
Having such an overkill just allows you to be sloppy about it and still
have a usable product.
>
> The FACTS are that ANY QL specific hardware will ALWAYS be MUCH MORE expensive
> and LIMITED than any PC hardware, so why not use PC hardware?
>
I am sorry, but this is just not true, full stop. If you think so, you
are thinking the PC way - yes, you buy a graphics card for $30 and
that's all you pay. Really? How about the software, the drivers? They
came with W95? Didn't you pay for that? How many people have bought W95?
Calculate what the driver actually cost then, and you will come to a 5
or even 6 figure amount of $. Considering that, I think you are actually
overpaying for what you get. What worth is it to me that I can plug in
my latest PCI card when it will take years for the driver to be written,
at which point I would have invested a lot into making that driver, and
the card would already be obsolete several times over, and even PCI
will be obsolete. We have to look at things in perspective.
>
> I think that
> a redesigned QXL with a dual ported RAM... a PCI
> interface (yes, I know, PCI is CRAP but it's the standard now and it's faster
> than ISA anyway...), and a 68060 (or 68040 for the low end version) would
> be just marvellous...
>
Yes, and we don't even have a 256 colour driver for the original QXL,
even though we have the QXL, and the PC into which it is plugged has had
256 and more colours capability for ages already? Using your previous
argument, it costs us 'nothing' - and yet we don't have it. Can you
imagine how long it would take to get around the QL end of such a board
(and that's the easy part), or around the idiocies of PnP on PCI? It
took me a month to figure out how PnP works so that I could write down
something for the prospective GF device driver programmer on how to
initialise the IO chip!
I know that sometimes people think, why bother doing it on the QXL when
ISA is dead, but then you are looking at a single problem using two
completely clashing standards. At the speed we manage to produce the
software, working in PC terms of hardware, we would never have a single
working product. We don't want to end up barking up the wrong tree! You
don't believe me? We want things to be really efficient, huh? OK, then
let's write our own interface to our latest whizz-bang S3 graphics card
we plugged into our PCI bus. Now, for an exercise, try to get specs on
that S3 card, and see where it gets you. Hope that explains any
lingering questions on why I didn't use a PC graphics chip on the
Aurora.
And another note on PCI.
PCI is a brilliantly simple idea cancerously grown into something
grotesque, by trying to make it, as we say here, everyone's mistress (to
avoid using stronger language). And, before I forget, it has a flawed
hardware assumption in the execution of that simple idea, which not even
the beginner designer would make (The original spec catered for 16 PCI
slots on a bus, yet no-one dares put more than 4 to be sure it will work
at all - ever wonder why?).
To boot, someone hatched PnP (and they say there is no moral need for
abortions!) and grafted it onto that. If it were not for the fact that
you never know what resources the next PCI 'standard' reincarnation will
add or subtract, it could be usable. If you decide on a minimum set of
options and use only them, then either you can't be sure it's going to
be compatible, or, you have to design the rest of the system too, using
the same minimum set - at which point you have used someone elses
standard (PCI) to interface your own designs, which you could have
avoided altogether and had your own interfacing method designed which
makes life easyer for you, the designer.
No thanks, it is enough work without having someone trying to pull out
the carpet beneath your feet at the same time. If anyone wants more
clarification on this, I'll gladly provide it.
>
> For your info, if nothing comes from TT for the QXL to PC (DOS) interface,
> then I am ready to write (or at least try to write) a QXL to Linux interface
> (this of course will depend if Tony is willing to send me the existing DOS
> and SMSQ/E sources and the associated documentation).
>
This is in my view a very good idea. Yes, you might say it disagrees
with what I have said above, but actually it doesn't - the reality of
the matter is, we need to keep our QLs, in whatever form they come,
working - if we plan on giving up, we might just as well do it right
now, and save ourselves a lot of time and money. Besides, it is
refreshing to see that not everyone thinks W95 is the only OS on the
planet (God help that planet if it were!).
>
> You were seeking for volunteers, then you got one !
>
Depends on what you are volunteering for :-)
You are highly welcome to send any personal email to my address (easily
found at Thierry's web site). I never shun from discussion.
>
> QDOS / SMS forever (and I do think this, really) !
> 
If I didn't, then I wouldn't be doing this :-)

Nasta

Jerome Grimbert wrote:
> 
> } |Amazingly enough everyone forgets about drivers? What will you do, use
> } |the ones provided on the diskettes you get with your SCSI device, made
> } |for the PC???? When you connect your SCSI device to a PC, have you
> } |noticed it does nothing at all on its own, unless you have a driver
> } |(some of which come bundled with W95, some with the hardware itself, and
> } |some of those just don't work at all).
> } |Same thing about fully bidirectional ports - people seem to expect that
> } |if you provide a EPP parallel port that all EPP devices will somehow
> } |magically work on the QL - and they are to the last of them completely
> } |different and all need to have their own drivers! So who is going to
> } |write them, assuming they will ever be able to get the data on how to do
> } |that for a specific device???
> }
> } Touche...
> 
> Yes, the SCSI interface should provide a very low level interface,
> so that anyone with some documentation could be able to perform the operation
> that is desired.
>
I'm sorry to say that it goes much deeper than that. What we would need
is separating the file-to-sector part of the driver from the
sector-to-hardware part of the driver. The SCSI driver would only be
able to do SCSI type data transfers, which are essentially block
oriented and can be directed to and from logical devices. Sometimes the
blocks are also numbered, i.e. correspond to a physical storage space. A
HD driver which would go on top of the SCSI driver would be responsible
for converting files to sectors and sector addresses - in effect, that
would be the file system, and it would have nothing to do whatsoever
with the hardware, at least not directly. Changing the parameters would
make this 'file system' driver equally well serve most sector oriented
hardware, be it IDE, SCSI, EPP, network or merely RAM or Flash ROM, or
even a file on a device which is already implemented by this 'file
system' driver (anyone remember QXL.DAT files :-))) ). Of course, for
some devices there would just have to be specialised drivers, but those
are a relative minority.
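
A minimal Python sketch of that split, with every name invented for
illustration (this is not any real QDOS/SMSQ interface): the block layer
only moves numbered sectors, and the file-system layer on top never
touches hardware, so the same code could sit on IDE, SCSI, RAM or a
QXL.DAT-style container file.

```python
class BlockDevice:
    # Sector-to-hardware layer: reads and writes numbered sectors,
    # knows nothing about files. A RAM disc stands in here; only
    # this class would differ for IDE, SCSI, EPP or Flash.
    def __init__(self, sector_size=512):
        self.sector_size = sector_size
        self.store = {}

    def read(self, n):
        return self.store.get(n, bytes(self.sector_size))

    def write(self, n, data):
        self.store[n] = data

class FileSystem:
    # File-to-sector layer: maps a name to a list of sector numbers.
    # It talks only to the BlockDevice interface, never the hardware.
    def __init__(self, dev):
        self.dev = dev
        self.table = {}      # name -> (length, sector numbers)
        self.next_free = 0

    def save(self, name, data):
        size = self.dev.sector_size
        sectors = []
        for i in range(0, len(data), size):
            self.dev.write(self.next_free, data[i:i + size])
            sectors.append(self.next_free)
            self.next_free += 1
        self.table[name] = (len(data), sectors)

    def load(self, name):
        length, sectors = self.table[name]
        return b"".join(self.dev.read(n) for n in sectors)[:length]
```

Swapping the RAM-backed BlockDevice for any other sector-oriented
back end would leave the FileSystem layer untouched - which is the
point of the separation.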
>
> For HD, we would also need to rewrite the file system so as
> to get more than the 128Meg limit. (The only limitation I have ever seen
> in the QDOS Trap calls is the Format thing, which returns a short... and
> you cannot extend it to a long. All other TRAPs may support an extension
> to a long (32 bits).)
>
That is already done. Format isn't really a problem because it only
returns the number of 'sectors' which have a basically undefined size,
you have to look that up anyway. For most uses, the number returned is
useless anyway, it's just passed back to the calling routine, say, to be
printed. Since this is normally the SB format command, or part of an
application, this can easily be worked around.
>
> BTW, 64 bits addressing capability (for a single file!) is already available
> on modern OS (no, not W95! It may be modern, but it's not an OS!)
>
It's not modern, either. The kernel of QDOS is still a model of
modern-ness as far as W95 is concerned. The clothes do not make a
person, and in the same manner, the looks don't make an OS.
>
> (well, 32 bits is only 4 Giga)
> 
Which only pertains to the maximum file size, which isn't that big a
limitation - it may be for the PC, where they'll soon need 4G programs
to open a window, but certainly not for the QL. And, since we don't use
our QLs to maintain CIA databases, even the possibility of having data
this big in a single file is relatively remote. And, if we do, we can
always construct a special 'file system' driver, hang it on the normal
'file system' driver used in sector access mode, and say, access 4G
sectors, each 4G in size. I think that would be enough for a while?
Besides, the actual limit is: 4G maximum file size (32-bit file
pointer), 4G 512 byte sectors total drive size (32-bit logical sector
pointer in direct sector IO). Unfortunately, most drivers use a 'FAT'
based system which also ties those two limits together, resulting in the
actual limit being the lower of the two. A linked-list system would be
more appropriate if slightly slower and more complex than a FAT driver,
but it would maintain the original limits.

I have two main objections to the way the QLs file system works.
1. You cannot make a difference between logical and physical devices by
legal QDOS/SMSQ means. This is HIGHLY problematic if you want to
implement any of the popular network standards, and for that matter even
the QLs own network standard. And, it prevents you from doing the nice
things I mentioned above. It's not obvious from the above, but this also
means you cannot make 'drives' out of subdirectories. If assigning
logical to physical was possible, any devices you would care to use
would just be entries in a directory tree, with storage devices being
directories, and serial devices being files. That way you would have
true device independence. And, last but not least, the assigning
capability would eliminate all sorts of _USE commands which work in a
quite dirty way, modifying system tables that they shouldn't even know
about in the first place.
2. There is no separation between file name and file path, resulting in
the name length limit problem. If only the file name were limited to 36 chars,
that would be fine by me, provided the directory name doesn't count.
Making the latter a QDOS string, up to 32768 chars, would be perfectly adequate
:-). Since a path could be stored on a 'per-job' basis, and inherited
from the starting job, similar to the TK2 defaults (which are terribly
underused!) then this would eliminate the need to specify the devices
mentioned in (1) with the whole path. Besides, given the assigning
capability mentioned above, you could always assign 'aliases' of devices
deep in the device tree, to exist in the top level, to make them easily
accessible.

Nasta


9. PROWESS 

From : Joachim Van der Auwera
Subject: [ql-users] new ProWesS window manager 

Dear all,

Considering recent ramblings about device drivers etc, I think it would be a
good idea to share some ideas about future developments. As some of you may
already know, I am currently working on the next version of the ProWesS
window manager.

I personally believe this new version will be superior to the current
version, making it much more powerful. I would like to invite people to give
their comments.

The design document follows below (please skip this message if it doesn't
interest you). Implementation details are also available if you want, the
bits given here are the "general ideas".

Joachim

Ideas for ProWesS window manager v2.00
--------------------------------------
General :
---------
    Need better handling of objects. The efficiency and power have to be
    increased. Very important would be to increase reusability of object
    types and the better handling of events to promote this.
    Methods to implement this : let the objects themselves determine their
    size and the size and positioning of the daughter objects. There is no need
    for handling borders around the active area as that can be handled by
    parent objects (in a systems object tree).
    It is probably better to handle all coordinates in complete pixels.
    Scaling can probably be handled more intelligently than is currently the
    case. The current system, where object positioning is relative to their
    level in the system's object tree, is badly flawed. The parent container can
    change from a row to a column if an extra level is introduced above it.
    This can be remedied by introducing types which explicitly handle the
    placing of the objects in it. If an object then places its children in a
    row, this will always remain the case.
    Similarly, much more advanced positioning methods should be introduced
    to handle the positioning of objects in the object tree.
    There also has to be a better way to handle the switching between
    objects which can catch keypresses. The current explicit switching is too
    cumbersome to set up; there should be a more automatic method.

Solutions / Principles :
------------------------
Improved positioning :
    All the objects which can be created should be positioned in an object
    tree. Each system contains one object tree, and this tree represents all
    the menu objects in the window which can represent that system.
    Objects can be of one of two types : branch objects or leaf objects. A
    branch object can have both branches and leaves connected to it, but a leaf
    object can not have any children in the tree.
    Visibly speaking, all objects are always fully contained within the parent
    object's area. Though the virtual area may be bigger, the visible part can't
    be.
    Objects which are at the same level in the object tree are contained in the
    same parent object and may (partly) overlap. The objects are ordered
    from front to back.
    To make window redrawal efficient, the parent object has to indicate
    whether it is possible that the children overlap.
Switching catch objects :
    All objects (or their type) have to contain a flag which indicates whether
    it is possible to catch keypresses. This doesn't mean this is always the
    case, but just that it could be possible. Objects could then be stored in a
    linked list of "possible catch objects" when they are added to the system.
    The order in this list is then the default order for switching catch object
    with  or .
    Of course you have to be able to explicitly change the order of objects in
    the list.
    When moving through the list, it should still be checked whether the object
    is effectively willing to accept catch events.
Better message handling :
    When handling messages using the PWChange keyword, the system should first
    ask the recipient object to handle the event and if this object does not
    wish to handle the event, it should pass the event to the parent object to
    allow that to handle it. This continues to the grandparent object etc.
    To make this process slightly more efficient, the object type should be
    allowed to indicate a mask of messages it may accept. This can allow the
    PWChange routine to know in advance if it can skip an object as it will
    never accept the message.
Event handling :
    It would be better to have specific event handling routines for each
    possible event. This is more efficient, and gives cleaner code. To reduce
    the memory consequences, there are general handlers for the type and
    handlers for the specific object. The object is searched for the event
    handler before the type.
Keypress handling :
    All keypresses should be handled by the system (this is also true in v1,
    although this was not stated as clearly).
    It should be possible to query whether a keypress is already "in use".
    Each keypress can be connected to an object. When the keypress is
    activated, a user defined (e.g. "keystroke") event is sent to that object.
    Obviously several keypresses can be attached to one object. However, each
    keypress can only be linked to one object.
    Keypresses are case sensitive. However, when a key is pressed which is not
    linked to an object, then the case is changed and a matching object is
    searched again.
Deleting objects :
    To make sure that objects can contain links to other objects and make sure
    their internal state remains consistent, all objects in a system need to
    be notified when an object is removed from the system.
Size of an object :
    The size can be determined by each object individually. The object can
    determine its preferred minimum size.
    Normally speaking an object will make sure that the contained objects are
    completely visible (except for things like a "multiple document
    interface").
    Each object has to assume that its size can be changed later on. It can
    be notified of this (to allow resizing of child objects or making sure the
    internal state remains correct). This is because the parent may resize the
    objects to conform to some kind of rule (e.g. all equal width).
    In principle, a window has a fixed size (which may be scaled), but some
    objects may tell the system that they want to resize depending on the
    amount of information in them and the amount of screen space. These objects
    can then resize ("autosize") their contents (they should know the possible
    increase amount). When there is room left, the next object which allows
    autosize will get a chance.
Difference between an event and a message :
    An object can react both to messages and events. In principle events are
    generated by ProWesS itself and messages can also be sent by the user
    (using the PWChange command).
    The difference on the implementation level is that an event is handled at
    the object level - thus may be different for each object - while a message
    is handled by the type.
    Also, messages are forwarded to the parent object when an object doesn't
    handle it itself.
    For queries the query is normally passed on to the parent object when not
    answered, but it can also be flagged that only the designated object can
    get a chance to answer the query.
Handling of messages :
    Contrary to the implementation of PWChange in v1, it is better that there are
    separate routines for each possible message. These could be grouped in the
    type in an array (fast searching when using binary search - best to do the
    sorting and counting at runtime (when loading)).
    Each message should get two objects as parameter. The object which receives
    the message, and the object to which the message was passed.
    To make the trapping of messages even faster, you can set a mask which
    indicates that only events which match the mask can (not will!) be trapped
    by the type.
Creating a system :
    The handling of move and rescale should be initialised by the types. The
    system should not incorporate how this is interfaced to the user. There
    should be a type which is intended to be the top level. To remain
    compatible with v1, the type will implement the scaleborder.
    When a type is created in a new system, then the system has to include the
    top level itself (if the object was not of that type).
    In principle this should not be necessary, but it is done for extra user
    friendliness and for compatibility with v1. This option can be disabled by
    passing a creation tag.
Data abstraction :
    All the internal data structures should be hidden. Each routine should get
    at least two parameters : the object identifier (used to interface with the
    ProWesS system) and the ObjectData structure. This structure can be defined
    by the type.
Default rules for adding objects to the system :
    In v1, there are containers which alternate between a row and a column.
    The system will rely on two types being available for rows and columns.
    It will automatically insert objects of these types when necessary.
    (e.g. when using position_left/right/above/below).
Redraw of objects :
    The system should redraw more often. In particular, the window should
    redraw at the end of each access to the system if the system is active and
    does not need resize.
Creation tags :
    The concept of creation tags is removed. An object always exists and the
    objectdata can grow/shrink at any time.
On demand loading of types :
    Might be useful to support on demand loading of types. When ProWesS is
    started it just checks for the available types and only loads the types
    when they are used in a program.
Config options :
    - load a type
    - mark directory with types (load all or check to load when needed)
    - define constant (should be kept and passed to all types when loaded)
    - mark that a type should be reloaded (either does this immediately or
      later)
Multitasking options :
    The ProWesS interface should be defined in such a way that it would be
    possible to access a ProWesS object from a different job than where it was
    created. This means that there should be no direct access to memory except
    in some well defined cases (e.g. the ObjectData). Definitely no access via
    object id's which happen to be pointers (very deadly).
    The multitasking stuff will not yet be implemented, as I fear that this may
    slow the system down.
Window scaling :
    It will be possible for a window to be bigger than the area which is used
    on screen. This can be because the window is larger than the screen, but
    may also be forced otherwise.
Unknown messages :
    The handling of unknown messages has changed. In v1 you would get an
    ERR_IPAR after handling the remaining messages. This is now different. If a
    message is not recognized by the object, its parents or the system, then
    the tag is just skipped. For debugging purposes however, you can query the
    system to know how many messages in the last PWCreate or PWChange were not
    handled.


10. DIRECTORIES

Joachim Van der Auwera wrote:
> 
> -----Original Message-----
> From: Tony Firshman 
> 
>>Hate the IBM, and I do, or not, the directory structure (and file date
>>stamping) is pretty robust.   The QLs is awful.  OK level 2 drivers go a
>>little way, but we desperately need some system of removing path from
>>file name. With hard disks, one has to go through hoops to get the
>>length down.
> 
> Hey this can be done. I do it in syslib. Have to admit it is quite a bit of
> work. I would prefer it if the file header would only contain the name
> inside the directory, but this would give problems at least on a physical
> level (change of disk format)...
>
No, only a change in the way the directory structure is parsed within the
driver. Changes to the 'path' would mean that the driver has to
automatically look it up and parse it to find which actual directory
file you are accessing, then use the file name (the 36 characters) to
find ONLY the file in that directory. In effect, no actual path names
are stored on the disc at all, they are built dynamically in the drivers
own internal data structures. These must also be somehow associated to
the channels, which means that the channel def block would have to
change to either contain or point to a path.
I believe the fact that file names and paths have become one entity is
only because it is then simple to get the path of the file without
introducing new system calls to get,set,inherit the path. Because this
would have to change too, jobs would have to hold default paths for
themselves. Possibly, the job header should have an entry (configurable)
which tells the 'execute' command how to set up the default paths - say,
inherit from parent job, copy from system defaults, or even use defaults
defined within the job header.
>
> Another problem may be that a lot of software may rely on the
> current architecture, though many recent programs should be easily fixed,
> especially all programs which use c68 libraries or syslib.
>
I suspect that programs which don't make use of subdirectories, might,
paradoxically, run without any problems on this new system. Problem
cases will be various file managers - you would want them to be aware of
the new subdir systems! Also, utilities such as QMenu which contain file
select menus would have to be modified as well.
>
>>Also the fact that filename_ext uses the same separator as the 'path' is
>>a major problem for programmers.  Phil B tries very hard (in PBOX) to
>>separate a 'real'  file name that is portable to other operating systems
>>but his algorithm cannot be fully successful.
>
Yes, it is a big problem - local to the QL about the only thing you can
do is actually scan for directories, build the directory tree in your
application and then parse names against it to see which part is a file
and which is a directory. If the particular driver supports file names
and directories of the same name, then you have yet another problem and
have to assign priorities.
Making files portable is a big problem. However, this is invariably
handled by specialised software... the easiest way to explain is
handling it like the 'store pathnames' option in ZIP. Even if a new
system was put in place where the file name is only that, and the path
is separate, the only way to keep portable files with their paths is to
recreate the directory structure on the system which stores them, and
that is impractical. The only other way is giving the files alias names,
and keeping a database with a list of which alias actually corresponds
to which file. Dead easy if you had metadrivers, you just make a driver
that does this filtering, and assign it to a device (or directory for
that matter). Oh, yes, did I mention you could easily then do a 'zip'
device which could use standard zip programs but appear as a device?
>
> Hey, I am personally in favor of moving to Unix filenames, so use slash / as
> directory separator and dot . as extension separator. At least the extension
> separator should easily be implementable, although not too many programs
> currently handle it very well (including Qpac and QD I think). Again, all
> ProWesS software should already handle that properly.
>
Once you have fixed the path thing, the extension becomes a very minor
problem - you can always use the last _ in the filename and use
everything behind it as an extension, and you would cover 99.9% of all
sane needs.
>
>>.... and why on earth did Tony not preserve date stamp on COPY.  I can
>>see no reason at all to change date stamp unless one explicitely wants
>>to.
> 
>Well, that is Qdos for you. When you copy a file, you actually create a new
>one and copy the contents. Whether you copy the dates along is a choice you
>make when implementing copy. Most file managers (except Qpac) probably
>handle this properly (I think, I can only speak for certain about PWfile).
>May I mention that PWfile actually tries (with good success I think) to
>replace extension separator by a dot and limiting filename and extension
>length when copying to DOS disks. Sorry (or rather "thanks") for the free
>advert.
>
:-)
I thought we had:
Creation date
Modification date
Backup date

Then, depending on how we look at things, when copying you either:
a) copy the creation date from the original and put the current date as
the modification date on the copy
b) put the current date as the creation date on the copy and copy the
modification date from the original
I would prefer the first, but then, why do we have config blocks?
Incidentally, you may even change the backup date to the current date on
the original when a copy is made, although, this should probably be done
by dedicated backup commands/programs only.

Nasta

Tony Firshman wrote:

> Mind you major issue, which has had some discussion on intl.ql is hard
> directories/ pathname.
> Hate the IBM, and I do, or not, the directory structure (and file date
> stamping) is pretty robust. The QLs is awful. OK level 2 drivers go a
> little way, but we desperately need some system of removing path from
> file name. With hard disks, one has to go through hoops to get the
> length down.
>
The funny thing about this is that in normal applications you don't
really need the path anyway. Most will just open a file and use it
somehow - once this is done, the path and file name and extension become
a channel ID, and that's the end of it. You can ask the IOSS for the
filename etc. for that channel, but at that stage you don't really need it.
The only apps that really need to know all of it are disc utilities and
file managers, and even then most of the info isn't needed.
The one piece of code that absolutely must know the whole situation is
the driver, of course.
Let me give you an example - the DEV device is actually made to 'hide'
the subdirectory structure. If you could somehow 'manipulate' DEVs
automatically, this would be a solution to the path/filename problem.
Consider this:
If every job had its own 'path' (or paths - we could have
prog/data/dest paths much like we have prog/data/dest defaults now - for
all intents and purposes within a single job this is the same!), then,
if every child job this job starts inherits the parent job 'default'
paths, and if there is a mechanism to set and read the 'default' path
for that job, including within the job itself, the problem would be
solved.
Most of the modifications would have to be done to system calls and
Sbasic commands (keeping the mods nicely out of application programs!)
and to apps that manage files, which would have to become 'path' aware.
If you make the path length limit that of an SMSQ string, you have a
maximum of 32768 characters in the path name. Obviously, under normal
circumstances only a very small fraction of this would be used.
What would happen then, if you, say, did EX 'program'?
Code for EX would look for the filename 'program'. It would look into
'system' defaults (i.e. default path, non-job specific, much like DOS
paths); if it didn't find the 'program' there, then it could look up
which job called EX and use its default paths, and if it didn't find
the 'program' there, it could even suspend the calling job, and pop up a
window and ask you to specify a path, or abort the operation (this last
one is a bit problematic as SMSQ now stands but wouldn't be impossible
with rather minor modifications).
If the 'program' was found, then it could be started and its own path
info updated, possibly by:
1) copying the path where the new job was loaded from into the job's own
'prog' path
2) copying the data path of the calling job into the new job's data path
3) copying the system default dest path into the new job's dest path.
Or indeed, all this could be configurable on a per-job basis in the job
header.
As far as I know, SMSQ now truncates the filename to 40 characters
before it starts parsing it for a device and file name. However, this
filename starts as a string, and before truncating, new code could look
for subdirectory separators, and thus manipulate the default path, then
once this is done, slash it off, and pass the rest to the normal
filename parsing mechanism - the file name limit would remain 36 chars.
This implies that the directory separator would have to be different
than the '_' now used, and would be used uniquely as the directory
separator, nothing else. Using / and \ would be perfectly fine, even in
combinations:
path = root\files\
'file' = root\files\file
'my_dir\file' = root\files\my_dir\file
'/file' = root\file
'//file' = file
'/my_dir\file' = root\my_dir\file (up one level (/) from root\files\ =
root\, then down \my_dir\file = root\my_dir\file)
You could even use a dedicated 'root directory' separator, to denote
that you are not using the current path but the root directory. Maybe
':' would be a good idea? Then: ':my_dir\file' = my_dir\file, no path
parsing.
Also, you may introduce a 'set default' separator which implicitly sets
the new default to what the path name has been resolved to. Say, use '!'
for this. Then, for the example above, '!/my_dir\file' =
root\my_dir\file, only the path becomes 'root\my_dir'
Incidentally, combinations like '!/' or '!\my_dir\' or even '!:' should
also be allowed. They would report an error because the filename doesn't
actually get mentioned, but the side effect of setting the path would
still be there. This is for compatibility only - normally, this would be
handled by some sort of a CD command, but it would give limited path
changing capability even to programs which don't handle subdirectories
at all. Of course, a nice 'channels' utility which can show default
paths and open file paths on a per job basis would come in really nicely
:-) 

These path examples may be a bit confusing because I have not put a
single device name in front of any of them. This was deliberate - if
your system enables such a deep directory tree, you might end up not
needing 'devices' as such at all, provided you have something like the
metadevices I'm so fond of. Most of the root level 'directories' would
then actually be metadevices, and the ones below would be the actual
'physical' devices and possibly files on them. For instance:
my_QL\floppy\1
     \floppy\2
     \floppy\A
     \floppy\old_720k
     \harddisc\ide\1
     \harddisc\ide\Syquest
     \harddisc\networked\QL_protocol\ethernet\Tonys_QL:-)\harddisc\ide\1\partition1
     \harddisc\networked\TCPIP\ethernet\201.100.255.255\something
     \internet\PPP_my_Internet_ISPs_phone_number_login_and_password\HTTP\WWW\directory_structure_with_my_favorite_www_sites
etc, etc ... you get the idea.
How do you make this usable for old programs, or for that matter, for
new ones, so that you don't have to write up the whole pathname every
time?
Easy - we are talking metadevices, these can have their name changed and
things assigned to them as well. We need a 'name' device. Then, we could
assign such a name to any 'directory' and call it like the devices we
are now used to, so win1_ could become my_QL\harddisc\ide\Syquest. This
incidentally also takes care of many of the compatibility issues! The
whole point of MDs is that you could change what win1_ or whatever is,
whenever you need it, and your apps need not know about that at all - no
more USE commands. Oh, and have you noticed that serial devices also
could work like in the example above? How about this example:
my_QL\console\networked\remote_QL and the old con device automatically
gets opened on a remote QL. Why is this important? Well, do you think
making an intelligent graphics card would be difficult then? Just plug
in a board with graphics hardware and a CPU capable of running that part
of SMSQ (it's modular, remember!), and voila, parallel processing of a
sort!
Look at the last line in the example above. Imagine a new and improved
fileinfo which would recognise such a directory structure - browser
access would become as easy as pointing to a 'directory' in some new and
improved 'Files' menu, and clicking the DO button on your mouse - all
the rest, including dialing up and logging in, is done by the metadriver
chain outlined above.
People, an expanded SMSQ could almost do miracles, and we keep picking
away at details!

This is becoming long, I hope I gave you enough ideas...

Nasta

Tony Firshman wrote:
> 
>> MSDOS, I think, flags the header as "deleted" but leaves the rest of the
>>data intact, which makes recovery possible. We can't do this in Qdos...
>
>Thought so.  I wonder if anyone has asked Tony Tebby for a level 3
>driver which does keep track.  Would it be at all possible?
>
Yes, a L3 driver would be of course possible (OK, at the risk of being
boring, I wish it was the meta type :-) ). Storage systems which base
their organisation of files on lists, as opposed to FAT (file allocation
table) methods, can do this easily - files are un-linked from the list
of 'existing' files and put into the list of the 'deleted' files. This
in fact has to be done to keep the files reasonably unfragmented. There
is usually another, free sector list, which is then used up for new
files. Once the free sector list becomes exhausted or too short (the
number of free sectors is below a threshold), the files from the
deleted list are recycled, by order of deletion - oldest deleted files
are recycled first. These systems have a lot of advantages and of course
some disadvantages too. They are not cluster based, so files are stored
to the next larger number of sectors, not clusters (with large clusters,
there is a lot of wasted space, especially in a system like SMSQ/QDOS
which has predominantly small files). For 32-bit list addresses, the
drive size is usually limited to 2^32 sectors, which is a LOT. Also,
drive maps are not stored in memory, which is an advantage, but also a
disadvantage - they take longer to access, and can themselves
become fragmented. However, funnily enough, defragmenting such drives is
not so difficult. The biggest disadvantage is that maintaining the
directory structures and files can become quite complicated if there is
actual data to be moved, because mostly the only thing that changes is
the list pointers - the driver code can get quite complex. And, yes, it
is very easy to implement retroactive directory creation too. In my
younger days :-) I designed one of these as a project for a university
class I took. I still have the specs if someone's interested.

Nasta


11. 

Davide wrote on the subject of Qubide added features:
> 
>> This has nothing to do with it. The [Qubide] ROM code is copied to RAM anyway -
>
>So to add more features to the already full Qubide ROM why not use some
>simple compression algorithm to fit all the code into the rom. Then the
>content could be unpacked to ram.
>
Yes, consider this method:
Use an array of bits for each byte of the ROM code. The bits are 0 if
the byte of the code is 00h and 1 if anything else. Then, delete all 00h
bytes from the ROM code. The net result is about 10-15% compression, the
size of the bit array taken into account. To un-compress, scan the bit
array, generate a 00h on output for each 0 bit in the array, and copy a
byte from the compressed code for each 1 bit in the bit array. A
cleverer compression could also be used, but we are facing diminishing
returns on compression, as the compressed code has to have an
uncompressed decompress routine attached.
The actual maximum size of the EPROM code can be 15.25k but it is not
contiguous. In particular, EPROM locations 3F00h..3FFFh (that's where
the IDE registers are), and 3A00h..3BFFh (that was left out for Fastnet)
are invisible. So, in effect, you have 14.5k of ROM, followed by 512
bytes of unused space followed by 768 bytes of rom, followed by 256
bytes of IDE registers.
Provided you can live with GAL2 reprogramming, 15.75k of contiguous ROM
code could be used by re-enabling the 512 byte Fastnet 'hole', since as
far as I know Fastnet is not there any more...

Nasta

PS something like an extension of this is being considered for the
GoldFire. The GF has extensive IO expansion addressing capability and it
stands to reason that prospective peripherals should be able to use an
on-board ROM with a driver or other similar data. However, although ROMs
are cheap, sometimes it is an advantage to have less of them, since they
use up less of the expansion area. The idea is to introduce a new ROM
flag. Currently only 4AFB0001 is used, which has to be placed at the
start of the ROM for the system to recognise it. An alternative flag
could mean that a standard decompression algorithm is to be performed on
the contents and the uncompressed code loaded and executed as per other
data in the ROM header.

Tony Firshman wrote about power-fail handling ideas:
>
>>> I do not thing the RomDisk will reduce the memory usage, because it's mainly
>>> a disk emulation, you still need a living copy in RAM...
>
>>Point taken,  maybe we need a mechanism for the RomDiscql to emulate RAM for
>                                   Arghh RomDisq----^
>
>NO - it is not suitable for that, as it has a long but finite life.
>Low power UPS's are not too expensive, and will have a power down signal
>line. That could be linked in to the QL in a number of ways
>(superHermes I/O lines, Hermes pin and so on).  It would be simple to
>write a background task that stopped all current jobs, and saved the
>current RAM to store (hard disk/RomDisq etc) in a file.
>
The proper way to do this is by high-priority interrupt, because that
background task would have to be of a high priority, and there is a
possibility that it could not stop the other tasks in time, too.
Unfortunately, handling such interrupts is not very easy on the QL, but
it can be done.
The tricky part in all this is not making a memory snapshot - that's
dead easy, but allowing the various IO to continue till some stable state
is reached where it can be saved so that when restored, things continue
to function. Data lost in transit isn't such a problem - after all, the
alternative is to lose everything, but sometimes the status of the
hardware is very difficult to get because some control registers are
write-only. Without the system (or programs) keeping a memory copy of
all that was written, things could be difficult to the point of being
impossible. A good example is the display control register (which is
write-only) and it's copy is kept in the system variables. Granted, for
most of the IO registers one can make an educated guess on what to put
in them when memory is restored so that the system won't crash, but
sometimes even this is a problem - most QL internal hardware  control
registers are indeed write-only and programming them wrong can result in
an immediate chrash since the software depends on the state of some bits
for proper operation. One such thing that comes to mind is tbe IPC link
communication. I'm sure, though that someone like L. Reeves could figure
it out, thoug :-)

Nasta


12. QBIDE PARTITIONS

>Do any of the PC based QL emulators read QBIDE formatted hard disks?
>If not, is it ever likely to be an available feature?

It is not available, but may become available sometime in the future. In
principle, it all boils down to what Nasta always calls the metadrivers. In
general, there is a problem with the current device driver structure because
it is only a very general framework with three "standard" device types:
serial devices, directory devices and screen devices. The concept is bad as
it does not separate the hardware interface from the higher level
implementation.

For example, in general, you can do all possible screen operations for all
possible screens as long as you have functions to
    read a pixel
    write a pixel
    convert between device and real colours (RGB or other)
    convert between real colour and device colour
All other functionality can be built on top of that.
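
A minimal sketch in C of what such a primitive-based screen interface might
look like (all names here are illustrative, not actual QDOS/SMSQ vectors),
with a rectangle fill built purely on the primitives and a toy in-memory
"screen" to exercise it:

```c
#include <stdint.h>

/* Hypothetical screen-driver interface: just the four primitives. */
typedef struct screen_ops {
    uint32_t (*read_pixel)(void *dev, int x, int y);
    void     (*write_pixel)(void *dev, int x, int y, uint32_t c);
    uint32_t (*to_device)(void *dev, uint32_t rgb);  /* real -> device colour */
    uint32_t (*to_rgb)(void *dev, uint32_t devc);    /* device -> real colour */
} screen_ops;

/* Any higher-level operation can be built on the primitives alone: */
static void fill_rect(const screen_ops *ops, void *dev,
                      int x0, int y0, int w, int h, uint32_t rgb)
{
    uint32_t c = ops->to_device(dev, rgb);
    for (int y = y0; y < y0 + h; y++)
        for (int x = x0; x < x0 + w; x++)
            ops->write_pixel(dev, x, y, c);
}

/* A toy 8x8 in-memory "screen" standing in for real hardware. */
typedef struct { uint32_t px[8][8]; } toy_screen;

static uint32_t toy_read(void *d, int x, int y)
{ return ((toy_screen *)d)->px[y][x]; }

static void toy_write(void *d, int x, int y, uint32_t c)
{ ((toy_screen *)d)->px[y][x] = c; }

static uint32_t toy_ident(void *d, uint32_t c)   /* 1:1 colour mapping */
{ (void)d; return c; }

static const screen_ops toy_ops = { toy_read, toy_write, toy_ident, toy_ident };
```

The same fill_rect works unchanged for any device that supplies the four
primitives, which is exactly the point being made above.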

For serial devices, the principle isn't too bad, as not much more than read
or write a byte is available.

For directory devices, the situation is even worse than for screens (it is
also a lot more difficult). For starters, there should be a layer to address
a block (usually a sector) on the physical device. This could be a
microdrive, any kind of floppy disk, any kind of hard disk, an IDE device,
a SCSI device or whatever. Then there is the layer which puts the data in
the blocks. Several of these have to be available, and each of them can use
any of the hardware block read/write drivers. This is necessary to allow
access to disks/devices which use different directory layout schemes.
Examples of these are the QL directory structure, or DOS or Win95 or maybe
even one of the Unix variants.
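
A hedged sketch of this layering in C (interfaces and names invented for
illustration, not the real QDOS driver vectors): layer 1 moves raw blocks
for some physical device, and layer 2 implements a directory scheme purely
in terms of layer-1 calls, so any scheme can sit on any hardware driver:

```c
#include <stdint.h>
#include <string.h>

enum { BLOCK_SIZE = 512 };

/* Layer 1: physical block access, one instance per hardware type. */
typedef struct blkdev {
    int (*read_block) (struct blkdev *d, uint32_t n, uint8_t *buf);
    int (*write_block)(struct blkdev *d, uint32_t n, const uint8_t *buf);
} blkdev;

/* Layer 2: a toy "directory scheme" (one name per block) that only
   ever talks to layer 1, so it works on any block driver. */
static int fs_put_name(blkdev *d, uint32_t slot, const char *name)
{
    uint8_t buf[BLOCK_SIZE] = {0};
    strncpy((char *)buf, name, BLOCK_SIZE - 1);
    return d->write_block(d, slot, buf);
}

static int fs_get_name(blkdev *d, uint32_t slot, char *out, int len)
{
    uint8_t buf[BLOCK_SIZE];
    if (d->read_block(d, slot, buf) != 0) return -1;
    strncpy(out, (char *)buf, len - 1);
    out[len - 1] = '\0';
    return 0;
}

/* A RAM-backed layer-1 driver standing in for flp/win/IDE/SCSI. */
typedef struct { blkdev ops; uint8_t store[16][BLOCK_SIZE]; } ramdev;

static int ram_read(blkdev *d, uint32_t n, uint8_t *buf)
{ memcpy(buf, ((ramdev *)d)->store[n], BLOCK_SIZE); return 0; }

static int ram_write(blkdev *d, uint32_t n, const uint8_t *buf)
{ memcpy(((ramdev *)d)->store[n], buf, BLOCK_SIZE); return 0; }
```

Supporting a new disk format would then mean writing only a new layer-2
module; supporting new hardware, only a new layer-1 driver.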

If it were handled like that, then the exchange of disks etc would be a
lot easier. It would also be much easier to interface any kind of new device
(as only a much smaller driver would need to be written), and it would be
easier to support new standards when they appear in the outside world....

Joachim

Joachim Van der Auwera wrote:
> For serial devices, the principle isn't too bad, as not much more than read
> or write a byte is available.
>
Yet, if you look at the way they are implemented, they are the most
flexible.
In principle, here is how this all works:
QDOS/SMSQ provides two classes of operations for IO devices:

1. IO management traps, TRAP#2
        supported operations:
        channel open, close, change user, fetch name
        file delete
        format
The first 4 are used for all IO device types, the last two only for file
devices. Open, delete and format require a 'file name' which in
SMSQ/QDOS terms also incorporates the device name in some manner. It
should be noted that simple 'serial', i.e. non-file devices don't have a
name length limit of 36+4 characters, but can be of any length up to the
maximum length of a SMSQ/QDOS string, 32767 characters.
It is somewhat mystifying that file delete is a special TRAP, instead of
either a special kind of open (tricky, because open is supposed to return
a channel ID if successful, whereas successful deletion obviously cannot
do that, since what it would be opening a channel to is being deleted),
or a special kind of format (this would be easier for a metadriver
structure, as it can make devices look like files and the other way
around). Also, shouldn't passing configuration parameters to devices be
done by 'formatting' them in some manner?
In general the above TRAP#2 calls abstract a device into a channel,
known to the system from there on by its channel ID - or, the other way
around, given a channel ID they change the owner job, or close the
channel. They do not actually do any IO operations as such. Once a
channel is open and you have its ID, you use the next class of
operation, outlined below.
In my opinion, this is where changes are needed - possibly not in the
actual operations themselves, but definitely more functionality should
be added. There should not, in my opinion, be a difference in device
'types' - window, serial, file - on this level, just as there isn't on
the next.

2. IO access traps
There are a ton of supported operations here, which do not apply equally
to all types of devices. However, since at this point you only use a
channel ID, i.e. an abstract of the actual device, if you ask it to
perform a function it cannot, it will just return a 'bad parameter'
error. In practice, the operation code is put into D0 and passed by
SMSQ/QDOS to the device driver code of the device the channel is opened
to, to determine if it can perform it or not. From a theoretical
standpoint, it is completely possible to, say, set the background colour
of a file, or adjust the creation date of a window. Of course, this is
not very useful, or is just plain nonsensical :-). However, historically
things like this have been used - for instance, in some early versions
the file pointer position was moved by invoking the window PAN function.
Regardless of the layering of device drivers, or metadevices, this part
of SMSQ should probably be left undisturbed.
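
The dispatch idea just described can be sketched in C (the driver table
structure and the opcode values here are illustrative, though ERR_BP = -15
is the real QDOS 'bad parameter' code): the driver behind the channel
either performs the requested operation or reports 'bad parameter':

```c
/* Hedged sketch of per-driver opcode dispatch, D0-style. */
enum { ERR_BP = -15 };                         /* QDOS 'bad parameter' */
enum { OP_SEND_BYTE = 5, OP_SET_PAPER = 39 };  /* illustrative opcodes */

typedef int (*io_op_fn)(int d1);

typedef struct {
    int      opcode;
    io_op_fn fn;
} op_entry;

static int ser_send(int byte) { (void)byte; return 0; }

/* A serial-like driver: it only knows how to send bytes. */
static const op_entry serial_ops[] = { { OP_SEND_BYTE, ser_send } };

/* The system hands the opcode to whatever driver owns the channel;
   anything the driver cannot do comes back as 'bad parameter'. */
static int io_dispatch(const op_entry *ops, int nops, int opcode, int d1)
{
    for (int i = 0; i < nops; i++)
        if (ops[i].opcode == opcode)
            return ops[i].fn(d1);
    return ERR_BP;         /* e.g. asking a serial port to set paper colour */
}
```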
 
> For directory devices, the situation is even worse than for screens (it is
> also a lot more difficult). For starters, there should be a layer to address
> a block (usually a sector) on the physical device. This could be a
> microdrive, any kind of floppy disk, any kind of hard disk, an IDE device,
> a SCSI device or whatever. Then there is the layer which puts the data in
> the blocks. Several of these have to be available, and each of them can use
> any of the hardware block read/write drivers. This is necessary to allow
> access to disks/devices which use different directory layout schemes.
> Examples of these are the QL directory structure, or DOS or Win95 or maybe
> even one of the Unix variants.
> 
> If it were handled like that, then the exchange of disks etc would be a
> lot easier. It would also be much easier to interface any kind of new device
> (as only a much smaller driver would need to be written), and it would be
> easier to support new standards when they appear in the outside world....
> 
> Joachim

I am glad that this is finally realised - in some cases I would
speculate these new drivers that would have to be written could become
almost trivial, if this layering already existed.

One class of device that SMSQ does not have as such is a 'virtual'
device. This would be a device which in fact 'makes' new devices, in
such a way that it 'aliases' names to existing devices. Such a device is
crucial for maintaining compatibility, distinguishing between different
users and what they can access, as well as just good organisation of
one's own computer system. With this type of a device, it would be easy
to dispense with all the various 'xxx_use' commands, and still add more
functionality. A quick example - consider that we already have a new
filing system which uses /\ directory delimiters. How would we make this
new structure compatible with old programs? Simple - we would use a
'compatibility device' which would translate the old names into new
names, and then we would alias it to win, flp, ram or whatever we need,
in as many instances, win1, win2, etc as we need.
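
A toy illustration in C of such an alias table (all device names and target
paths here are made up for the example): old-style names are rewritten onto
new-style paths before being handed to the real driver:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of a 'virtual device' alias table. */
typedef struct { const char *alias; const char *target; } alias_entry;

static const alias_entry aliases[] = {
    { "win1_", "/qdos/win1/" },   /* hypothetical new-style paths */
    { "flp1_", "/qdos/flp1/" },
};

/* Rewrite 'name' into 'out' if it starts with a known alias;
   return 1 on a rewrite, 0 if the name is passed through unchanged. */
static int devname_translate(const char *name, char *out, size_t len)
{
    for (size_t i = 0; i < sizeof aliases / sizeof aliases[0]; i++) {
        size_t n = strlen(aliases[i].alias);
        if (strncmp(name, aliases[i].alias, n) == 0) {
            snprintf(out, len, "%s%s", aliases[i].target, name + n);
            return 1;
        }
    }
    snprintf(out, len, "%s", name);
    return 0;
}
```

The various 'xxx_use' commands then reduce to adding or changing entries
in one table.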

I am currently thinking about a mechanism that would make connecting
various layered devices into 'stacks' forming a 'real' device as easy as
opening a file. In essence, creating a new device would then be akin to
creating a directory in a directory 'tree' from various pre-defined
devices, represented as directories. At the top level you would have the
'master' device representing the 'root' - actually your view of your
computer as a collection of IO devices - and as you go towards the
bottom level you progress through various levels of abstraction, like
consoles, ports, disks, and further down to file systems in use or
network paths, until you get to the 'file' level, which represents
actual files on the device or opened channels.
This approach has the added benefit that all the devices you may have
are actually represented as files, so the various device tables, which
have been limited to a small number of devices in use at a given time,
get to have a much higher limit (I think this used to be 8, and with
this system it would be at least 256).
Since in this system some 'directory' paths to a given device could get
very long (a long list denoting the makeup of a particular device), you
could easily use the virtual device to create shortcuts, more similar to
the actual device names we have now.

Nasta


13. HASH TABLES

A Halliwell wrote:
> |wot's a 'hash  table'??
> 
> A slightly complicated data structure based on linked lists, if my
> memory of last year's lectures serves.
> 
> They're designed to make searches as quick as possible.
> (By creating a unique key and placing it at a specific point in a table
> of linked lists... Or something...)

A hash table is a way to maintain quick access to the elements in a
list. If you have, say, a number of persons of different ages:
(mary:19), (sue:22), (john:27), (william:21) 
and want to sort them into an array, the result would be:
a$(1) = 'john':a$(2) = 'mary':a$(3) = 'sue':a$(4) = 'william'
Now we want to place the age of each person in another array:
b(1) = 27:b(2) = 19:b(3) = 22:b(4) = 21

The problem now is that if we want to find out the age of a
person, the list would have to be searched (preferably using 
binary search). This can be slow if the list is big.

If we use hash tables instead, the index of each person is
determined by a hash function:
a$(h('william')) = 'william': a$(h('john')) = 'john'
b(h('william')) = 21: b(h('john')) = 27
and so on.
and to retrieve the age of say John would simply be a matter of
PRINT b(h('john'))

If the list is large, hash tables are considerably faster than
the binary search method. Hash tables actually work at the same speed
regardless of the size of the list. One disadvantage with hash tables
is that (in the 'sparse' hash table variant) the tables (arrays) need 
to be considerably larger than the number of elements they can hold.
So instead of
DIM a$(4,20),b(4)
perhaps we would need
DIM a$(8,20),b(8)

In principle this is how hash tables work. There are however some
extra quirks to them, such as what to do when two entries yield
the same hash-value ( i.e. if h('william') = h('john') ).

There is also another variant of hash tables: dense hash tables, where
you place a linked list in each position of the array, and do not mind
if several items get the same hash value (I think this is what
Halliwell referred to). Oh, well never mind.....
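
A minimal chained hash table in C, corresponding to the 'dense' variant
just described, using the same name->age data as the example above (the
hash function is an arbitrary choice):

```c
#include <string.h>

enum { NBUCKETS = 8 };

typedef struct entry {
    const char   *key;
    int           value;
    struct entry *next;        /* chain for keys that collide */
} entry;

typedef struct { entry *bucket[NBUCKETS]; } hashtab;

/* A simple string hash, reduced to a bucket index. */
static unsigned h(const char *s)
{
    unsigned v = 5381;
    while (*s) v = v * 33 + (unsigned char)*s++;
    return v % NBUCKETS;
}

/* Insert an entry; the caller owns its storage. */
static void ht_put(hashtab *t, entry *e)
{
    unsigned i = h(e->key);
    e->next = t->bucket[i];    /* colliding keys just share a chain */
    t->bucket[i] = e;
}

/* Look a key up: hash once, then walk only that bucket's chain. */
static int ht_get(const hashtab *t, const char *key, int *value)
{
    for (entry *e = t->bucket[h(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { *value = e->value; return 1; }
    return 0;                  /* not found */
}
```

Lookup cost depends on the chain length, not the total number of entries,
which is why the speed stays roughly constant as the list grows.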

Hope this clarified the issue a bit.

/Per-Erik


14.  IPC 8049

I looked at the IPC 8049 QL disassembly (available on WWW) and would like
to add a few comments to it. They are related to the sound control logic
only.


1) The variables at locations 42-4A (hex) have the following meaning/use:
  42: current pitch, updated according to the sound wrapping logic
  43: same as 42, but updated according to the random parameter logic
  44: same as 43, but updated according to the fuzzy parameter logic
      this is the actual instantaneous pitch value
  45-46: 16bit (little endian) time elapsed for this step interval (used by
      the wrapping logic)
  47-48: 16bit (little endian) total elapsed time (used unless the sound has
      infinite duration)
  49: repeat counter (used by the wrapping logic)
  4A: counter of timer ticks lost while executing the interrupt service routine

2) The comments to the routines at 700 and 70B are wrong:
  700 sets A = (R1)-(R0) and is used to compare two bytes
      the carry flag is set iff (R0)<=(R1)
  70B compares two little endian 16bit values: ((R0),(R0-1)) and ((R1),(R1-1))
      R0 and R1 are preserved
      A is not preserved
      the zero flag is set iff the two values are the same
      the carry flag is set iff ((R0),(R0-1)) <= ((R1),(R1-1))
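
For emulator authors, the documented semantics of the 70B routine can be
modelled in C like this (a behavioural model of the comparison only, not
the 8049 code itself):

```c
#include <stdint.h>

/* The flag results the disassembly comment describes. */
typedef struct { int zero; int carry; } cmp_flags;

/* Assemble a 16-bit value from its little-endian byte pair. */
static uint16_t le16(uint8_t lo, uint8_t hi)
{
    return (uint16_t)(lo | ((uint16_t)hi << 8));
}

/* zero is set iff the values are equal; carry iff a <= b. */
static cmp_flags cmp16(uint16_t a, uint16_t b)
{
    cmp_flags f = { a == b, a <= b };
    return f;
}
```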

3) At line 008D the comment should be changed to the following:

  "start timer @11MHz/32/15 -> 1 tick/43.6 usec -> timeout=11.2 ms"

  (as ZN said in a previous mail, the 11MHz frequency is divided by 15
  to get the instruction clock frequency, which divided by 32 results
  in the timer tick frequency)
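
The arithmetic in that comment can be checked directly; the sketch below
assumes the 11.2 ms timeout corresponds to an 8-bit timer overflowing
after 256 ticks, which matches the quoted figure:

```c
/* 11 MHz crystal, /15 for the instruction clock, /32 for the tick. */
static double tick_usec(void)
{
    double instr_hz = 11000000.0 / 15.0;  /* ~733 kHz instruction clock */
    double tick_hz  = instr_hz / 32.0;    /* ~22.9 kHz timer tick */
    return 1e6 / tick_hz;                 /* ~43.6 us per tick */
}

static double timeout_ms(void)
{
    return tick_usec() * 256.0 / 1000.0;  /* 8-bit overflow: ~11.2 ms */
}
```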

Hope this may help authors of QL emulators.


Ciao,
Daniele Terdina


15.  PARAMETER PASSING

Parameter passing - any non-expression parameters are passed by 
 reference, that is, changing the value in the proc or fn changes the 
 value of the real variable outside the routine. The exception to this 
 is Supercharge compilation. Supercharge does not support reference 
 parameter passing - I seem to recall Simon Goodwin mentioning that 
 reference parameters were not fully documented at the time.

Turbo can work either way - if you explicitly make the parameter(s) 
 reference ones by using the REFERENCE keyword, they will behave as in 
 SuperBASIC.

If you wish to prevent values being passed by reference for any reason, 
 just put the parameter in brackets to force it to be passed by value 
 instead, thus preventing the value changes from affecting the original 
 variables.

If you have problems remembering which type of parameter is which, the 
 reference parameter is the one that can be changed. Ones passed by 
 value effectively calculate the result of the expression and create a 
 new copy of the parameter, which is not passed back after the proc/fn 
 finishes.

Example of reference parameter passing. This routine merely doubles the 
 value of the variable passed to the routine, in this case "num":

LET num = 1
DOUBLE num
PRINT num
STOP
DEFine PROCedure DOUBLE (n1)
LET n1=2*n1
END DEFine

That should print 2, whereas this next one (pass by value) still prints 
 1 - the value of num has not been doubled in this case.

LET num = 1
DOUBLE (num)
PRINT num
STOP
DEFine PROCedure DOUBLE (n1)
LET n1=2*n1
END DEFine

The line which says DOUBLE (num) has forced the parameter to be a 
 pass-by-value type by converting it to an expression by putting it in 
 brackets, or it could equally well be a line like this, which would 
 achieve the same thing: DOUBLE num+0

In this sort of routine, passing by value is useless as you want the 
 variable "num" to be changed when the program returns from the 
 routine. Reference parameters can be useful in recursive routines, for 
 example, where you want to change the value of parameters on return 
 from each depth of recursion. Equally, reference can be a pain if you 
 specifically don't want the value changed on return!

KEYWORDS BOOK - While I welcome this book as an all-singing all-dancing 
 guide to BASIC on the QL and SMSQ and everything else, I shudder at 
 the thought of supplying it in Text 87 format, for the simple reason 
 that no two users have the fonts and drivers set up in the same way. 
 What will print nicely on Roy Wood's or Rich Mellor's printers would 
 probably screw up completely on mine unless I happen to be using the 
 same driver. And you rarely get it to look the same on screen as 
 printed anyway - when was the last time you saw right justification 
 look as perfect on screen as on paper in Text 87? It is almost 
 impossible to get a display font with exactly the same pitch as the 
 printed ones. For electronic copies, you'd have to either include the 
 necessary software to view and print it, or tell the users where to 
 get free copies of the necessary software, or give it in a format 
 everyone will have and where the display/print problems will be 
 minimal (i.e. Quill/Xchange doc file or plain text).

Roy and Rich would be welcome to have the sources to my Text File 
 Viewer and File Finder programs to make an electronic search/display 
 program for a plain text version if they wish.

Dilwyn Jones
dilwyn.jones@bbc.co.uk


16. QDOS FILE HEADERS

>how do QLay and Q-emuLator recognise QDOS headers, that is:
>- what is the format of a file qlay.dir
>(I expect it to be equal to the format of a QDOS mdv directory,
>am I right ?)
>- what is the format of the header Q-emuLator adds before
>non data(type 0) files ?

I think that uqlx and QLay both store the header in a separate file.
Q-emuLator for Windows stores part of the header at the beginning of files.
The header is present only when it is useful, i.e. only if it contains
non-default information.


The header has the following format:

OFFSET  LEN     CONTENT
0       18      "]!QDOS File Header"
18      1       0 (reserved)
19      1       length of header, in 16 bit words, including trailing info
20      length_of_header*2-20   QDOS INFO

The first 18 bytes are there to detect whether the header is present (ID
string).
The headers I support can be 30 bytes or 44 bytes long. In the first
case, there are 10 bytes with the values present in bytes 4 to 13 of the
64-byte QDOS header. In the second case the same piece of information is
followed by 14 bytes containing a microdrive sector header, useful for
emulating microdrive protection schemes.
Additional header information (length, name, dates) is obtained directly
from the file through the host file system.
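
From the table above, detecting the header and computing its length is
straightforward. A small C sketch (the function name is mine, not from
the Q-emuLator sources):

```c
#include <stdint.h>
#include <string.h>

/* Return the header length in bytes, or 0 if no header is present. */
static int qemu_header_len(const uint8_t *file, int filelen)
{
    if (filelen < 20)                                 /* too short */
        return 0;
    if (memcmp(file, "]!QDOS File Header", 18) != 0)  /* 18-byte ID string */
        return 0;
    /* Byte 18 is reserved (0); byte 19 is the length in 16-bit words,
       including the trailing info, so total bytes = length * 2. */
    return file[19] * 2;                              /* 15 -> 30, 22 -> 44 */
}
```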

Some QL programs to translate between QDOS and Q-emuLator for Windows file
formats are included in the Q-emuLator package.

The translation is automatically performed when you move files between QDOS
floppy disks and Windows directories through Q-emuLator.


The Mac version of Q-emuLator uses yet another scheme: QDOS information
is stored in the file's resource fork. (On the Macintosh all files have
both a data fork - corresponding to a Windows, DOS or UNIX sequential
file - and a resource fork, containing structured information called
'resources'.)


Daniele Terdina


17.  SEMAPHORES

Joachim Van der Auwera wrote:

> 
> I repeat, what would be best and in the SMSQ spirit would be to have
> semaphores with a timeout mechanisms which is provided by the OS !
> 

Let us discuss this again as I believe that this is THE fundamental
feature of QDOS/SMSQ/Stella which makes it "better".

First a little historic background.

I have been in contact with Tony Tebby since September 1992. At that
time I was disappointed that QDOS would die, so I visited him with the
idea that there might be another way to market QDOS than in a home
computer (ever seen a numerically controlled machine tool?).

At that time he gave me some documents about a new SMS3 system. But when
I reread these documents now, I feel what I already felt then: he was
not really ready. Furthermore, after the Gulf War the market was
depressed, and I concluded that it was too risky to try something at
that time.

However we continued with 2 things:

 - me trying to imagine marketing solutions (during my spare time),
 - TT developing his ideas (when he had spare time).

And I have the great privilege of having, at least once a year, about 8
hours to discuss all this thoroughly with him (when we travel from
Paris to Eindhoven).

If you read the QDOS manual dated 1984, you read:

<<
2.2.3. Atomic Routines
In general system calls are treated as atomic: while one job is in
supervisor mode, no other job in the system can take over the processor.
This provides for resource table protection without the need for complex
procedures using semaphores.
>>

In the SMS3 documents that TT wrote about 5 years ago, he simply states
that he never had a use for semaphores, but also that they might be
implemented if they really were needed.

About semaphores with timeout: TT came and visited the 1993 Real Time
System Show in Paris and looked at a programmer's manual for a real time
kernel named MCPC from a very small French company (I had found this
manual very similar to a QDOS programming manual, and that is how I
began to dream about a possible future for QDOS in this branch). He
immediately told me that this was, in his opinion, one of the cleverest
systems he had seen: it had semaphores WITH TIMEOUT. So he knows, of
course...

HOWEVER,

As you know, TT also taught computing science in a French school for 2
years.

On that occasion, as he had to teach the use of semaphores (Argh), I
think that he looked more deeply into the subject and eventually
became convinced that using the academic concept of semaphores to
protect shared resources is even more dangerous than he had thought
until then.

Here is what he wrote in a document about Stella dated October 97:

<<
Contention for resources

In principle, computer systems are predictable. In practice it can be
very difficult to predict the behaviour of a system if certain
algorithms are used. There are two approaches to this problem.

The most common approach is to pretend that it does not exist. For
example, the leading suppliers of semaphore based "real-time" systems
all claim that all operating system calls have well-defined execution
times and quote them. However, since many of these calls access
resources protected by semaphores, the time to access those resources
depends on whether they are already in use by another task. In general
these suppliers claim "worst case" figures for these calls that are less
than the time taken for a "best case" semaphore ping (i.e. the figures
are false).

There are two defences of this "optimistic" performance evaluation.

 1. Contention is a rare occurrence.

 2. The time taken for a semaphore ping depends on how long the other
task will take before it releases the semaphore and, therefore, cannot
be evaluated.

The first justification is specious. If there were no possibility of
contention, there would be no need for protection. If contention is
possible, it must be taken into account for the worst case. The second
justification only reveals that these suppliers are well aware that they
cannot even define the worst case performance of their systems.

Using semaphores renders the performance of a system intrinsically
unpredictable, as the execution time of any function accessing a shared
resource depends on the internal state of the system. Moreover, even if
semaphore deadlocks are avoided, it is possible that several critical
resources may be in use by low priority tasks when required by high
priority tasks. In this case the high priority tasks may be delayed by a
cascade of semaphore pings.

This is not the most serious problem. The probability of a particular
resource being in use when required rises rapidly as the load on the
system rises. This causes the average operating system efficiency to
drop dramatically as the load increases, which has the same effect on
the processor as further increasing the load. The result can be a fold
back in performance that reduces the system overload point by more than
an order of magnitude with respect to the "normal" overload point
established during system tests.

This non-linear behaviour means that statistical techniques can not be
used to predict the probability of overload and system collapse.

The Stella approach is rather different. No mutual exclusion mechanisms
are used within Stella, so that the execution time of every operating
system call is independent of when the call is made. Proprietary INTSAFE
asynchronous operations are used for inter-task communications. More
complex operations are divided into sequences of INTSAFE and atomic
operations.

Contrary to current academic dogma, the use of atomic operations does
not give extended interrupt latency or excessive pre-emption delays as
semaphores themselves require atomic operations internally. With
guaranteed interrupt latencies of 6 "instructions" and pre-emption
delays of "25" instructions Stella outperforms the leading semaphore
based systems under the most favourable conditions. Unlike semaphore
delays, however, delays due to atomic operations can not accumulate or
cascade. Under unfavourable, heavy load conditions, Stella outperforms
classical systems, not by "percents", but by orders of magnitude.
>>

There are a series of other very clever mechanisms described in this
presentation of Stella. However in my opinion the answer to "Why is SMSQ
better" is given in this short text by TT. 

The bad news is that these concepts seem not to be clear enough, even in
the minds of the best QL programmers still around (this was the sense of
"Remember your QL?", of course not a "Hello Linus").

Furthermore, millions of software engineers working around the world use
bad tools based on lazy concepts formalised in a certain M. Dijkstra's
book back in 1965 and made popular in 1968 by a certain Kernighan,
Ritchie et al. And this seems to work (thanks to the MMU, which allows
processes to crash without crashing whole systems).

However I am convinced that there is a path (and maybe more than one
path) into the future. 

So now there are some questions.

What do you think about this "semaphore forbidden" philosophy?

Would some of the remaining QL users help to try something?

Is this all still QL related?

This mailing list is a particularly useful tool for such a
brainstorming, which may lead to NUL, without making big announcements
in QUANTA or QL Today (we do not need a new QLAW fiasco). And thank you
very much to the one who organised the mailing list.

Sincerely
arnould.nazarian@hol.fr

PS. Please tell me if you wish more. Feel free to comment. This is the
sense of this discussion.

I expected this question of course.

Stella is something like SMS4 or QDOS Version 4, but without a graphical
user interface or file system. It is all the basic routines needed by a
very modern OS, I think very optimised, for possible future use in a
workstation or elsewhere.

 - Who: Tony Tebby alone
 - When: during the last (7 to 10) years during spare time
 - Where: in France
 - Whither: (I don't understand)
 - Whether: (I don't understand)
 - Whence: (I don't understand)
 - Wherefore: for any possible application needing a better OS (modular:
a workstation would not have the same system routines as an Anti-lock
Braking System)
 - Why: I think because it is very difficult to develop SMSQ/E into the
right direction due to compatibility problems, even if some ideas also
flow into SMSQ/E.
 - What: the core routines needed by a modern OS like

memory management
schedulers (different versions for different applications)
entity management, an entity being something (not a Thing) known by the
OS and usable by all other entities, eg. jobs, device drivers, memory
pools, simple code, simple data, etc.
routines to link different device drivers
external event manager
handlers (new name for tasks)
self cleaning routines (not like garbage collection in Java!)
...

everything written in a modular way (the user loads only the needed
modules, but after boot-up the system works in a very integrated way,
with possible direct links between different resources).

But again no file system or graphical user interface yet or device
drivers, even if the new screen driver is written with Stella in mind.

This all CAN work because there is no need to worry about response
delays or deadlocks due to the use of semaphores...

So one author and no user as of today.

Sincerely
arnould.nazarian@hol.fr

BTW I dislike the name 'Stella'. Imagine what you can find when you
start a search on the Internet or Dejanews with 'Stella'. If one does
this with 'QL', or 'SMSQ', or 'SuperBASIC', one usually finds what one
is looking for. StellaOS could be better, but there must be something
better than that.

18. PROCESSES & THREADS
 
From: Thierry Godefroy <godefroy@imaginet.fr>
Subject: Re: [ql-users] Why is QDOS/SMSQ better?

At 11:16 17/05/98 +0200, Arnould Nazarian wrote:

> .../... (in Unix a process is something like a QL machine, and a thread
>something like a QDOS job so link many QLs with a high speed network or
>"pipes" and you have smthg like Unix).

NO !  The UNIX "process" equivalent is the QDOS/SMS "job".

We do not need the "thread" concept in QDOS/SMS because in fact these
"threads" are still jobs. In UNIX you cannot create a new "process" from
scratch; you can just duplicate (fork() call) an existing one or
overwrite (exec() call) it. Under QDOS any job can create as many jobs
as it wants without having to duplicate and then overwrite a "process".
This is why the "thread" concept is needed in UNIX and not in QDOS.
Threading makes some part of a program (usually a function) multitask
with the program itself, i.e. it creates a sort of "process" out of an
already running "process" (this is not possible with the fork() and
exec() UNIX calls, hence the threads).
Example of "threads" under QDOS: ACP creates child jobs (watchdog, Item
List daemon, etc...) which would be called threads under UNIX.
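
The fork()/wait() pattern Thierry describes looks like this in C; the
child here is the nearest POSIX analogue of a QDOS child job such as
ACP's watchdog:

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn a UNIX child "process" the only way possible: by duplicating
   the current one with fork().  The child just exits with a code; the
   parent waits for it and reports that code back. */
static int spawn_child(int code)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;              /* fork failed */
    if (pid == 0)
        _exit(code);            /* child: its own schedulable entity */
    int status = 0;
    if (waitpid(pid, &status, 0) < 0)
        return -1;              /* parent: wait like a watchdog would */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Under QDOS the same effect needs no duplication: a job simply asks the
OS to create another job directly.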

Thierry (godefroy@imaginet.fr).

19. INTERRUPTS

From: Thierry Godefroy <godefroy@imaginet.fr>
Subject: Re: [ql-users] Remember your QL? (Very long!)

At 10:34 17/05/98 +0200, Arnould Nazarian wrote:

>For the ones interested, there is a good introduction to these problems
>in Tannenbaum's book about Operating Systems.
>.../...
>The solution to disable interrupts to perform some action on
>shared/exclusive resource is the first one explained (page about 60).
>But it is not developped at all because the author says a job could
>"forget" to restore interrupts leading to system loss. And that is the
>_only_ reason he gives to write that this not a usable solution!
>
>Now that is what QDOS does when it enters an atomic routine. 

NO !!!  QDOS/SMS does NOT disable interrupts when entering an "atomic"
routine (re-read my 03/05/98 message to this list about it). When
supervisor mode is entered, the scheduler just exits without doing
any job switching, but IRQ2/5/7 are still enabled, the scheduler
is still entered 50 times per second (60 on JSU, 71 on SMS2), and the
polled interrupt tasks are also always performed.

Thierry.

From: Per-Erik Forssén <perfo897@student.liu.se>
Subject: Re: [ql-users] Why is QDOS/SMSQ better?

Marcel Kilgus wrote:
> 
> > In the QDOS manual, it is written that a user job may disable
> > interrupts, (ie under QDOS go into supervisor mode, is this the
> > same?).
> 
> No, normally interrupts are still enabled in supervisor mode.

To disable interrupts you must:

1. Enter supervisor mode, i.e.
    TRAP    #0

2. Set the interrupt mask to NMI (Level 7)
    ORI.W   #$0700,SR

   This is a privileged operation, which is why you have to
   enter supervisor mode first.

This is bad practice, but it was common among early QL games
to do this. In effect it disables QDOS and gives complete control
to the current application. Only a few of the OS calls still work
afterwards.

/Per-Erik


20. DEADLOCKS

From: A Halliwell <u5a77@uga.keele.ac.uk>

|
|Thierry Godefroy wrote:
|>
|No offense, but many of the postings on deadlocks here have sounded a
|bit weird. Please keep this discussion going, as it seems almost as
|if we are talking about different things.
|
|Does everyone agree with me on this:
|A deadlock is when two or more processes (jobs) are waiting
|indefinitely for resources held by each other.

A better way of putting it would be...
"A deadlock is when two jobs hold resources that are required by the other.
Neither job will release its resources until it has completed its task, and
thus, they will wait forever for the resources to free..."

Replace INDEFINITELY with INFINITELY in yours, and it's right.

|This prevents data structure corruption as you say, but certainly
|not deadlocks. Atomic system calls is a good idea, but it does not do
|away with deadlocks.

I think the ONLY things that can do away with deadlocks permanently are
either the client/server option or a TIMEOUT....

(The client/server option is basically like this....)

   Client A                                Server
   -----                                   ------
   |   |           Client B                |    |
   |   |           -----                   |    |
   |   |           |   |                   |    |
   -----           |   |                   |    |
                   |   |                   |    |
   Client C        -----                   |    |
   -----                                   ------
   |   |
   |   |
   |   |
   -----

o  Whenever a resource is required, a client talks to the server.
o  The server can never initiate a conversation between client and server.
o  If the server doesn't hold the resource, it can then query another client
   for the information (the conversation was still initiated by the first
   client).
o  If the server must ask for information from another server, then it follows
   all the rules as a client.
o  Only one client may talk to a server at once.
o  All client/server interactions WILL have a definite timeout situation.
o  The server must never allow a situation where a loop is formed when talking
   to other servers. 


           (i.e. Server A >> Server B >> Server C >> Server A. )

Even though this was originally designed for networks, the client/server
model could actually be used for different programs.

i.e. One program handles all resources for a specific area. (Call it a
daemon... I like that term). Lots of other programs want various things from 
it. They open a pipe to the daemon, and the daemon then appears to be 'in 
use'. It then queries the program that the information is available from, 
and waits for a set length of time to receive it. Then it passes it back to 
the original program, or calls a time out, sending the timeout signal to the 
querying program. It must then try again, and the next job in the queue gets 
its go....

I *think* this is proof against deadlock. (Remembering that deadlocks last
for infinity or until manual release/killing jobs/switch off).
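The timeout rule above is the load-bearing part of the argument: if no request can block forever, the circular-wait condition needed for a deadlock can never hold indefinitely. Here is a hypothetical single-threaded Python sketch (class and method names are invented for illustration) of a server that grants a resource or times out:

```python
# Hypothetical sketch of the timeout rule: a client's request on a busy
# resource fails after a deadline instead of blocking forever, so two
# clients can never end up waiting on each other for eternity.

class Server:
    def __init__(self):
        self.holder = None          # client currently using the resource

    def request(self, client, timeout_ticks):
        # Poll for the resource; give up after 'timeout_ticks' attempts.
        for _ in range(timeout_ticks):
            if self.holder is None or self.holder is client:
                self.holder = client
                return True         # granted
        return False                # timed out: caller must retry later

    def release(self, client):
        if self.holder is client:
            self.holder = None

srv = Server()
srv.request('A', 3)                 # A gets the resource
blocked = srv.request('B', 3)       # B times out instead of waiting forever
srv.release('A')
granted = srv.request('B', 3)       # on retry, B gets its go
```

The "next job in the queue gets its go" behaviour in the mail corresponds to B's successful retry after the timeout.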


21. NUMBER OF DEVICE NAMES

From: Phil Borman <pb@pborman.demon.co.uk>
Subject: Re: [ql-users] New facilities

Marcel Kilgus wrote:
> 
> > Maybe we should also remove the limitation of 8 physical devices per
> > driver (I hardly have more than 3 microdrives, 4 floppies, but up to
> > 8 IDE partitions [and I would have more, if possible...] and now
> > about 6 QXL.WIN files (maybe more later!).
> 
> The real problem is IMO that QDOS cannot handle more than 8 (I think it was
> 8) drives (every once-accessed drive counts; even a DIR is enough),
> regardless of their name.
> 
> Marcel

The limit is 8 drives per device (win1_ to win8_ for example)
and sixteen drives in total, so if you use all 8 win drives and 8
ramdiscs, you can't access your floppy disc as all 16 drive slots are
full ;-(

Phil.

From: <GPlavec@aol.com>
Subject: Re: [ql-users] New facilities

 > The limit is 8 drives per device (win1_ to win8_ for example)
 > and sixteen drives in total, so if you use all 8 win drives and 8
 > ramdiscs, you can't access your floppy disc as all 16 drive slots are
 > full ;-(

Yes, but only as long as files are still open on all devices (I seldom have
16 files open on 16 devices at a time); otherwise you can use DEL_DEFB to
free some unused slots.

Gérard Plavec - GPlavec@aol.com
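The mechanism being discussed - a fixed 16-entry device-open list where DEL_DEFB reclaims entries with no open channels - can be sketched as follows. This is a hypothetical Python model (the function name and data layout are invented; QDOS keeps this state in system tables, not a dict):

```python
# Hypothetical model of the 16-entry device-open list: a new device only
# fits if a slot is free, or if an entry with no open channels can be
# reclaimed (which is what DEL_DEFB does by hand).

MAX_SLOTS = 16

def open_on(slots, device):
    """slots: dict mapping device name -> open-channel count."""
    if device in slots:
        slots[device] += 1
        return True
    if len(slots) < MAX_SLOTS:
        slots[device] = 1
        return True
    # All 16 slots used: reclaim one with no open channels, if any.
    for dev, count in list(slots.items()):
        if count == 0:
            del slots[dev]          # the DEL_DEFB-style reclaim
            slots[device] = 1
            return True
    return False                    # genuinely full: 16 devices with open files

# 8 win drives plus 8 ramdiscs already accessed, but none with open files:
slots = {'win%d_' % n: 0 for n in range(1, 9)}
slots.update({'ram%d_' % n: 0 for n in range(1, 9)})
ok = open_on(slots, 'flp1_')        # succeeds: an idle slot is reclaimed
```

This matches Gérard's observation: the limit only bites when all 16 devices have files open at the same time.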


22. DEVICE DRIVERS

From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] New facilities

Marcel Kilgus wrote:
> 
> > Oh, and need I remind you that about 80% of the win and hdk (and to a
> > large percentage even the flp) driver is the same? Why waste time on
> > writing it when it could be shared code!
> 
> In SMSQ it is shared code.
>
Yes, but the layers are not available to the user. I have seen some
documentation about DV3 where hooks are made into layers of the drivers
so that similar type drivers can share the same code. The point is, many
of those layers could be drivers in their own right and thus new drivers
could be constructed by just connecting them together. Unlike DV3 hooks,
this could even be done from SBasic.
> 
> > Unless you already have something called win and want to use it. xxx_USE
> > must be one of the most dirty commands there is - it is S(uper)basic
> > command code that searches the OS device tables (which it has no
> > business searching in the first place) and modifies the name of the xxx
> 
> Only the "working name".
>
Which it doesn't 'own' anyway and has no business tampering with. Some
early xxx_USE were quite problematic - say FLP_USE would change RAM if
RAM_USE FLP was done before it. Of course, what is really needed is an
'alias' device but then you have to be careful about the maximum of 16
devices with channels opened to them...
>
>> If drivers were
>> layered, the 36 character thing would have been simpler to sort out,
> 
> I don't know exactly what you mean with layered, but I think DV3 is what you want.
>
Almost. It would be nice if the device itself, on its 'top' layer
(meaning the one viewed by all components in the system), had a means to
have configuration data read and written; amongst that data, the setup
of what the next layer in the make-up of the device is would be the most
useful.

When I say layered, I mean distinguishing between layers of 'protocol'
translation imposed on the data to make the data as it appears to the OS
become something which the hardware can handle.

For instance, in a flp device, you have two levels at least. The lowest
is hardware access. The top of this layer is sector access, it operates
by reading and writing sectors, and setting up the hardware to be able
to read and write sectors in the first place. It also needs to know
which instance of the hardware it is to access - like which floppy
connected to the one controller chip this layer interfaces to. The next
layer is packetizing data into sectors, i.e. file access on the level of
a single file and organisation of files into directories. This layer
accepts standard OS calls for file access and translates them into
simpler, sector access operations, which are performed by the lowest
level. Of course, this file access layer can be split into more layers -
for instance, file access at the level of a single file would be one,
and then the directory organisation would be another. Even a topmost
layer could be added (or more of them) which decides on 'routing' - in
essence, what the device is called by the OS.
The gain in this case is, that with changes in hardware, only the
hardware layer is changed. As long as it can operate with sectors, it
will be transparent, regardless of the fact the hardware might be
completely different. Also, if you want a new file system, you change
the top layer. Or, in the example where the top layer is split into two,
while the files themselves stay the same, but say, the directory
structure changes (/ instead of _), you again change only the layer that
handles that. The nice thing is, the layer that handles stuff like that
is one and the same for all devices that are capable of file access -
hence, changing that layer automatically updates all devices. The way
files are packaged into sectors and directories are formed might be the
same for all file devices, hence those layers would be common and the
code would be shared. Also, sometimes a single hardware device has to be
shared by several logical devices of a different nature - like the SCSI
example we had a couple of days ago. The hardware layer caters for the
particular hardware expansion card, but once that is done, we gain file
access almost automatically, and if we want to have different devices
off the SCSI, our hardware layer recognises SCSI IDs as different
physical devices - then we just have to connect the different higher
level devices to different hardware layer devices and we have clean
access by many SMSQ/E devices over one single piece of hardware. It is
all about levels of data abstraction. And, it solves a lot of problems
all in one blow.

Devices with only the routing level implemented are especially
important. They would be able to represent any directory or file on any
other device as yet another device. Of course, routing devices are much
more flexible and can handle a lot of stuff traditionally done by
xxx_USE but also, with extensions (yet other layers), handle filtering,
networking and network protocols, file system conversions on the fly, etc. -
or just simple things like aliasing a directory somewhere to temporarily
be flp2_ in a single floppy system.
To do this, the devices need a means to send them configuration data.
Also, the higher layer devices need to 'inherit' all the operations of
the lower layers - in effect, they can be bypassed. Even the lowest
level (hardware access) can conceivably be 'bypassed' to access the
hardware location by location. 
With those capabilities, it is possible to, say, produce a device which
is called win1_ but keeps its sectors in a file on another device, like
the QXL.WIN. This device doesn't really have a true hardware access
layer - it uses other devices to handle that. The flexibility of setting
up this device would allow you to use any file on any device as your
QXL.WIN. Of course, this is just a simple example.

Nasta
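Nasta's layering idea, including the closing win1_-in-a-QXL.WIN example, can be illustrated with a short sketch. This is a hypothetical Python model (the class names and the one-sector-per-file "filesystem" are invented to keep it small): the top layer is written purely against a sector interface, so the "hardware" underneath can be a real controller or just a container file on another device.

```python
# Hypothetical sketch of driver layering: the top layer only knows
# read_sector/write_sector, so swapping a real controller for a container
# file (the QXL.WIN case) changes nothing above the sector layer.

SECTOR = 512

class FileBackedSectors:
    """Lowest layer: 'hardware' access, here backed by a plain byte
    buffer standing in for a container file on another device."""
    def __init__(self, n_sectors):
        self.store = bytearray(n_sectors * SECTOR)
    def read_sector(self, n):
        return bytes(self.store[n * SECTOR:(n + 1) * SECTOR])
    def write_sector(self, n, data):
        self.store[n * SECTOR:(n + 1) * SECTOR] = data.ljust(SECTOR, b'\0')

class FlatFiles:
    """Higher layer: a trivial one-sector-per-file 'filesystem', written
    purely against the sector interface, never against the hardware."""
    def __init__(self, sectors):
        self.sectors, self.directory = sectors, {}   # name -> sector number
    def save(self, name, data):
        n = len(self.directory)
        self.sectors.write_sector(n, data)
        self.directory[name] = n
    def load(self, name):
        return self.sectors.read_sector(self.directory[name])

fs = FlatFiles(FileBackedSectors(8))
fs.save('boot', b'hello qdos')
back = fs.load('boot')[:10]
```

Replacing FileBackedSectors with a class driving a real floppy or SCSI controller would leave FlatFiles untouched - which is the whole point of the layering argument.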

From: <GPlavec@aol.com>
Subject: Re: [ql-users] New facilities

<< > > Unless you already have something called win and want to use it.
xxx_USE
 > > must be one of the most dirty commands there is - it is S(uper)basic
 > > command code that searches the OS device tables (which it has no
 > > business searching in the first place) and modifies the name of the xxx
 > 
 > Only the "working name".
 >
 Which it doesn't 'own' anyway and has no business tampering with. Some
 early xxx_USE were quite problematic - say FLP_USE would change RAM if
 RAM_USE FLP was done before it. Of course, what is really needed is an
 'alias' device but then you have to be careful about the maximum of 16
 devices with channels opened to them... >>

Please take care, there are different things which must not be mixed up here:

1) FLP_USE, RAM_USE, MDV_USE, WIN_USE, etc. which change the name of device
drivers in the OS and have to be used carefully because changes are made to
the whole system (including all jobs).
--> FLP_USE can be very useful if you have - for example - 2 interfaces with
FLP, after FLP_USE NIX you can use the first one using NIX1..8 and the second
one using FLP1..8
--> If you change FLP to NIX - without having 2 interfaces with FLP - then
FLP1..8 no longer will be found by any job or the system, but you can use
NIX1..8
--> if you had FLP1_ and FLP2_ in the device-open-list(max.16), these are
automatically changed to NIX1_ and NIX2_ by the system !!! (I just tried
it.) No disadvantage in this case.

2) DATA_USE, PROG_USE, DEST_USE, DDOWN, DUP, etc. only fix default drives
(unfortunately for the whole system and not for each job separately - I think
it would be a good idea if each job could ask for its JOB_PATH$, i.e. from
where it was started, and have its own DATA_PATH where it has to save its
data - hello TT ;-)
--> if you have DATA_USE WIN1_TOOLS_GERT_ you can use WIN1_ etc without
problems...
--> no more entries in device-open-list(max.16) by using DATA_USE etc.

3) NFS_USE, DEV_USE, XDEV, etc. only change the opening file-path to another
one, for the whole system.
--> but only the "physical device" is put in the device-open-list(max.16).
DEV_USE 1, FLP1_ : DIR DEV1_
DEV_USE 2, FLP1_ : DIR DEV2_ and you have only FLP1_ in the list.
--> use servers and you can use much more than 16 devices !!!
DIR N5_WIN1_
DIR N5_RAM7_
DIR N5_FLP3_ and you have only N5_ in the list.
If your QL has - for example - "NET 36", then 8 QLnet-servers with 16 devices
each can be attached and you can use 8 more devices attached directly to your
own QL, or 8 more servers (with 16 devices each) using SERnet or MIDInet. Do
you really need more?

In my opinion the "device-open-list(max.16)" is used to manage the slave
blocks.

Gérard Plavec - GPlavec@aol.com


From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] New facilities

GPlavec@aol.com wrote:
> 
> --> if you had FLP1_ and FLP2_ in the device-open-list(max.16), these are
> automatically changed to NIX1_ and NIX2_ by the system !!! (I just tried
> it.) No disadvantage in this case.
>
Of course, that's because the name of the device is changed, the
pointers still point to the same places :-)
>
> 2) DATA_USE, PROG_USE, DEST_USE, DDOWN, DUP, etc only fixes default drives
> (unfortunately for the whole system and not for each job separately - I think
> it would be a good idea, if each job could ask for his JOB_PATH$: from where
> he was started, and have an own DATA_PATH where he have to save his data -
> hello TT ;-)
>
I agree completely. In fact, there should be a way to define
inheritance.
For instance, when job 1 starts job 2, job 2 inherits the defaults from
job 1 if no other parameters are given when job 2 is started. This means
that the defaults of job 1 are copied to job 2, not that job 2 literally
uses the defaults of job 1, please note.
Other parameters should enable inheriting global defaults on job start
(very useful for starting jobs on remote machines) and also, not
inheriting, but specifying the defaults. Of course, each job should be
able to read and set its own defaults, whatever they are.
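The copy-versus-share distinction stressed above is easy to pin down in code. A hypothetical Python sketch (the function name and the default keys are invented for illustration) of copy-on-start inheritance:

```python
# Hypothetical sketch of default-directory inheritance: a child job gets
# a COPY of its parent's defaults (so later changes in either job do not
# affect the other), unless explicit defaults are given at start.

def start_job(parent_defaults, explicit=None):
    # Copy, don't share: this is the distinction stressed above.
    return dict(explicit) if explicit is not None else dict(parent_defaults)

job1 = {'DATA': 'win1_tools_', 'PROG': 'win1_prog_'}
job2 = start_job(job1)              # inherits a copy of job 1's defaults
job2['DATA'] = 'ram1_'              # job 2 changes its own default...
unchanged = job1['DATA']            # ...job 1's default is untouched
job3 = start_job(job1, explicit={'DATA': 'flp1_', 'PROG': 'flp1_'})
```

A shared-reference implementation (returning `parent_defaults` itself) would make job 2's change leak back into job 1, which is exactly what the mail warns against.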

> --> if you have DATA_USE WIN1_TOOLS_GERT_ you can use WIN1_ etc without
> problems...
> --> no more entries in device-open-list(max.16) by using DATA_USE etc.
 
> 3) NFS_USE, DEV_USE, XDEV, etc? only change the opening file-path to another
> one, for the whole system.
> --> but only the "physical device" is put in the device-open-list(max.16).

Yes, you have pointed that out and I agree that I was in error - but
that is in another mail.

> If your QL has - for example - "NET 36", then 8 QLnet-servers with 16 devices
> each can be attached and you can use 8 more devices attached directly to your
> own QL, or 8 more servers (with 16 devices each) using SERnet or MIDInet. Do
> you really need more?

No, we need more on the single machine. Of course, this would not be a
problem if the networked 'slaves' were on a fast network, and they were
cheap and they could be configured remotely. As it is, what I'm
proposing is almost like some kind of virtual networking, but some
connections can also be physical. Because it's a unified model of
connecting and defining devices, whether the actual devices and
connections are virtual or physical doesn't really matter on the
application end - i.e. the system is really IO independent.
 
> In my opignon the "device-open-list(max.16)" is used to manage the slave
> blocks

Unless you want to use your own slaving algorithms and tables.

Nasta


From: <GPlavec@aol.com>
Subject: Re: [ql-users] New facilities

<< > If your QL has - for example - "NET 36", then 8 QLnet-servers with 16
devices
 > each can be attached and you can use 8 more devices attached directly to
your
 > own QL, or 8 more servers (with 16 devices each) using SERnet or MIDInet.
Do
 > you really need more?
 
 No, we need more on the single machine. >>

I do not think so. As I said in another mail, 16 is only the limit of
different physical "slave block"-devices on which you have channels open at
the same time. What we (perhaps) really need is a background job making
DEL_DEFB automatically, when the list reaches 14, 15 or 16 items. I think
programming that would not be a real problem.
--> But only people having frequent problems with this limit might start such
a job (not me).
--> People using more than 8 harddisks better get a RAID-driven system (all
harddisks are driven as only one)

<< > In my opinion the "device-open-list(max.16)" is used to manage the slave
blocks

 Unless you want to use your own slaving algorithms and tables. >>

--> Before you realize drivers like "URL", "FTP", "NFS" or "TCP/IP" or others
you have to decide, if you want to use the QL-slave-block-system (then of
course, but only then, you have the 16-device limitation), but if you prefer
to use your own slaving algorithms and tables, or nothing like that, then you
must program and use a non-directory driver (like CON, SER, PAR, MEM, PIPE,
etc.) and you are free to do whatever you want, with no limitation given by
the OS.

<< Of course, this would not be a problem if the networked 'slaves' were on a
fast network, and they were cheap and they could be configured remotely. >>

Then I think USB will be better than Ethernet...

<< As it is, what I'm proposing is almost like some kind of virtual
networking, but some connections can also be physical. Because it's an unified
model of connecting and defining devices, weather the actual devices and
connections are virtual or physical doesn't really matter on the application
end - i.e. the system is really IO independent. >>

At the moment I still cannot see anything concrete in this. Let us make
an example:
We assume a USB with keyboard, mouse, ZIP, Syquest, soundboard, printer,
monitor, micro and speakers, scanner, more QLs but also PCs, Macs and an
answering voice-modem with FAX and connection to an internet provider...

Maybe there are simpler examples ;-))

I think we first will need some slave-block-drivers for ZIP and Syquest,
floppies and harddisks on PCs and Macs (maybe with a TCP/IP server or NETBEUI
for PCs and/or MacLAN on Macs) and we probably will use a rewritten Nx_-driver
for the QLs (with FSERVE started).
Then we need some "normal" drivers for keyboard, mouse, printer, scanner and
the modem. I don't yet know how monitor, micro and speakers would be driven,
but probably like voice-modems.
And we need some drivers for FAX, voice-answering and internet through the
modem channel...

The problem is, we have only one hardware IO port and a lot of very different
devices and protocols, and no coprocessor (with an FSERVE) to help manage all
that.
The modem seems to be particularly complicated to realize, because for
internet you need a lot of channels but FAX and VOICE have to be exclusive.

The simplest way should be to have QPC or QXL on a PC with USB and all the
devices and drivers attached on it, then the QL connected with SERnet and
using Sx_...

But why can't QPC or QXL - for example - use a scanner or a video digitizer
attached to the PC? Nor a MIDI interface, a soundboard or a simple
JOYSTICK...

The only interest in USB on QLs would probably be to have a cheap pretty fast
network between QLs.

Gérard Plavec - GPlavec@aol.com


From: Richard Zidlicky <rdzidlic@cip.informatik.uni-erlangen.de>
Subject: Re: [ql-users] New facilities

> << > > 2) DATA_USE, PROG_USE, DEST_USE, DDOWN, DUP, etc only fixes default
>  > > drives (unfortunately for the whole system and not for each job
>  > > separately - I think it would be a good idea, if each job could ask
>  > > for its JOB_PATH$, i.e. from where it was started, and have its own
>  > > DATA_PATH where it has to save its data - hello TT ;-)
>  > >
>  > I agree completely. In fact, there should be a way to define
>  > inheritance.
>  > For instance, when job 1 starts job 2, job 2 inherits the defaults from
>  > job 1 if no other parameters are given when job 2 is started. This means
>  > that the defaults of job 1 are copied to job 2, not that job 2 literally
>  > uses the defaults of job 1, please note.
>
>  fortunately this is what c68 progs do by default. AFAIK c68 does
>  pass the environment on the stack (extended TK2 'ex' stack format),
>  so any environment can be passed to child jobs - for me this
>  is good enough :-)  >>
>
> Unfortunately c68 is not a part of the OS. So neither SBasic nor
> assembler programs have these features.

The TK2 parameter passing for 'exe' programs is not part of the OS
either but many programs use it.
Conceptually the environment is not different from program parameters
so I think the c68 approach is fine. In the case that the environment
is not explicitly passed there is, however, considerable overhead in
getting the environment from all TK2 and Basic variables together - this
could be simplified using a Thing or a shared library.

Bye
Richard


23. SCREEN DRIVERS AND LOADING SCREENS

From: Thierry Godefroy <godefroy@imaginet.fr>
Subject: Re: [ql-users] HI- RESOLUTION SCREENS??

At 15:28 21/06/98 +0100, Roy Wood wrote:

>I am having a little problem in getting some of my Software to work
>with Aurora and QPC running SMSQ/E.  I appreciate that the
>software may require the screen display to be set to 512x256
>resolution with the command DISP_SIZE 512,256; but even with
>this, not all software will work correctly.
>
>The problem seems to depend on the circumstances and must have
>something to do with the way in which the higher resolution drivers
>are implemented....
>
>Try the following small program:
>
>1 MODE 8
>2 DISP_SIZE 512,256
>3 LBYTES flp1_title_scr,131072
>
> .../...
>
>LBYTES flp1_title_scr,SCR_BASE does not seem to help either!!

Arghhh !!!  This is VERY BAD practice !!!  You should NEVER poke (or
LBYTES) into the screen memory directly: you must do it through the
screen driver. Screen drivers are aware of the screen parameters:
base, width, height, line length in bytes, available MODEs, etc...
Your program is not, and even if you succeed in making it aware of
any screen geometry available right now, your program will still
fail with future drivers !

What you should do is to load your title_scr into the heap, poke
a "pic" header prefix just before it and then use the IOP.RSPW
(partial window restore) TRAP of PTR_GEN extended screen driver
(if you got QPtr or EasyPtr you may also use the equivalent S*BASIC
keywords).

The "pic" header is as follows:

ds.l      1            room for link (not to be saved in "pic" files)
dc.w      $4AFC        magic word
dc.w      Xsize        number of pixel per line
dc.w      Ysize        number of lines
dc.w      Xlen         number of bytes per line
dc.w      Mode         screen MODE (0, 2 (SMS2), 8, 12 (Thor XVI)...)
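The header layout above can be built programmatically. Here is a hypothetical Python sketch using the standard `struct` module (the function name is invented; the values shown are for a standard 512x256 MODE 4 screen, where each 512-pixel line takes 128 bytes):

```python
import struct

# Sketch of packing the "pic" header laid out above: one long reserved
# for the link, the $4AFC magic word, then four 16-bit words, all
# big-endian as on the 68000.

def pic_header(xsize, ysize, xlen, mode):
    return struct.pack('>lHHHHH',
                       0,           # ds.l 1 : room for link
                       0x4AFC,      # magic word
                       xsize,       # pixels per line
                       ysize,       # number of lines
                       xlen,        # bytes per line
                       mode)        # screen MODE

hdr = pic_header(512, 256, 128, 4)
```

In practice one would poke these 14 bytes into the heap just before the loaded screen data, as Thierry describes, before calling IOP.RSPW.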

QDOS/SMS forever !

Thierry (godefroy@imaginet.fr).


From: "PWitte" <pjwitte@knoware.nl>
Subject: Re: [ql-users] HI- RESOLUTION SCREENS??

hi y'all!

Roy Wood wrote:

>>
I am having a little problem in getting some of my Software to work
with Aurora and QPC running SMSQ/E.
[...]
The problem seems to depend on the circumstances and must have
something to do with the way in which the higher resolution drivers
are implemented....

Try the following small program:

1 MODE 8
2 DISP_SIZE 512,256
3 LBYTES flp1_title_scr,131072

now, this sometimes works on Aurora (normally if lines 1 and 2 are
swapped over), but sometimes only draws every other line of the
display (the intermediate lines are drawn at the right hand side of
the display as can be seen if you use the command DISP_SIZE
600,480 afterwards).
[...]
>>

No, the above doesn't work, due to differences in the memory layout. You
need something like SCR_LBYTES to achieve it.

Usage:

er = SCR_LBYTES(#channel; xpos, ypos, <filename>)
(Tested on QXL2 & QPC. Modified for email (1080, 1085))
<- - - - - - - - - - - - - - - - - - - - - - - - - - ->

1000 REMark SCR_LBYTES Version M.01
1010 REMark      PWitte 1998
1020 REMark  Use/Abuse at own risk!
1030 REMark
1040 REMark   SBASIC, MODE 4 only
1050 :
1060 DEFine FuNction SCR_LBYTES(wc, xpix, ypix, fnm$)
1070 LOCal ad, ch, fl, sl, sx, sy
1080 IF (xpix + 512) > SCR_XLIM(#wc):RETurn -4
1085 IF (ypix + 256) > SCR_YLIM(#wc):RETurn -4
1090 ch = FTEST(fnm$):IF ch < 0:RETurn ch
1100 fl = FLEN(\fnm$):IF fl = 0 OR fl > 2^15:RETurn -15
1110 ad = ALCHP(fl):IF ad = 0:ad = -3
1120 IF ad < 0:RETurn ad
1130 LBYTES fnm$,ad
1140 sx = (xpix DIV 8) * 2
1150 sy = SCR_BASE(#wc) + (ypix * SCR_LLEN(#wc))
1160 FOR sl = 0 TO fl - 128 STEP 128
1170  POKE$ sy + sx, PEEK$(ad + sl; 128)
1180  sy = sy + SCR_LLEN(#wc)
1190 END FOR sl
1200 RECHP ad:RETurn 0
1210 END DEFine
1220 :

Better convert all scr-files to the PI format and modify the above program,
for example, to cope with that.

          Per
--
pjwitte@knoware.nl

From: "PWitte" <pjwitte@knoware.nl>
Subject: Re: [ql-users] HI- RESOLUTION SCREENS??

On the topic of SCR_LBYTES, I've included an "exerciser":

(Tested on QXL2 & QPC. Modified for email (1080, 1085))

<- - - - - - - - - - - - - - -  - - - - - - - - - - - - - - ->
1 REMark SCR_LBYTES tester
2 REMark Usage : EX <dev> ScrLbytes_bas
3 REMark Prereq: SBASIC, (PI, QMENU V7.04), MODE 4
4 :
100 JOB_NAME 'ScrLbytes'
110 :
120 bc=FOPEN("con"):ERT bc
130 cx=SCR_XLIM(#bc)/2
140 cy=SCR_YLIM(#bc)/2
150 scrfnm$='win2_qfax_eg_Worm_scr'
160 scrdir$='win2_qfax_eg_' :rem <- not updated by FILE_SELECT
170 scrext$='_scr'          :rem <-  as expected
180 er = 0
190 :
200 REPeat main
210  OUTL#bc;500,240,cx-250,cy-120,6,6:CLS#bc
220  scrfnm$=FILE_SELECT$('ScrLbytes',scrfnm$,scrdir$,scrext$)
230  IF scrfnm$='':QUIT
240  OUTL#bc;SCR_XLIM(#bc),SCR_YLIM(#bc),0,0:CLS#bc
250  er = SCR_LBYTES(#bc;cx-256,cy-128,scrfnm$)
260  DoOrDie er
270  BEEP 1000,2:PAUSE#bc
280 END REPeat main
290 :
300 DEFine PROCedure DoOrDie(er)
310 LOCal re
320 IF er>=0:RETurn
330 re = FILE_ERROR(er,,cx-100,cy-30,1)
340 SELect ON re
350  =-1:REPORT_ERROR er,cx-90,cy-30,1:QUIT
360  =0 :QUIT
370  =1 :NEXT main
380 END SELect
390 END DEFine
400 :
1000 REMark SCR_LBYTES Version M.01
1010 REMark      PWitte 1998
1020 REMark  Use/Abuse at own risk!
1030 REMark
1040 REMark     SBASIC, MODE 4/8
1050 :
1060 DEFine FuNction SCR_LBYTES(wc, xpix, ypix, fnm$)
1070 LOCal ad, ch, fl, sl, sx, sy
1080 IF (xpix + 512) > SCR_XLIM(#wc):RETurn -4
1085 IF (ypix + 256) > SCR_YLIM(#wc):RETurn -4
1090 ch = FTEST(fnm$):IF ch < 0:RETurn ch
1100 fl = FLEN(\fnm$):IF fl = 0 OR fl > 2^15:RETurn -15
1110 ad = ALCHP(fl):IF ad = 0:ad = -3
1120 IF ad < 0:RETurn ad
1130 LBYTES fnm$,ad
1140 sx = (xpix DIV 8) * 2
1150 sy = SCR_BASE(#wc) + (ypix * SCR_LLEN(#wc))
1160 FOR sl = 0 TO fl - 128 STEP 128
1170  POKE$ sy + sx, PEEK$(ad + sl; 128)
1180  sy = sy + SCR_LLEN(#wc)
1190 END FOR sl
1200 RECHP ad:RETurn 0
1210 END DEFine
1220 :
<- - - - - - - - - - - - - - -  - - - - - - - - - - - - - - ->

          Per
--
pjwitte@knoware.nl


From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] HI- RESOLUTION SCREENS??

Roy Wood wrote:
> 
>>I am having a little problem in getting some of my Software to work
>>with Aurora and QPC running SMSQ/E.
>
OK, just to get to the bottom of this.

The Aurora will gladly emulate the old screen if it's set up in mode 4,
any resolution. However, attempts to use the old screen and the new one
at the same time will have problems, because this is not emulation any
more. The Aurora was in fact designed to try to cope with this, but the
way the SGC works makes it impossible to implement copying of the old
screen area to the new as well as the new to the old. Only the first
works, which is necessary for emulation.

The reason for this is, the SGC tracks any access to the old screen
area. If one occurs, the data stored into the screen will also be stored
in the SGC memory which is located at the same address. Storing any
value in old screen memory automatically copies it into the propper
place in the new. hi-res screen memory on the Aurora, so the user can
view it, i.e. old screen compatibility is maintained.

When bytes are read from the old screen memory addresses, due to the way
SGC shadows them (that's the part where the bytes get written both to
the screen and the SGC RAM as explained above), it actually does not
read from the screen memory at all, but from its copy in the SGC RAM.
It does this because reading the copy is about 12 times faster than
reading the actual screen memory.

When something is written to the new screen memory in the area that
corresponds to where the old screen would appear (top lefthand 512x256
pixel area), the SGC doesn't know about this and does not make a copy in
it's RAM. Hence, when you try to read the old screen area, the changes
done by writing the new screen area will not be there - because the SGC
does not physically read them.

It is impossible to make the SGC actually read the old screen area, as
far as I know. If it could, the proper state would be found there.

Because of this, an application that does direct screen access (read:
games) has to access the screen either only through the new, or through
the old screen area, not both of them at once.

Nasta
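Nasta's explanation of why old-address reads go stale can be captured in a small model. This is a hypothetical Python sketch (class and method names invented; addresses stand in for real screen memory): writes through the old screen addresses update both the screen and the fast shadow copy, reads through the old addresses come only from the shadow, and writes through the new screen area bypass the shadow entirely.

```python
# Hypothetical model of the SGC shadowing described above. The stale
# read at the end is exactly the "every other line" corruption symptom:
# old-address reads never see writes made through the new screen area.

class SGC:
    def __init__(self):
        self.screen = {}            # the real (slow) screen memory
        self.shadow = {}            # fast copy in SGC RAM

    def write_old(self, addr, value):
        self.screen[addr] = value
        self.shadow[addr] = value   # tracked: copied into SGC RAM too

    def read_old(self, addr):
        return self.shadow.get(addr)  # ~12x faster, never reads the screen

    def write_new(self, addr, value):
        self.screen[addr] = value   # untracked: the shadow is NOT updated

sgc = SGC()
sgc.write_old(0, 'A')               # visible both ways
sgc.write_new(0, 'B')               # change made via the new screen area
stale = sgc.read_old(0)             # still 'A': the stale shadow copy
```

Hence the rule at the end of the mail: a program must access the screen either only through the old area or only through the new, never both at once.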


24.  NEW FILING SYSTEM

From: "PWitte" <pjwitte@knoware.nl>
Subject: Re: [ql-users] New facilities

The simplest and most transparent way, in my mind, to enhance the old QL
filing system would be to abandon the old 36 character limit, by storing
only the relevant section of a pathname/filename in the directory. That
way we could have a potentially unlimited directory depth, provided
_each name section_ were no longer than 36 chars. This would upset most
file-manipulation programs like QRAM, QPAC2, Cuehell, and many of the
older ones, plus countless home-grown utils (sniff') But most of these
are alive and kicking, and would no doubt be upgraded by their
designers. However, almost all other programs should remain unaffected!
The directory mechanism itself could be a DEVICE; 'DIR' or 'CAT' (or
even more than one device, eg 'ROOT', 'CD' (Current Directory), (the
reasons for this having evolved out of these on-line discussions). That
way you avoid having any absolute pointers (thanks P-E!) by using the
channel ID as a handle instead. The sort of functionality I could
imagine would go (here in S*BASIC):

dc = FOP_DIR               :rem defaults to home dir for this BASIC
CD[#dc;] myfiles           :rem default changed! (note [] = optional)
VIEW readme                :rem transparent; works as before
EX myjob\'win1_temp_'      :rem start myjob with given home directory
                           :rem (see below, and earlier mails)

Unless you specify otherwise, the new job gets its own home directory
channel, quite separate from the owner's. Any changes it makes do not
affect the owner job:

EX (all the old specs!)    :rem set up job with a COPY of our home dir
EX ()\                     :rem set up job using our ACTUAL home dir
EX ()\''                   :rem set up job old-style (for tricky progs)
EX ()\'win1_myjob_dir_'    :rem you got it!

Instead of my previous suggestion of putting the whole default directory
name on to the job's stack (not suitable for long filenames), the
channel ID is passed to the job (in a register?). From there we can get
its name and other details. Of course the name might not be so
convenient to manipulate, but then the whole pathname would never really
be needed: To open a given file the system would follow the trail via
the directory device (no point in storing the names more than once -
keep them where they are: in the (buffered) directory files!) - each
containing only it's own (max 36 char) section of the path name, until
it arrives at target. The whole file name need never be all in one
place! I/O access must be modified to take its default channel from the
channel ID rather than the name, eg

sdir = PROGD               :rem get default directory as channel ID
OPEN#3;[#sdir!]filename_ext:rem open filename_ext prefixing the
                           :rem calling job's default directory
DIR#2;#sdir                :rem do a DIR of given directory
PRINT FNAME$(#sdir)        :rem works as before

IOA.OPEN and similar could take a new parameter:

Call parameter  : a2 = home dir channel ID
Return parameter: a2   preserved

in addition, a flag should be set, eg in the open-key, to validate the
new parameter.

Why a2 -?! I hear you howl in dismay: For exactly the same reason as I
gave when suggesting a0 before :) Also, if you look at register usage,
eg when calling Special jobs - where else would you put it? As for the
rest, since we are talking about a new facility, with system-wide
utility and implications, we should not be too dismayed at a change in
the familiar landscape. Anyway, any effects on existing programs could
probably be minimised. It would be up to the OS to figure out an
efficient and transparent way of doing it, with defaults that emulate
the old way of working, eg PROGD$/DATAD$ could remain unaffected. The
old-style directories would simply remain an "officially recognised"
subdirectory under the new system. Better: there would be some kind of
aliasing system (for QDOS, DOS, VFAT, ISO9660 :) that would allow
old-style names to be mapped onto new-style.

ASSIGN 'QWA,0 > SCSI,0';params :rem Connect QWA access layer to SCSI
                               :rem ID 0 physical layer, as device QWA0:
ALIAS 'win1_','QWA0:\olddir\'  :rem Use existing directory olddir as
                               :rem old system root

Or better: (This idea lifted from Unix (somewhat bastardised here)):

ASSIGN '\devices\System\' TO 'QWA',0 TO 'SCSI',0;option_list

Our QWA0 disc would then be addressed as \devices\System. The "real" name
would then be \devices\System\olddir\boot. Old style programs would
merely see win1_boot as usual by

ALIAS 'win1_','\devices\System\olddir\'
PROG_USE win1_

Once the ALIAS code was written it would not be too much more effort to
use the same mechanisms to automatically 'alias' "illegal" filenames
from foreign systems to ones acceptable to our system. Again, a
ready-made implementation is not needed - nice as it would be, of course
:) - merely the "hooks" so others could "easily" add new file naming
conventions to the existing ones.
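The ALIAS idea might be modelled as a simple prefix rewrite. This is a hypothetical sketch, not an implementation: the table, the prefixes and the target path are taken from the examples above.

```python
# Sketch of ALIAS: map an old-style prefix like 'win1_' onto a new-style
# path so legacy names keep working. The paths are the examples above.

aliases = {}

def alias(old_prefix, new_prefix):
    aliases[old_prefix] = new_prefix

def resolve(name):
    """Rewrite a legacy name if an alias prefix matches, else pass through."""
    for old, new in aliases.items():
        if name.startswith(old):
            return new + name[len(old):]
    return name

alias('win1_', '\\devices\\System\\olddir\\')
print(resolve('win1_boot'))    # -> \devices\System\olddir\boot
print(resolve('ser1'))         # no alias: unchanged
```

The same hook could carry the "illegal foreign filename" mapping mentioned above: a resolver per naming convention, tried in turn.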

The directory mechanism might be managed via new TRAP #2 calls which
would support operations such as:

IOD.CD     change directory
IOD.SET    set directory to new path (only up to calling job's home dir)
IOD.NEXT   return handle/pointer to next file header in a directory (to
           traverse the directory tree - various navigation options :)
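An IOD.NEXT-style traversal could hand back one file header per call, letting a caller walk the tree without ever building full path strings. A hypothetical sketch as a Python generator (the tree structure and header values are invented; a real driver would return pointers to file headers):

```python
# Sketch of IOD.NEXT: yield one (path, header) pair per call, depth-first,
# like repeatedly calling the trap with a traversal handle. Invented data.

def iod_next(tree, path=''):
    for name, entry in sorted(tree.items()):
        full = path + '_' + name if path else name
        if isinstance(entry, dict):          # a subdirectory
            yield full, 'DIR'
            yield from iod_next(entry, full)
        else:
            yield full, entry                # a leaf file 'header'

tree = {'bas': {'boot': 'hdr1'}, 'readme': 'hdr2'}
for full, hdr in iod_next(tree):
    print(full, hdr)
```

The generator plays the role of the handle: each resumption is one IOD.NEXT call, and the recursion depth is the position in the directory tree.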

Odds'n ends:
------------
Shortcuts/Virtual links - ie one file, many names:

Current implementations (in other OSes) often do:

    name:  Fred.lnk   Oscar.doc  Lary.lnk
    file:  link --->  original  <--- link

    If Oscar is deleted Fred and Lary go too (or are left hanging)

Better might be:

    Fred.doc --->  original.sys  <--- Lary.doc
                       ^
                       |
                    Oscar.doc

    Fred, Oscar & Lary may each have different file attributes, as each
    filename is an "original". (The file statistics being stored in our
    (currently useless) file header instead of in the directory file.)
    Only when the last link pointing to the file is deleted is the
    original itself deleted (warning given?)
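The proposed link scheme amounts to reference counting: every visible name is an "original" with its own attributes, all pointing at one hidden data file, which is freed only when the last name goes. A hypothetical sketch (all names invented, following the Fred/Oscar/Lary example):

```python
# Sketch of the link proposal: each filename carries its own attributes
# and points at a shared data file with a reference count.

files = {'original.sys': {'data': b'...', 'refs': 0}}
links = {}

def make_link(name, target, attrs=None):
    links[name] = {'target': target, 'attrs': attrs or {}}
    files[target]['refs'] += 1

def delete_link(name):
    target = links.pop(name)['target']
    files[target]['refs'] -= 1
    if files[target]['refs'] == 0:
        del files[target]          # last name gone: delete the original

for n in ('Fred.doc', 'Oscar.doc', 'Lary.doc'):
    make_link(n, 'original.sys')

delete_link('Fred.doc')
delete_link('Oscar.doc')
print('original.sys' in files)     # True: Lary.doc still points at it
delete_link('Lary.doc')
print('original.sys' in files)     # False: last link deleted the data
```

This is essentially what POSIX hard links do, minus the per-link attributes; storing the statistics in the file header rather than the directory entry is what makes the per-link attributes possible.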


Trash/Undelete:
On DELETE flag file as deleted. New files do not overwrite deleted files
until virgin disk space is exhausted; only then are deleted files
overwritten - oldest first.

REMark key: | = OR, [] = optional
UNDEL filename | pattern [,from_date [,to_date]]

also:

REMark Unrecoverable file delete (scribbles random pattern over FABs)
SHRED filename!

..for those who still wish to tempt fate.
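The DELETE-flag idea can be sketched as follows; a hypothetical model only, where a dict stands in for the disk and a counter provides the "oldest first" ordering:

```python
# Sketch of trash/undelete: DELETE only flags the file; flagged slots are
# really overwritten oldest-first, and UNDEL clears the flag. Invented model.
import itertools

_tick = itertools.count()
disk = {}          # name -> {'data': ..., 'deleted': tick or None}

def delete(name):
    disk[name]['deleted'] = next(_tick)     # flag it, don't erase it

def undel(name):
    if disk[name]['deleted'] is not None:
        disk[name]['deleted'] = None

def reclaim_one():
    """When virgin space runs out, the oldest deleted file goes first."""
    victims = [(v['deleted'], k) for k, v in disk.items()
               if v['deleted'] is not None]
    if victims:
        del disk[min(victims)[1]]

disk['a'] = {'data': 1, 'deleted': None}
disk['b'] = {'data': 2, 'deleted': None}
delete('a'); delete('b')
undel('b')                 # 'b' comes back untouched
reclaim_one()              # only 'a' is still flagged, so it goes
print(sorted(disk))        # -> ['b']
```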

Multi SMSQ/E?
File locking mechanisms should write their flags out to disk, or find
some way of telling their hosts what's going on, to enable multiple
copies of SMSQ/E to access the same resources simultaneously: SMSQ/E
should be made SMSQ/E-aware!

Buffered pipe?
A fifth OPEN key for "WORM" access (Write: One, Read: Many)


Jobs
----
Changes to the file system will, as we've already seen, affect Jobs - not
the man of Apple fame, but our little work-horses. While we're at it, it
would be worth considering the following:

Jobs should be given the option to quit gracefully. This could be
achieved in different ways, a suggestion only: (Refer to the Bible,
sections 3.1 Jobs, and 3.5 Special Programs for base reference)

<- - - - - - - - - - - - - - - - -  - - - - - - - - - - - - - - - - - ->

Job State at Start-up:
---------------------

Lo memory addresses
^
|
            -------------------- - - - - start Job Header ........ (BAS)
  (a6)  n   jmp.l start     .6 goto start of code
        n   dc.w $4AFB      .w magic
        n   dc.w 3,'JOB '   .s job's name
        s   dc.w $4AFB      .w validates rest of extended header (or ? )
        s   dc.l sbasic     => sbasic m/c pre-processing code (or 0)
        *   dc.l quit0-*    -> shutdown as best you can! code (or 0)
        *   dc.l quit1-*    -> shutdown gracefully! code (or 0)
        *   dc.l event-*    -> event handler (or 0)
        *   dc.l query-*    -> query job (or 0)

(key: n = old style normal, s = old style special, * = new)

*
start
           --------------------- - - - - start of Code Area ...... (SCA)
            on entry:
            n: (a6,a4)->DAT, (a6,a5)->TOP (a6)-> BAS, (a7)->JSP
            s: as above
            *: as above + a2 = home dir channel ID
            all other registers are zeroed (as before)

            # # #  job code # # #
            --------------------- - - - - end of Code Area ....... (ECA)

            
  (a6,a4)   ===================== - - - - bottom of Data Area .... (DAT)
            $ $ $  job data $ $ $
  (a7)      --------------------- - - - - Job's stack pointer .... (JSP)
            .w number of channel IDs
            .l channel ID no 1
            .l.l.l..
            .l channel ID no n
            .s command string 
  (a6,a5)   ===================== - - - - top of Job Stack ....... (TOP)

|
v
Hi memory addresses

<- - - - - - - - - - - - - - - - -  - - - - - - - - - - - - - - - - - ->

Quit gracefully:
A job that wanted to implement it could supply code to be called when
the job is to be shut down:

* quit0: Quit as best you can! Code tries to save data and quit without
user interaction. If call fails or is not implemented by the job
sms.frjb is called instead.
* quit1: Quit gracefully! Exits whatever a job is currently doing and
offers the user the option to save any unsaved information (if required)
before exiting. If call fails or is not implemented by the job sms.frjb
is called instead. Should have a timeout.
* event: This is a job's personal interrupt server :) When an event
occurs that affects this job or all jobs (whether it is waiting for an
event or not) the job's current status is saved on the stack and the job
enters the appropriate event handling code - in user or supervisor mode,
depending on the event (eg, please resize your window to WHaXY, or Go to
sleep!)
* query: A job may wish to provide some information about itself and its
internal status. The nature and structure of the information and how it
is to be used is up to the job itself (and should be documented if
third parties are to use it). Some standard queries, though, could be
pre-defined (perhaps using system utility code to retrieve it):

#0: Version?
#1: Default directory?
### (add your own suggestions)?
#n: Unsaved data?

Just an idea...
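The quit0/quit1 fallback chain described above can be sketched like this. A hypothetical model only: dicts stand in for job headers, and the hook names mirror the proposal, with forced removal (sms.frjb) as the last resort:

```python
# Sketch of the graceful-quit dispatch: try quit1 (graceful), fall back to
# quit0 (best effort), and finally to forced sms.frjb-style removal.

def shutdown(job):
    for hook in ('quit1', 'quit0'):      # graceful first, then best-effort
        fn = job.get(hook)
        if fn:
            try:
                fn()
                return hook              # which path was taken
            except Exception:
                pass                     # 'if call fails ... sms.frjb'
    return 'sms.frjb'                    # forced removal, as now

polite = {'quit1': lambda: None}                       # implements quit1
stubborn = {'quit1': lambda: 1 / 0, 'quit0': lambda: None}  # quit1 fails
legacy = {}                                            # old-style job

print(shutdown(polite))     # -> quit1
print(shutdown(stubborn))   # -> quit0 (quit1 raised)
print(shutdown(legacy))     # -> sms.frjb
```

The timeout mentioned for quit1 is omitted here; in a real system the fallback would also fire when the graceful handler fails to return in time.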


Implication scenario check list for jobs, for those too tired to read
all the above:

1) old EX running old job     => + no problem, as before
2) old EX running new job     => - may not work properly
3) new EX running old job     => + no problem, transparent
4) new EX running new job     => + no problem, better!
5) old EX running old special => + no problem, as before
6) old EX running new special => - may not work properly
7) new EX running old special => ! gotcha! Maybe $4AFB -> $4AFC -?! :\
8) new EX running new special => + no problem, better!

In other words: Anyone wanting to run NEW programs would have to
upgrade EX also. (The #7 issue above can be worked around - this is
merely a sketch!) Apart from that, everything should work better!

      Per
--
pjwitte@knoware.nl


25. NEW FACILITIES

From: Richard Zidlicky <rdzidlic@cip.informatik.uni-erlangen.de>
Subject: Re: [ql-users] New facilities

Hello,

> Looking briefly through your comments (so far) I believe we lost contact at
> around the place I suggested the directory services might be a device. You
> thought it was
> 
> RZ>a good idea, much easier than modifying
> RZ>trap#2 calls.
> 
> But what actually appears to have happened was that the concept suggested a
> totally different approach to you than it did to me. That would not be a bad
> thing at all! (Provided we eventually did understand each other :)

I must admit I was intentionally interpreting your text the way I liked it.
Perhaps it is possible to think of it in more abstract terms where the 
differences don't yet matter?

> My hypothesis is that basically, we all want more or less the same
> end-result - or
> only a limited number different end-results. The problem is that this gets
> multiplied by all the different angles we view the issue from: the "general
> user", whoever s/he is (compatible, easy-to-use); the professional
> (hassle-free) programmer (easy to work with, bug-free, well documented), who
> may have clients (minimal disruption, cost-effective, powerful) to maintain;
> the Tinkerer, who once traced through a CTRL+ALT+7 back in the late
> eighties, and
> never quite recovered.. (2^32+1 options). Your angle is different if your
> world is C (register usage, parameter passing, the availability of Jolt!
> cola), or if you merely abuse BASIC on Saturday nights (FOR i=0 TO10:PRINT
> i:NEXT i). Whether you are an ACERTY banger, or a QUERTY (I can NEVER
> remember how to spell them :) hunt'n peck type-o, you will have a different
> angle.
> 
> :| Those different angles need to be seen! We should have as complete
> a PICTURE of the situation as possible.

yes, but I am in favor of splitting it into smaller pieces. There is
so much to do and we can't do everything at once.
Perhaps it helps a bit if we can agree what the problems are.

The main problems as I see it are:

	1. environment passing. Some see it as problem while I claim that
	   c68 has an adequate and working mechanism that can be extended
	   to fulfill most requirements.

	2. filesystem. Many subproblems here:
	
		2.1 too short filenames
		2.2 ambiguity of '_', strange subdirectory concept
		2.3 implementation constraints of QDOS. Apart from the
		    problems listed here, this appears more a psychological 
		    problem though.
		2.4 nonexistent separation of filesystem and raw-device
		    concept. This makes development of new drivers almost
		    a superhuman activity.
		2.5 missing loop devices - or Nasta's metadevices as far as
		    I understand them
		2.6 missing soft and hard links etcetcetcetc

	3. Home/Default etc directories. I think this could be reduced
	   to (1) if (2.1) is solved.  

	4. missing TCP/IP. Someone has to implement this, even doing it for
	   UQLX is nontrivial.

	5. GUI. I find PE pretty good, much more intuitive than other GUIs I use
	   (or don't). But it is a bit difficult to program, too difficult for me.

What are the most pressing problems? What did I forget?

For me 2.1 and 2.4-5 are the most important problems because they involve
defining new concepts. Almost everything else can be programmed around
somehow, provided you find someone willing to do the programming. 
	

Bye
Richard


From: <GPlavec@aol.com>
Subject: Re: [ql-users] New facilities

ZN: << Another consideration, of course, is the cost of moving to a different
system. The simplest way with a special dir separator would be to have a new
dir set up and the old files copied to it. They would then lose the subdir
structure, but no programs would have to be modified at all. The system you
propose is much more elegant because no changes at all would have to be made.
Some programs like QPAC2 Files, Cueshell etc would have to be made aware of
the change (path not stored in directory). And there would be repercussions to
the IOA.CNAM trap (see below). However, the hidden cost might be what I
outlined above. Also, there is one more cost to do with the way a file is open
but that one cannot be avoided unless the path and file name stay together
(but then
 headers change and this is a MAJOR problem). >>

I don't think so, because the problem exists only if you create a new
directory using a part of the name of the path of a file you open (and perhaps
close) just at this time. Before and afterwards there is no problem:
"test_my_file" = "test_my_file", and as I already said in another e-mail, the
problem is already solved by the system we have.

ZN: << That problem is that in order to open a file, you have to scan
directories opening them down the directory tree, as specified in the path.
With a larger name length limit, this just means more directories might have
to be open. In practice, this almost invariably means it's highly desirable to
have them cached. IIRC you expressed a dislike for this, and I agree,
provisionally. You can always cache only reads (i.e. a write through cache
with check integrity on write).

??? excuse me, perhaps I am too narrow-minded to understand ?

ZN: << Yes, except for the problem of modifying headers as I outlined it
above. It does not exist if the directories are unambiguously delimited. I am
not saying this is a better system, just easier to implement. >>

Oh, you mean, if I want to change the name of a subdirectory while a file is
open ?
Yes, I agree... we have to be careful when we write this routine!

until now, when I wanted to change:
..._test_my_file
..._test_my_letter
to
..._text_my_file
..._text_my_letter
I used (seldom)
WREN "..._test_" to "..._text_"
if a file was open then I got the error: IN USE
and if "text" was there before as subdirectory: no problem, because QDOS/SMS*
don't make any difference.

As RENAME, as it currently stands, cannot be used to do this job (change the
name of a subdirectory), it might be rewritten (in which case, if any file is
open, an error must be returned) or this service simply omitted.
I do not know what UNIX or M$DOS do if you try to rename a subdirectory while
files are open, but I don't think the problem is easier if directories are
unambiguously delimited.

GP: > c) IMHO we have all we need for global defaults and nothing for job-
defaults (of course we yet have no) >
ZN: << Once you do have a choice between defaults, you have to have a way to
choose. The traditional (note the word I'm using) way is to have a special
character or character combination to handle this, just like DOS uses . for
home dir, .. for one level up. Of course, the down side is as I explained
above where you discuss how all characters are legal in filenames. >>

As I said, this is a generally used but absolutely unorthodox ugly way. I
would prefer
DATA_USE, DATAD$, PROG_USE, PROG$, DUP and DDOWN for global default (only EX
and EW from main SBASIC pass DATAD$ and PROG$ to the called job) and perhaps
HOME_USE, HOME$, PATH_USE, PATH$, JUP and JDOWN for local job default (all
other jobs and SBASICs pass HOME$ and PATH$ to their daughter jobs)

ZN: << Oh, yes. I forgot to mention that.
 Your solution to the name length almost implies default directory passing for
compatibility, because in order to use an old program unaware that names are
longer, in a deep subdir, means you have to automatically pass a default dir
(so it will get appended to the <=36 char name the old program uses), which
also implies either a special character (for 'implying' the default) or a
special case of EX. The alternative solution would be to alias a directory to
a device and use that device for compatibility. >>

I would solve this problem like that:

old program command: OPEN#6,"MDV1_letter"
this "letter" would be found in the new system - for example at:
win4_gert's.private.files_forgotten.very.old.stuff_old.ql.drives.copy_mdv.536/876_letter
then
DEV 1,"win4_gert's.private.files_forgotten.very.old.stuff_old.ql.drives.copy_mdv.536/876_"
or if win4_ is on QL#5 then
NFS_USE mdv,n5_win4_gert's.private.files_forgotten.very.old.stuff_old.ql.drives.copy_mdv.536/876_
will be a solution, provided that DEV and NFS_USE are adapted to the new
system (>36 characters)

Gérard Plavec - GPlavec@aol.com


26. FILE NAME PARSER

From: "PWitte" <pjwitte@knoware.nl>
Subject: Re: [ql-users] New facilities (Filename Parsing (YALL))

Joachim Van der Auwera writes (re how to tell the difference between a
filename and a directory):

JvdA>As I have mentioned before, this is not difficult.  Naturally, you will
have to open the file first (no other way to assure a valid name is passed).
This is provided for in syslib. Source code can be made available on
request.

Presumably in C? Well, here's an SBASIC one for illustration. (Somewhat
simplified, and updated from SuperBASIC. This version not for networks.)

1 CLS
2 PRINT,'(Simplified) Filename Parser'
3 REMark      PWitte 1998
4 PRINT,!!!!!'PD - No Warranties'!!!!!
5 :
6 dfnm$='win1_bas_util_fnm_ParseFnm_bas'
7 er=ParseFnm(dfnm$,ddev$,ddir$,dnm$,dext$)
8 PRINT\\'Fnm:'!dfnm$\\'Dev:'!ddev$\'Dir:'!ddir$\'Nme:'!dnm$\'Ext:'!dext$\'Err:'!er
9 STOP
10 :
32724 DEFine FuNction ParseFnm(f$,v$,d$,n$,e$)
32725 LOCal c,t,p%,i%
32726 REMark Split filename into components
32727 c=FOP_DIR(f$):IF c<0:RETurn c
32728 d$=FNAME$(#c):CLOSE#c
32729 IF LEN(d$) THEN
32730  p%=d$ INSTR f$:IF p%=0:RETurn -7
32731  d$=d$&'_'
32732 ELSE
32733  p%=('_' INSTR f$)+1
32734 END IF
32735 v$=REMV$(p%,LEN(f$),f$)
32736 IF LEN(v$)<3:RETurn -12
32737 IF p%+LEN(d$)=LEN(f$) THEN
32738  n$='':e$=''
32739 ELSE
32740  n$=REMV$(1,p%+LEN(d$)-1,f$)
32741  p%=0
32742  FOR i%=LEN(n$) TO 1 STEP -1
32743   IF n$(i%) INSTR '_.':p%=i%:EXIT i%
32744  END FOR i%
32745  IF p%=0 THEN
32746   e$=''
32747  ELSE
32748   e$=REMV$(0,p%-1,n$)
32749   n$=REMV$(p%,99,n$)
32750  END IF
32751 END IF
32752 RETurn 0
32753 END DEFine
32754 :
32755 DEFine FuNction REMV$(from%,to%,str$)
32756 IF from% < 2 THEN
32757  IF to% >= LEN(str$):RETurn ''
32758  RETurn str$(to% + 1 TO LEN(str$))
32759 END IF
32760 IF to% >= LEN(str$) THEN
32761  RETurn str$(1 TO from% - 1)
32762 ELSE
32763  RETurn str$(1 TO from% - 1) & str$(to% + 1 TO LEN(str$))
32764 END IF
32765 END DEFine
32766 :

       Per
--
pjwitte@knoware.nl


27. MORE FILE SYSTEM STUFF

From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] New facilities

Roy Wood wrote:
> 
> Will someone please give a coherent answer...to the following questions:
> 1: Why do you want to hide the path from the filename?
>
You are confusing what the user sees and how it is actually stored on
the disc under the current FS. The reason for the name length limit is
that the FS stores the path to the file into the file name slot in the
file header, together with the file name.
If you have a file 'file' in directory 'directory' then the directory
name in the directory header is 'directory' but the file name in the
file header is 'directory_file'. (An aside - a directory is just a
special type of file and it also has the same type of header as other
files).
The path to the file should tell the FS in which directory to find the
file, nothing more. For the above example, the file header should only
contain 'file'. An added bonus is that the _ used as directory
separators are not stored.
From the user standpoint, the file would still be referenced with or
without the path just as it is now, only the maximum length could be
more, except for Problem (1), see below.
>
> 2: What advantages does this give you ?
>
While maintaining as much compatibility as possible, the maximum name
length is more than the current 36 characters. Besides, the path has
really nothing to do with the file in the first place. If it had, then
no-one would use defaults - we'd just type in the whole name starting
with win1_etc_etc_etc every time, and, shouldn't then the device name be
there too?
>
> 3: Why do you want to use more than 36 characters in a name ?
>
NOT in a name, but in a path+name. I think there will be very few
complaints about a file name (not path+file as it currently is) being
limited to 36 characters.
What is being proposed is that each file or directory or subdirectory
has a name length limit of 36 characters, and that it is not cumulative
as it is now. Currently, although files/directories may be up to 36
characters in length, a path specification with a filename appended to
it is also limited to 36 characters. The only thing we really want to
change is this last bit.
There are several advantages:

1) With care a relatively obvious directory structure could be created
by the user - the whole point of a directory structure is to create a
hierarchy of files in a directory tree, with the directory names
associative of the directory contents.
One cannot expect filenames to abide by a particular user's 'rules' -
that user is not the only one, nor does his logic of naming necessarily
conform with someone else's, nor does that someone else (who just might
be a supplier of a program you want) have to cater for all of the users'
preferences, one user at a time. In fact, in the latter case the files
will probably correspond more to the needs of the program. So, either
the programmer has to make everything extremely configurable (remember,
most users don't even know how to write a QPAC boot!), or the user would
have to rename all the files to his liking (Again, most don't know how
to do a QPAC boot!), or, the user creates a directory for this
particular program, and moves the whole thing with its own structure
into this new directory. You tell me which is simpler.

2) Do you do any networking at all? Well, a network route (which is in
current systems rather simple - n1_ etc, but still 3 characters!) is
under SMS* a part of the filename (which is BTW also a bit inconsistent,
but on the right path). Keep in mind that we already have n1_, s1_ and
there will probably be a e1_ too, and if you want to use TCP/IP for
local networking (and there is no good reason not to!) who knows how
many. With SMS* working on so many different computers, we migh need a
couple of 'hops' from one type of the net to another - and we get to
encroach on the name length once again.
>
> I have never had a problem with any of this except when trying to
> access these massive ported 'C' programs which fall over if one file
> is in the wrong subdirectory. Do we need this and does the Amiga
> TCP/IP implementation that Frank Davis mentioned have the same
> type of file naming convention?
>
I'll get to this in a moment. Incidentally, the Amiga doesn't have a 36
character name length limit. Neither do any of the systems we claim SMS*
to be better than.

3) Offers better compatibility to other systems. This MUST be stressed
but also tempered. IMHO where it helps the general state of the
community and subtracts nothing from the simplicity and efficiency of
SMS*, compatibility with other systems is a VERY welcome bonus. We
cannot expect 'others' to conform to our rules - especially not if we
intend to challenge 'their' rules and maintain ours are better. WE are
not the ones who will judge this, but the people comparing our with
other systems. Now, before someone starts shouting treason and
blasphemy, this only means we give as little as we possibly can, and
gain as much as we possibly can. The 'massive ported C programs' and
'windows\something\something\even\more\' or
'http://www.whatever.com/~even.more.whatevers' is what you find outside
of this community and want it or not - giving in that small bit is FAR
cheaper than reinventing everything all over again to our own particular
liking; in fact the latter we CAN'T afford at all.
On one hand, people are clamoring 'we want what the PC can do too' and
on the other they keep whining that they don't want the consequences of
having what they want. Well, face it - you have to take the good with
the bad, Just like when you buy a steak - you try to get a good butcher
who will cut off the unwanted fat, so you get more meat in a pound of
what you buy. (My excuses to vegetarians).

To recapitulate:

WHAT IS PROPOSED
================

Removing the 'path' part of names out of the header, so that the 36
character name length limit is not cumulative any more. Files/dirs will
still have that limit, but paths+filenames will not.
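The difference between the cumulative and the proposed non-cumulative limit can be shown in a few lines. A hypothetical sketch, using '_' as the separator as in current names:

```python
# Sketch of the proposal: each path component may be up to 36 characters,
# but the joined path+filename may be as long as it likes.

MAX_NAME = 36   # the existing per-name limit, kept per component

def valid_path(path):
    """True if every component fits in 36 chars; total length is ignored."""
    return all(len(part) <= MAX_NAME for part in path.split('_') if part)

long_path = '_'.join(['a' * 36] * 10)    # 369 chars in total
print(valid_path(long_path))             # -> True under the proposal
print(valid_path('x' * 37))              # -> False: one component too long
```

Under the current FS the first example would be rejected (chopped at 36 characters); under the proposal only the second one is.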

THE PROBLEMS
============

1) Problem: The OS limits any string specified by the user as a
path+filename to 36 characters by chopping off the rest, if the device is
a 'directory' device. Normally, the limit is the length of a SMS* string
(32767 characters) minus some characters for the name of the device.
Remedy: either change the chopping off action, or use 'simple' devices
which do not chop anything off a name.
Consequence: Drivers will have to be modified. In the latter case of the
above remedy, old drivers could be left in place until new ones are made,
and would operate as usual.

2) Problem: The name in the file header will change. Some programs that
rely on this (file managers like QPAC2 files and similar) MAY (note!)
not work correctly. Same goes for some commands like DIR, WDIR, possibly
the default setting commands.
Remedy: Modifications will have to be done to said programs.
Consequence: Rather insignificant, except for the time=money utilised
to do this!

3) Problem: Paths might become very long, better management is required.
Immediate problem is compatibility.
Remedy: Loads of possibilities. One is implementing or extending
compatibility devices much like DEV or SUB or PTH. Arguably, there are
better ways that do this which also cater for other things, on which new
versions of DEV/SUB/PTH could be based.
Consequence: Known already with programs that don't understand
directories - the compatibility devices will have to be used, or
programs modified.

DEBATABLES
==========
1) directory separators.
The whole furore about this is not so much whether it's going to be UNIX
or M$ or whatever compatible because it is frankly, not a huge issue -
in porting programs this is probably a minor problem. The issue is that
it makes things simpler (but not decisively simpler) to code. There are
ways to implement the above proposal regardless of which particular
separator will become most favoured in the end.

2) Default directory inheriting/setup.
This is one of the solutions to problem (3). The main advantage of this
solution is that it caters for problem (3) and gets us another useful
feature too. But as this is another proposal in itself, it has its
problems too, which will have to be weighed up. The catch is, if we are
modifying parts of the OS, then it might be feasible and cost effective
to incorporate this too. Unfortunately, this also recurses to Debatable
(1) - special characters in names which imply setting or changing parts
of the default path.

ASIDES
======

All this has great relevance to the concept of layering of devices. We
have a problem with writing device drivers under SMS*. The idea of
layering is to be able to treat devices, drivers or parts of drivers as
black boxes and combine them into other drivers. In practice, one of the
ways a layered structure can be specified is much like specifying a path
or a network route - currently that approach would also restrict the
layering specification to 36 characters. Layering also applies to
drivers that we would need to implement our initial proposal, so there
is a small vicious circle there.

GAINS
=====
1) 36 character limit is no longer a problem. This reflects to better
directory naming flexibility, improved networking, improved
compatibility with other systems or concepts found in them that we wish
to integrate into our own system.
2) Compatibility is maintained as much as possible (in fact, with SOME
sacrifices to compatibility all this would probably already be
resolved, only we must allow only minimal compatibility to be sacrificed)
3) New features might be added - default directory handling, means to
build device layering into SMS* in the future, aliasing/linking,
hierarchical drivers, multiple file systems, all that becomes much
easier to do.

This is turning into a very long email surpassing even my own record but
that's because I'm just lumping answers to several letters together:

RW:
>
>>Try unzipping Lynx into a directory called lynx for example....
>>Not possible. The best you can manage is lx....
>
> I did and it worked. trouble was that LYNX didn't. This is an example
> of the problem. The original author wrote these long file names and
> nested directories
>
Because they sounded logical to him and his system didn't have
limitations like ours
>
> and the person who ported it left it the same
> without thinking that it was to run on a system that did not support
> them. This is the problem.
>
And it's going to remain a problem, and you have no right really
complaining about it because the porting in the first place was an act
of kindness. Just like conforming better to your particular way of
naming things would be, since Lynx does seem to work on some other
people's SMS* systems, the person who ported it made reasonable
assumptions and requirements on what the names should be considering he
does not get paid for it. Your only recourse is to kindly ask him
to change them if he wants, nothing more. And, I might add, while
considering that there might be a point in the future when stuff will
not be ported at all because it might not be worth the effort of making
it conform to our limitations.

RW:
>> BOOT
>> SMSQ ->
>> PROGRAMS ->
>> DOCUMENTS ->
>>
>> Things would be easy to find!

> Well I prefer them the other way  and I do not think they would be
> easier to find ! The Windows sub-directory structure goes 
> C:/PROGRAMS/Blah/more blah/even more blah
> so if I am looking for a file and I can't quite remember which
> directory it is in I have to follow that path down to the end and then
> retrace my steps back and try another subdirectory etc.

Which is exactly what you have to do on the QL too - except if you do
'tree' in QPAC2 files. Besides, this is no argument at all. You could
have created a windows directory structure just the same as you would on
the QL but you are not willing to make that kind of effort (and I don't
blame you). The point is, YOU COULD and the windows subdirectory is
created automatically. The problem with the windows directory is not the
way it is, but that windows discourages you completely from changing
it. Let's not confuse issues.
Further, 'Well I prefer it the other way' is no argument at all - or no
better argument than 'Well, I don't' and I don't. Just like other people
might have their own idea about all of this.
The fact of the matter is, because you need to cater for the needs of a
lot of different people, in the end you might find there is no way to go
but to implement the largest common denominator of all their reasonable
wants and needs.

> I tend these days to put individual things starting from the root
> because it is quicker.

I agree with you but I have already come to several cases where my own
logic which is usually max 2 nested levels and sometimes 3 failed due to
the length limit. I don't see this situation getting any better! Just a
quick example: suppose you do your invoices on the QL and keep the
copies on the disc. Well, you might start with one directory 'invoices'
but after a while, you might end up with additional subdirs for each
year and further subdirs for each month in a year - these things have a
way of multiplying!

> The example of the pbox files is also a case in point because what is
> a partition except just another subdirectory that does not use up
> any of the 36 letter file name limit?

No it isn't and that's a problem, because logically, it should be, which
would actually make the 36 char thing even worse. There is really no
reason why we should have only 8 partitions just like there is no real
reason for just 36 characters, but somewhere a line has to be drawn. As
I see it, some of the lines have been drawn rather unrealistically. The
whole idea is not to provide unlimited everything, just to provide a
reasonable upper limit, then multiply by 10 for the future, and ensure
there isn't a catch that undermines your efforts (like the path+filename
thing we have now).
The 8 partitions is (still barely) a reasonable enough limit IF you
don't use it to avoid another limit like the 36 characters. I'll leave
the fact that this shouldn't be connected in any way in the first place
for another discussion.

>> directory under the win1_pbox_ directory.
> This is a problem with the program then.

No doubt done to make the setup a bit simpler - besides, isn't it
logical that parts of pbox reside in subdirectories of its own
directory?

>> some programs are designed to work with the default directory, they don't 
>> have the choice.
> 
> Then they are not well written. Not that I am a programmer - just a
> user. 
> I must be going for the Nasta award for how many emails I reply to
> this week - he's been strangely quiet for a couple of days.......

That's because he has this thing called a job :-)
However, I am tempted to answer this with acid on top:
Next thing you will be asking is, will Quill work with the new system,
and you won't take 'it's not well written' for an answer.
One man's argument is another's counter-argument. I'm tempted to put you
and Gerard in the same room to debate these things :-)

Nasta


28. AURORA COLORS

From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] col drivers

alfeng@juno.com wrote:
> 
> Could you fill me in as to how many colors people want?
> How much (dedicated?) memory is going to be allocated?
> ...
> I don't have an Aurora card, so I don't have a clue as to how MODE 8 is
> currently handled.
> My observations suggest (perhaps errantly) that simply generating the
> smaller font sets in MODE 8 [harder to do, I would think], or
> re-assigning the MODE 8 colors to MODEs 0 & 4 might be all that needs to
> be done to create a reasonable palette of colors.  Is that TOO naive?
>
A. Halliwell wrote:
>
> Aurora is said to support 256 colours.
>
It does. I have seen several people mention 'supports resolutions up to
1024x768 and up to 256 colours' but you should be warned that this is
not simultaneous. Due to the hardware limitations of the SGC, the
Aurora only has 240k of dedicated display RAM. That is 1966080 bits. The
maximum number of pixels displayed therefore depends on the number of
bits per pixel. In practice, in mode 0, the actual bitmap is 1024x960 (it
is theoretically possible to make all of it appear on the screen, but
even on a very good monitor it has to be interlaced and doesn't look
very good, so it was limited to the standard 1024x768), in mode 16 (16
colours) it's 1024x480 and in mode 256 (256 colours) it's 512x480 -
which are also the maximum resolutions in the given modes.
Basically, resolution is selected from 8 presets - however, some will
result in less than the preset if the size of the bitmap is less than
the preset. In practice, if you have a good enough monitor this is what
you will get:

Preset = mode 0 |mode 16	|mode 256
----------------+---------------+--------------
512 x 256	|512 x 256	|512 x 256
512 x 384	|512 x 384	|512 x 384
640 x 320	|640 x 320	|512 x 320
640 x 480	|640 x 480	|512 x 480
768 x 384	|768 x 384	|512 x 384
768 x 576	|768 x 480	|512 x 480
1024 x 512	|1024 x 480	|512 x 480
1024 x 768	|1024 x 480	|512 x 480

The monitor you have may also limit the resolution to some extent. Most
modern SVGA monitors will, however, display all of the above.
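The arithmetic behind the table above can be sketched in a few lines of Python. The constant and function names are mine; the 240k figure and the widths and bit depths come from the text:

```python
# Aurora's dedicated display RAM: 240 KB = 1,966,080 bits (as stated above).
DISPLAY_RAM_BITS = 240 * 1024 * 8

# Maximum bitmap width and bits per pixel for each mode, from the text.
MODES = {
    "mode 0":   (1024, 2),  # 4 colours   -> 2 bits/pixel
    "mode 16":  (1024, 4),  # 16 colours  -> 4 bits/pixel
    "mode 256": (512,  8),  # 256 colours -> 8 bits/pixel
}

def max_resolution(mode):
    """Return (width, height) of the largest bitmap that fits in display RAM."""
    width, bpp = MODES[mode]
    height = DISPLAY_RAM_BITS // (width * bpp)
    return width, height

for m in MODES:
    print(m, max_resolution(m))
# prints:
# mode 0 (1024, 960)
# mode 16 (1024, 480)
# mode 256 (512, 480)
```

These maxima match the bottom rows of the preset table: no preset can exceed what the fixed 240k of display RAM can hold at the mode's bit depth.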

> Yes, but those are still using pixels of the 4 fixed colours.
> If you stippled using the Aurora's 256 colours, then some amazing things
> could be done.

Yes. Even in the 16 colour mode, which still gives an acceptable
resolution, stippling can be employed for some rather startling effects.
Unfortunately, other than my own test programs in Sbasic, no other
programs can use that yet.

> On standard QL hardware, resolution and number of colours are fixed. There
> is no way of changing them via software.

Ditto for Aurora, I'm afraid. Mode 4 and 8 have the usual QL colours,
Mode 16 and 256 have a fixed set of 16 and 256 colours, respectively.
There are several reasons for this - the card is cheaper without a
dedicated palette chip, the palette chip only supports analog monitors
(meaning older QL monitors would not work) and with the chip, there
would have been no way to ensure compatibility right from the moment you
switch the power on - essentially, there would have to be a program to
load the initial colour values, and if anything went wrong with it, the
user wouldn't be able to see anything on the screen, because initially,
all the colours are defined as black.

> That's where the Aurora come into it own. SMSQ/E for Aurora already has
> extended resolutions of a maximum of 768x1024 (more than double normal mode
> 4)

6 times, actually :-)

> I imagine the resolution is enhanced for [Mode 8] as well...
> (I *think* the flash bit has been lost, or at least, the use of it.)

Yes. Mode 8 is treated exactly as before regarding resolution - it's
half the horizontal resolution of the selected mode 4. You are also
correct about the flash bit, it is maintained in memory but has no
function - i.e. pixels don't flash.
An 'extended' mode 8 was not used for 16 colours because its bit-to-pixel
mapping isn't really very elegant - and it proved impossible to fit
into the logic chips along with all the other options. In the 16 colour
mode, one byte defines two pixels, one in the bottom, and the other in
the top 4 bits. In the 256 colour mode, one byte defines one pixel.

Nasta


From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] col drivers

A Halliwell wrote:
> 
> [Aurora modes and resolutions]
> 
> I want it even more now!
> Go on, be a devil. Release the control codes so all us hobbyists can play
> with the colours before the drivers come out....
>
I have been thinking about that.
It has a nice plus, which is, people will get familiar with the way it
does things.
It has a horrible minus - usage of direct addresses and later complaints
along the lines of 'my program doesn't work any more'.
The 16 and 256 colour modes have one principal drawback - the registers
used to invoke them are write-only and therefore it's tricky to restore
the settings when the 'dirty' job is exited either by RJOB or just a
simple CTRL-C.
How about we vote on this? But if it does become common knowledge, don't
come back whining that the addresses and whatnot have changed.

> BTW. when Goldfire is released, will it be possible to extend the
> resolutions to the full amount for 256 mode, or is that fixed in hardware as
> well? (After all, Goldfire is going to have the capability of more than
> enough RAM to handle it).

The RAM used for the display is accessed in a special way (and is of
different kind than 'ordinary' RAM) so it has to be located where the
video generating circuits are. As it is, the amount of RAM on Aurora is
fixed, and is actually 256k. 16k at the end was 'sacrificed' in a
somewhat desperate attempt to free at least some IO addresses. This
proved to be a good idea with the later advent of the RomDisq.
With the added addressing capability of the GF, the best we can hope for
is a rather simple mod to extend the vertical resolution maximum in mode
16 and 256 to 512 pixels instead of the current 480. What can and will
be done is, make screen drawing faster. And, you can be certain the
address of the Aurora screen RAM will change when the GF appears.

The GF does, however, have a HUGE IO area with sufficient throughput for
boards with larger resolutions and more colours, but those would have to
be new boards, evolved from Aurora circuits. I do confess to being a
nitpicker, so most of the logic is already implemented in the chips on the
Aurora. It would be relatively easy to design a 32-bit Aurora with 1-2 M
screen RAM on board and 1024x768 hi-colour support or something similar.
I do have great plans but it remains to be seen whether there will be
any interest in them. Unfortunately, as it is, money is short, but more
importantly, time is even shorter.

Nasta


From: ZN <zeljko.nastasic@zg.tel.hr>
Subject: Re: [ql-users] aurora 16 and 256

Jerome Grimbert wrote:
> 
> Ok, maybe only Nasta can answer the following, but I think the answer may be of
> interest to others:
> 
> Nasta wrote:
> > Mode 16 and 256 have a fixed set of 16 and 256 colours,
> 
> Can you provide us with the RGB specification of each colour,
> or the rule over the set ?
> (or whatever you use, if for instance you do not have a RGB specification).
> 
Well, it seems I'll have to tempt someone a bit more :-)
Sorry for the length of this, but here goes:

********
The bit layout in mode 16 and 256 is as follows:

Mode 16:
Bit:		7	6	5	4	3	2	1	0
Pixel info:	Gl	Rl	Bl	Il	Gr	Rr	Br	Ir

G = green, R = red, B = blue, I = intensity, l = left pixel in pair, r =
right pixel in pair

The color component values generated are:
GRBI	G[0,7]	R[0,7]	B[0,7]
0000	0	0	0	Black
0001	1	1	1	Dark gray
0010	0	0	4	Dark blue
0011	0	0	7	Blue
0100	0	4	0	Dark red
0101	0	7	0	Red
0110	0	4	4	Dark magenta
0111	0	7	7	Magenta
1000	4	0	0	Dark green
1001	7	0	0	Green
1010	4	0	4	Dark cyan
1011	7	0	7	Cyan
1100	4	4	0	Dark yellow
1101	7	7	0	Yellow
1110	4	4	4	Gray
1111	7	7	7	White
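The mode 16 layout above can be decoded with a small Python sketch. The lookup table is transcribed from the list above; the constant and function names are mine:

```python
# Mode 16: one byte holds two pixels - GRBI in the top nibble (left pixel)
# and GRBI in the bottom nibble (right pixel), as described above.
# RGB output levels (0..7) per 4-bit GRBI value, transcribed from the table;
# triples are (G, R, B), matching the column order above.
GRBI_TO_RGB = {
    0b0000: (0, 0, 0),  # black
    0b0001: (1, 1, 1),  # dark gray
    0b0010: (0, 0, 4),  # dark blue
    0b0011: (0, 0, 7),  # blue
    0b0100: (0, 4, 0),  # dark red
    0b0101: (0, 7, 0),  # red
    0b0110: (0, 4, 4),  # dark magenta
    0b0111: (0, 7, 7),  # magenta
    0b1000: (4, 0, 0),  # dark green
    0b1001: (7, 0, 0),  # green
    0b1010: (4, 0, 4),  # dark cyan
    0b1011: (7, 0, 7),  # cyan
    0b1100: (4, 4, 0),  # dark yellow
    0b1101: (7, 7, 0),  # yellow
    0b1110: (4, 4, 4),  # gray
    0b1111: (7, 7, 7),  # white
}

def decode_mode16_byte(b):
    """Split a mode-16 screen byte into (left, right) (G, R, B) level triples."""
    left, right = (b >> 4) & 0xF, b & 0xF
    return GRBI_TO_RGB[left], GRBI_TO_RGB[right]
```

For example, `decode_mode16_byte(0x3F)` gives `((0, 0, 7), (7, 7, 7))`: a blue left pixel and a white right pixel.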

Mode 256:

Bit:		7	6	5	4	3	2	1	0
Pixel info:	G2	R2	B2	G1	R1	B1	G0	RB0

G2,1,0 = green, G2=MSB, G0=LSB
R2,1 = red, R2=MSB
B2,1 = blue, B2=MSB
RB0 = Red/Blue compound bit 0.

G2..0 translate directly into the 3-bit value for the green component.
R2..1 and B2..1 translate directly into the top 2 bits of the 3-bit red
and blue components.
RB0 generates, in conjunction with R2..1, the LSB of red, R0, and, in
conjunction with B2..1, the LSB of blue, B0, as follows:

R0 = R[2] * RB[0]
   + R[1] * RB[0]
   + /R[2] * /R[1] * /B[2] * /B[1] * RB[0]

B0 = B[2] * RB[0]
   + B[1] * RB[0]

Therefore:

R2 R1 B2 B1 RB0	| R[0,7]B[0,7]
0  0  0  0  0	| 0	0
0  0  0  0  1	| 1 	0
0  0  0  1  0	| 0	2
0  0  0  1  1	| 0	3
0  0  1  0  0	| 0	4
0  0  1  0  1	| 0	5
0  0  1  1  0	| 0	6
0  0  1  1  1	| 0	7
0  1  0  0  0	| 2	0
0  1  0  0  1	| 3	0
0  1  0  1  0	| 2	2
0  1  0  1  1	| 3	3
0  1  1  0  0	| 2	4
0  1  1  0  1	| 3	5
0  1  1  1  0	| 2	6
0  1  1  1  1	| 3	7
1  0  0  0  0	| 4	0
1  0  0  0  1	| 5	0
1  0  0  1  0	| 4	2
1  0  0  1  1	| 5	3
1  0  1  0  0	| 4	4
1  0  1  0  1	| 5	5
1  0  1  1  0	| 4	6
1  0  1  1  1	| 5	7
1  1  0  0  0	| 6	0
1  1  0  0  1	| 7	0
1  1  0  1  0	| 6	2
1  1  0  1  1	| 7	3
1  1  1  0  0	| 6 	4
1  1  1  0  1	| 7	5
1  1  1  1  0	| 6	6
1  1  1  1  1	| 7	7
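The R0/B0 equations can be checked against the table with a short Python sketch. The equations and table rows are from the text above; function names are mine ('*' is AND, '+' is OR, '/' is NOT in the notation above):

```python
def mode256_rb_lsbs(r2, r1, b2, b1, rb0):
    """Compute R0 and B0 from the equations above (bits are 0 or 1)."""
    r0 = (r2 & rb0) | (r1 & rb0) | ((1 - r2) & (1 - r1) & (1 - b2) & (1 - b1) & rb0)
    b0 = (b2 & rb0) | (b1 & rb0)
    return r0, b0

def mode256_red_blue(r2, r1, b2, b1, rb0):
    """Full 3-bit red and blue levels (0..7) for a mode-256 pixel."""
    r0, b0 = mode256_rb_lsbs(r2, r1, b2, b1, rb0)
    red = (r2 << 2) | (r1 << 1) | r0
    blue = (b2 << 2) | (b1 << 1) | b0
    return red, blue

# Spot checks against rows of the table:
print(mode256_red_blue(0, 0, 0, 0, 1))  # prints (1, 0)
print(mode256_red_blue(0, 1, 1, 0, 1))  # prints (3, 5)
print(mode256_red_blue(1, 1, 1, 1, 1))  # prints (7, 7)
```

Note the special case the third product term in R0 implements: when all four high bits are zero, RB0 alone lifts red to 1, which (together with green) is what produces the dark gray entries rather than a pure blue tint.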

This color model has been chosen over one with three bits of green and
red and two bits of blue because the discrete colors reproduced cover a
more uniform area out of the standard color triangle.

The G[0,7] R[0,7] and B[0,7] notation signifies the colour components
coming out of the Aurora can assume one of 8 values, 0 being the lowest
(black level) and 7 being the highest (peak level). Internally, Aurora
converts everything into 3 bits per primary colour. The old mode 0 and 8
can only produce either 0 or 7 at the RGB outputs.
********

Nasta

PS, yes, I will be expecting the 'Oh, just release the specs and let's
get it over with' cries.

