OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Your OS design
PostPosted: Mon Dec 18, 2006 8:40 am 
Offline
Member
Member
User avatar

Joined: Wed Oct 18, 2006 3:45 am
Posts: 9301
Location: On the balcony, where I can actually keep 1½m distance
MessiahAndrw wrote:
A 3D console? You could try two methods:
a) Instead of splitting the screen into X and Y, split it into X, Y, and Z.
or
b) When the screen scrolls, the top line could move back into the monitor, and then you can watch the letters go further back, then loop down, and wrap up like a scroll of paper.


I didn't intend to create a 3D console; what I meant was that I'm going to do a console interface first.
Nevertheless, the basics of the interface are taking shape - I use a viewport-based mechanism to subdivide the screen into parts, each of which has both a text and a graphical control. (Right now the graphical control is an emulation giving 80x50x3 graphics on a text-mode display using good ol' box drawing characters 8) - try Nibbles if you want to know what I mean).
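(For anyone wondering how pseudo-graphics like that can work: one common trick - not necessarily the one used here - is to print the CP437 "upper half block" character in every cell, so the attribute's foreground colour paints the top "pixel" and the background colour paints the bottom one, turning an 80x25 text screen into an 80x50 block-graphics canvas. A minimal sketch of mine, assuming VGA text memory at 0xB8000:)
Code:
#include <stdint.h>

#define COLS 80
static volatile uint16_t *const vram = (volatile uint16_t *)0xB8000;

/* Plot a "pixel" on an 80x50 grid: each text cell holds character 0xDF
 * (upper half block), so the foreground colour is the top pixel and the
 * background colour is the bottom pixel of that cell.                    */
static void putpixel(int x, int y, uint8_t colour)
{
    volatile uint16_t *cell = &vram[(y / 2) * COLS + x];
    uint8_t attr = (uint8_t)(*cell >> 8);

    if (y % 2 == 0)
        attr = (uint8_t)((attr & 0xF0) | (colour & 0x0F));         /* top pixel -> foreground */
    else
        attr = (uint8_t)((attr & 0x0F) | ((colour & 0x07) << 4));  /* bottom -> background (3 bits, bit 7 is blink) */

    *cell = (uint16_t)((attr << 8) | 0xDF);
}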

Besides, the original post was 6 months old, so you're late with suggestions anyway :P

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


 Post subject:
PostPosted: Mon Dec 18, 2006 9:10 am 
Offline
Member
Member
User avatar

Joined: Wed Oct 18, 2006 4:39 am
Posts: 52
A volumetric user interface is a really useless concept unless you have blueprints for Star Trek holodeck-quality holo-animation equipment on your desk.

I have an idea: take the characteristics of the terminal and make a graphical user interface which has those characteristics!

I'm not sure exactly which characteristics of the terminal I actually love.

I'll start the list of characteristics of the terminal which people love:
    * deterministic, I know what happens when I do something.
    * fast to use, the only thing holding me back is how fast I can type and how many commands I remember.
    * modeless, at least within itself; my console is mostly modeless whenever I can type something into it. At least without a vi plugin. ;) There are exceptions, like interpreters, though mostly it works like this.
    * I always know what I'm doing, and what I don't know, when I'm using the terminal.
    * The terminal keeps a history of the things I've done and allows me to interactively search that history.
    * Some terminals allow me to undo actions as well, if the tools are written so that it is possible.
    * The terminal gives me instant tools for finding out how to use some other tool.
    * The extra meanings given by colors are useful.


Go ahead and add stuff to this list yourself! I can then combine the entries, and we will have advanced further in user interface development, because we will know better what we want from a user interface. :)

_________________
Windows Vista rapes you, cuts you and pisses inside. Thought these are just nifty side-effects.


 Post subject:
PostPosted: Tue Dec 19, 2006 10:02 am 
Offline
Member
Member
User avatar

Joined: Tue Oct 17, 2006 11:33 pm
Posts: 3882
Location: Eindhoven
Cheery wrote:
I have an idea: take the characteristics of the terminal and make a graphical user interface which has those characteristics!

I'm not sure exactly which characteristics of the terminal I actually love.

I'll start the list of characteristics of the terminal which people love:
    * deterministic, I know what happens when I do something.
    * fast to use, the only thing holding me back is how fast I can type and how many commands I remember.
    * modeless, at least within itself; my console is mostly modeless whenever I can type something into it. At least without a vi plugin. ;) There are exceptions, like interpreters, though mostly it works like this.
    * I always know what I'm doing, and what I don't know, when I'm using the terminal.
    * The terminal keeps a history of the things I've done and allows me to interactively search that history.
    * Some terminals allow me to undo actions as well, if the tools are written so that it is possible.
    * The terminal gives me instant tools for finding out how to use some other tool.
    * The extra meanings given by colors are useful.

Go ahead and add stuff to this list yourself! I can then combine the entries, and we will have advanced further in user interface development, because we will know better what we want from a user interface. :)


Speed and "ease of use" commonly follow from deterministic use. "User-friendliness" follows from being able to derive all knowledge from the information that is onscreen. "Ease of use" also follows from predictable locations - you know where your console line is going to appear, so that's a trivial one for terminals, but in GUIs it is very important to define fixed locations for an "approve"-style button, a "don't do anything" button, and so on. If you know that all dialog boxes have their OK button in the lower-right corner and their Cancel button in the lower-left corner, you only need to find the dialog to click the right button - you don't have to find the button or read the text on it.

I require a user interface to be deterministic, both in what it's going to do and in what it shows me. It should always show me the current status or nothing, not a lagging status from 1/10th of a second ago. It should always respond the same way to button presses and the like: when I press Shift-Del Enter it should always delete the file (this one is for Windows XP), not give a dialog and open the file simultaneously when I'm too quick. Just the thought of that makes me slow down tremendously in Windows when I'm working with stuff that I do not want opened - viruses, for example.


 Post subject:
PostPosted: Thu Jan 04, 2007 4:02 pm 
Offline
Member
Member

Joined: Fri Dec 22, 2006 5:32 pm
Posts: 60
Location: Somewhere Down...
Right now, I'm not sure what I want, but I'm thinking about a math module in my future operating system. For example, calculating very big numbers (a 200-digit number is a BIIIIIG number), using fast code (which tries to work primarily with registers and not with memory addresses)... and things like that.
But I have no ideas yet about how to implement this stuff!
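(Just to make the idea concrete - a minimal sketch of one common way to represent and add big numbers, storing them as arrays of 32-bit "limbs". This is only an illustration of mine, not INF1n1t's design; all names are made up.)
Code:
#include <stdint.h>
#include <stddef.h>

#define LIMBS 32   /* 32 limbs x 32 bits covers roughly 308 decimal digits */

typedef struct { uint32_t limb[LIMBS]; } bignum;   /* least significant limb first */

/* c = a + b, schoolbook addition with carry propagation; returns the final carry */
static uint32_t big_add(bignum *c, const bignum *a, const bignum *b)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < LIMBS; i++) {
        uint64_t sum = (uint64_t)a->limb[i] + b->limb[i] + carry;
        c->limb[i] = (uint32_t)sum;   /* keep the low 32 bits */
        carry = sum >> 32;            /* the high bit becomes the next carry */
    }
    return (uint32_t)carry;
}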

_________________
I think, I have problems with Bochs. The biggest one: Bochs hates me!


 Post subject:
PostPosted: Tue Jan 09, 2007 3:21 am 
Offline
Member
Member

Joined: Mon Jan 08, 2007 3:19 am
Posts: 30
Location: UK
My design...

My kernel (whose development is currently on hold) is based around the concept of being able to load and run trusted code that has been compiled by a bytecode->native compiler that cannot produce code that is not both type- and memory-safe (which I'm currently working on). This means that while the code for a process runs on the bare processor in PL0, with the memory of all processes and the kernel mapped in, it still cannot access memory that it does not own.

This concept has been used before, I know, in projects like JX and Microsoft's Singularity (and somebody upthread mentioned they were working on something similar too). What sets my project apart from these is:

* My virtual machine architecture is LLVM, which works as a GCC backend. It can therefore support C, C++, Ada, Java and Fortran as input languages. There are also compilers for a few other languages that LLVM users are working on.
* I don't enforce the use of garbage collection, but instead rely on never reallocating a freed page to an object of a different type or owner process than the original contents of the page, unless static code analysis proves that every pointer into it must be dead (e.g. it was allocated in a function that has finished executing, and never left the stack frame of that function). Optionally, GC may be used to enable page reallocation. This means that, unlike the previous projects, this one may actually be useful for real-time use. (A rough sketch of this page-reuse rule follows below.)
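(This is my illustration of that rule, not Jules' code - a minimal sketch assuming each physical page is tagged with its former owner and object type, and is only handed out again for an allocation with the same tag; every name here is invented.)
Code:
#include <stdint.h>

#define MAX_PAGES 4096

struct page_tag {
    uint32_t owner;   /* process that last owned the page, 0 = never used */
    uint32_t type;    /* type id of the objects that lived in it          */
    int      free;    /* nonzero if the page is currently unallocated     */
};

static struct page_tag pages[MAX_PAGES];

static void init_pages(void)
{
    for (int i = 0; i < MAX_PAGES; i++)
        pages[i].free = 1;   /* everything starts out free and untagged (owner 0) */
}

/* Hand out page i only if it was never used before, or if it previously held
 * objects of exactly the same owner and type - so a stale pointer into it can
 * never suddenly refer to foreign data of a different type. */
static int alloc_page(uint32_t owner, uint32_t type)
{
    for (int i = 0; i < MAX_PAGES; i++) {
        if (!pages[i].free)
            continue;
        if (pages[i].owner == 0 ||
            (pages[i].owner == owner && pages[i].type == type)) {
            pages[i].owner = owner;
            pages[i].type  = type;
            pages[i].free  = 0;
            return i;
        }
    }
    return -1;   /* no reusable page: this is why address space gets eaten */
}

static void free_page(int i)
{
    pages[i].free = 1;   /* keep owner/type - the page stays "reserved" for them */
}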

Because of this, my kernel architecture is pretty straightforward:

* Single memory space. Almost zero context switching costs. This means that the traditional performance penalties for using a microkernel do not exist, so a microkernel is the obvious architectural choice.
* Not reusing pages is heavy on address space usage. This means that a 32-bit environment may be limiting for long-running applications that do a lot of memory allocation, so 64-bit is necessary.
* All applications must be compiled (on first use) from bytecode. This means the system is in charge of address choices in the binaries: everything will be prelinked on disk, and just needs to be memory mapped. This should make application startup pretty quick.

There are a few issues still to resolve:

* The user must be prevented from directly installing binary programs (or, at least, it must be made difficult for him to do so), as these could break the kernel. How this should work, I'm not sure.
* How LLVM should be modified to prevent unsafe type conversions is unclear right now.

This is a difficult project, but when it's done, I think it's going to be something special.


 Post subject:
PostPosted: Thu Jan 18, 2007 4:07 am 
Offline
Member
Member
User avatar

Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7614
Location: Germany
INF1n1t wrote:
Right now, I'm not sure what I want, but I'm thinking about a math module in my future operating system. For example, calculating very big numbers (a 200-digit number is a BIIIIIG number), using fast code (which tries to work primarily with registers and not with memory addresses)... and things like that.


Most if not all (mainstream) programming languages have math libs and bignum support of their own, so you would effectively be doing duplicate work.

_________________
Every good solution is obvious once you've found it.


 Post subject:
PostPosted: Thu Jan 18, 2007 12:41 pm 
Offline
Member
Member

Joined: Sun Oct 24, 2004 11:00 pm
Posts: 46
Jules wrote:
There are a few issues still to resolve:

* The user must be prevented from directly installing binary programs (or, at least, it must be made difficult for him to do so), as these could break the kernel. How this should work, I'm not sure.
* How LLVM should be modified to prevent unsafe type conversions is unclear right now.

This is a difficult project, but when it's done, I think it's going to be something special.


I was thinking about something similar =). However, the kernel I was planning to design would be something like a microkernel, except there is no actual kernel code for message passing, context switching, etc. (as you said above, it's not necessary). So basically the kernel would be lots of modules that share a common interface for communication. Since the kernel is just a bunch of (trusted) modules loaded into memory, I named its type a noxkernel (no-execute kernel), but that's just brainstorm sugar ;)

Instead of using LLVM, I am designing my own compiler infrastructure. It's based on an intermediate language (which I haven't yet decided on, but I'm leaning towards a static single assignment (SSA) based language, because it allows some easy optimizations and register mapping). The compiler is my current pet project. It's a meta-language which can modify its syntax to match any other language - that's its high-level view. I'm currently playing with different implementations, and the simplest I've found is a self-feeding Thue-based string-replacement language.

The modules would be organized in a way similar to how Unix handles everything. They would be listed like a filesystem. The module interface would simply be a list of pointers (where the first three are to functions: the first must be a message-posting function, the second must stream data in, the third must stream data out; and the last pointer is to its children, where the first child must be a list of custom functions).
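(Just to make that concrete - a minimal sketch in C of a module descriptor that is nothing but a list of pointers, three functions followed by a pointer to its children. This is my illustration, not jvff's actual interface; every name and field is invented.)
Code:
#include <stddef.h>

struct module;

/* The entire module interface is a list of pointers: three functions,
 * then a pointer to the NULL-terminated child list.                    */
struct module {
    int  (*post_message)(struct module *self, const void *msg, size_t len);
    long (*stream_in)   (struct module *self, void *buf, size_t len);
    long (*stream_out)  (struct module *self, const void *buf, size_t len);
    struct module **children;   /* children[0] is the list of custom functions */
};

/* A caller only ever talks to a module through those pointers,
 * for example to post it a message: */
static int module_send(struct module *m, const void *msg, size_t len)
{
    return m->post_message(m, msg, len);
}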

I think I discussed some of this on the V2OS forums. They are using a model similar to COM for their modules.

As for the problems quoted above: the binary data must be protected from user access. You could make a capability-based system, where the only things that can access the binary data are the superuser and the compiler framework. Perhaps it would be safer to put it on a different partition, or maybe the user could opt not to cache any binaries (which would make things slow, but safe).

Hope this helps,

JVFF


 Post subject:
PostPosted: Thu Jan 18, 2007 7:29 pm 
Offline
Member
Member

Joined: Sun Jan 14, 2007 9:15 pm
Posts: 2566
Location: Sydney, Australia (I come from a land down under!)
Solar wrote:
Most if not all (mainstream) programming languages have math libs and bignum support of their own, so you would effectively do duplicate work.


Yes, but you run into trouble if they don't work properly or if they need some other service that your OS doesn't have. But that's what testing is for.

_________________
Pedigree | GitHub | Twitter | LinkedIn


 Post subject:
PostPosted: Sat Jan 27, 2007 5:53 pm 
Offline
User avatar

Joined: Sat Jan 27, 2007 4:21 pm
Posts: 2
Location: Stockholm, Sweden
Just registered with OSDev.org, saw this thread/topic and thought I'd introduce myself and my OS project.

First off, I'm not aiming for my project to take off like Linus Torvalds' project did, though I'll have to admit that it would be fun if it did :)

For me, it's more about learning OS design the same way I learnt C, HTML and x86 assembler (and am now learning PHP): by myself, through my own experience (it takes a little longer, but I find it easier to really understand things that way).

The platform I'm writing for is Intel's x86 series (IA-32), and having read a little about the types of kernels, I'd say it will be a monolithic one, with a design very similar to the hardware design of a PC, i.e. a kernel "motherboard" specifically designed for the machine you're running it on (which in my case means that the "motherboard" will be designed around the motherboard features of my development machine, a PS/1 with a 25 MHz 486 SX - old, I know, but I know its hardware best).

Then you add the modules you need and want - and I don't just mean what you'd expect, like the drivers for cards you plug in, or the drivers for network protocols (like TCP/IP) and filesystems, but things like the scheduler, security and memory management as well.

That is, if the scheduler doesn't perform optimally in the end application, you can let the "motherboard" load another scheduler which does the job better (with some requirements on the design to give applications not specifically written for it a reasonably predictable speed of execution, of course).

As to the design of the "motherboard" OS, it will be 32-bit, paged and segmented (that is, it uses 48-bit pointers in the default mode). I've studied this a bit, and found that the added degree of complexity has its rewards when you run shared DLLs (processes don't run out of virtual memory space quite as easily when you have a lot of different DLLs for different processes).
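(For anyone unfamiliar with that mode: a 48-bit pointer is just a 16-bit segment selector plus a 32-bit offset. The little struct below is my illustration of how such a pointer is laid out in memory - offset first, then selector, which is the operand format the x86 lfs/lgs/lss instructions expect - and is not part of Teo's design.)
Code:
#include <stdint.h>

/* A 48-bit "far" pointer: 32-bit offset within a segment, then the
 * 16-bit segment selector, packed into 6 bytes.                     */
#pragma pack(push, 1)
typedef struct {
    uint32_t offset;      /* offset within the segment */
    uint16_t selector;    /* segment selector          */
} far_ptr48;
#pragma pack(pop)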

It will also use threads (more on the specs later on as I decide on exactly how to pull it off). Generally, you could call it a cross between OS/2 and UN*X, hopefully combining the best parts of the two. I do most of my programming in assembly (using NASM) and C (using OpenWatcom), and my documentation in HTML/PHP.

So any thoughts? Ideas? Has this approach been tried before?

/Teo, newbie here on OSDev.org.


 Post subject:
PostPosted: Sat Jan 27, 2007 7:13 pm 
Offline
Member
Member
User avatar

Joined: Tue Oct 17, 2006 9:29 pm
Posts: 2426
Location: Canada
I have an IBM PS/1 with a 25 MHz 486 SX lying around too.. Really fun machine.. Mine's lacking a math co-processor though.. It was available as an add-on, I think.

(I was bored..)
Code:
#include <stdio.h>

int main(void) {
    int cool = 1;
    if (cool == 1) { /* Modifications to this classic IBM PC */
        printf("1. I managed to add two 16MB 72-pin IBM SIMM modules.. Detects it alright\n");
    } else { /* Limitations */
        printf("2. The BIOS can only detect hard drives smaller than 520MB... roughly.\n");
        printf("3. You can't boot off CD-ROMs, even with Smart Boot Manager..\n");
    }
    return 0;
}


Sounds like a neat goal though.. :wink:

_________________
Image
Twitter: @canadianbryan. Award by smcerm, I stole it. Original was larger.


Last edited by Brynet-Inc on Thu Feb 08, 2007 9:20 pm, edited 1 time in total.

 Post subject:
PostPosted: Thu Feb 08, 2007 8:37 pm 
Offline

Joined: Fri Jan 19, 2007 7:28 pm
Posts: 9
Location: On, Canada
I'm in the planning stages right now. My OS is a hobby and may very well never be finished. My motivation is that after 11 years of hobby programming (90% of that being with C++), I find myself using way too much managed code (C# and VB mostly) at work and school. So I decided to start OS development. I played with freestanding code about five years ago, so I at least have a little bit of experience with that.

My first goal is to build an ultra-simple OS that has a batch-style process manager and a single-partition memory model. After that I will start adding better modules, like a fixed-partition memory model and a round-robin process manager, until I get to the point of implementing a segmented/paged memory model (maybe something else, not sure) with a load-balancing multiprocessor manager. Along the way I will also implement filesystem modules and simple drivers.
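(For illustration only - a round-robin process manager can be as small as a circular ready queue. A minimal sketch in C, not this poster's design; all names are invented.)
Code:
#define MAX_PROCS 64

static int ready[MAX_PROCS];   /* circular queue of ready process ids */
static int head = 0, tail = 0, count = 0;

static void enqueue(int pid)   /* called when a process becomes runnable */
{
    if (count < MAX_PROCS) {
        ready[tail] = pid;
        tail = (tail + 1) % MAX_PROCS;
        count++;
    }
}

/* Called from the timer interrupt: the current process goes to the back
 * of the queue and the process at the front gets the next time slice.   */
static int schedule(int current_pid)
{
    enqueue(current_pid);
    int next = ready[head];
    head = (head + 1) % MAX_PROCS;
    count--;
    return next;
}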

The idea is to stick to standards as much as possible, like UDI and POSIX (I'm still not sure about UDI though). Since I am an OO guy, the whole thing will be written in C++ and ASM, driven by interfaces (pure virtual classes).

This is just a high-level intro; I actually have quite a bit on paper and should be ready to code something in a couple of months, so watch for that.


 Post subject: I think my filesystem is interesting ...
PostPosted: Sun Feb 11, 2007 2:07 pm 
Offline
Member
Member
User avatar

Joined: Wed Feb 07, 2007 1:45 pm
Posts: 1401
Location: Eugene, OR, US
My intention with my OS is to create a replacement for Windoze, for my own use -- and then for as many other people as might like to use it, too.
So it will primarily be designed as a user desktop system. The preliminary name might be Bang!OS.

I'm trying to incorporate as many good ideas from existing OSes as I can, while throwing out all the stupid things. My design priority is efficiency: the OS must not waste disk space on overhead or slack, must not waste memory on system overhead, and must be really fast. Security is the second priority.

Intel has won the chip wars for 32-bit CPUs, so I am going to design specifically for the P6 family for now. When the 64-bit chips become more standardized, I will port to them as well, but I am not designing for portability. In fact, the kernel right now is 160K of NASM assembly that assembles to 14K of machine code.

The Native Filesystem:
Inodes add one extra layer of somewhat unnecessary system overhead to a filesystem. They do create some extra functionality, but I question the actual usefulness of that functionality, so I chose not to use inodes in the native filesystem. Other filesystems are supported by the VFS manager.

I also made a change to standard cluster methods that I think is a huge improvement -- variable-sized clusters. Basically, my filesystem is a FAT system, but one nibble of the 32-bit cluster entry determines the size of the cluster. If the nibble is 0, the cluster is one disk sector (512 bytes); if the nibble is 15, the cluster is 16MB. The nibble encodes a power-of-2 "shift" of the size. With one additional "artificial sector size shift" encoded in the filesystem's "format byte", this means that I can efficiently allocate and access about 500 petabytes on a single partition, with the standard 4 partitions per physical disk.

A file is allocated to the different-sized clusters in an efficient way. A file that is exactly 1MB - 1 bytes long will be allocated a 512K cluster, followed by a 256K, 128K, 64K, 32K, 16K, 8K, 4K, 2K, 1K, then 512-byte cluster. The 511-byte "tail" of the file is stored in the directory entry for the file. This results in 100% disk utilization ... and file fragmentation is theoretically and practically IMPOSSIBLE.
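(To make the scheme concrete, here is my own sketch of how that greedy allocation could be computed - not bewing's code. It assumes the cluster size is 512 bytes shifted left by the nibble, as described above.)
Code:
#include <stdio.h>
#include <stdint.h>

/* Greedily peel off the largest cluster (512 << nibble bytes, nibble 0..15)
 * that still fits; whatever remains that is smaller than one sector becomes
 * the "tail" stored in the directory entry.                                  */
static void allocate(uint64_t size)
{
    for (int nibble = 15; nibble >= 0; nibble--) {
        uint64_t cluster = 512ULL << nibble;
        while (size >= cluster) {
            printf("allocate %llu-byte cluster (nibble %d)\n",
                   (unsigned long long)cluster, nibble);
            size -= cluster;
        }
    }
    printf("%llu-byte tail goes in the directory entry\n",
           (unsigned long long)size);
}

int main(void)
{
    allocate(1024 * 1024 - 1);   /* the "1MB - 1" example from the post */
    return 0;
}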

Kernel design:
Microsquish has reportedly found that most system crashes are the result of badly written third-party device drivers trampling on the kernel's memory area. So I want to implement a 3-ring privilege scheme, with a small trusted kernel in ring 0, and most interrupt/exception handlers, device drivers, and system utilities in ring 1. User processes run in ring 3, of course.

Communication between pieces of code is handled mostly using a shared memory scheme, for speed purposes, but also with queued messages. An application can have one thread handle all incoming messages, or can set up individual threads to catch messages from specific sources.

The Virtual Memory Manager doles out 64K pages of memory that applications can subdivide with malloc if they choose. One clever thing here -- before swapping pages out to disk, the VMM checks whether some pages can instead be compressed in memory (if they are filled with zeroes, for example). Memory compression/decompression is always faster than a pair of disk accesses.
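(As an illustration of the cheapest case of that check - my sketch, not bewing's code, assuming 64K pages: detect an all-zero page so the VMM can record "zero page" in its tables and skip the disk write entirely.)
Code:
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE (64 * 1024)

/* Returns 1 if the page contains only zero bytes. */
static int page_is_zero(const void *page)
{
    const uint64_t *p = (const uint64_t *)page;
    for (size_t i = 0; i < PAGE_SIZE / sizeof(uint64_t); i++)
        if (p[i] != 0)
            return 0;
    return 1;
}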

Access privileges are similar to UNIX, except without "group" permissions, which I always found an almost worthless concept. The levels are "owner", "local users" (anyone logged DIRECTLY into the machine), and "world" -- meaning anyone at the other end of any network.

All network access is performed through a sandbox. The user's application that is accessing the network has its permissions downgraded to sandbox level. No incoming data can be initially copied to any location other than the sandbox. It requires positive action on the part of the user to copy anything out of the sandbox to their directory space. The sandbox area is extremely temporary, and anything there is wiped every few days. Java apps, or any other processes copied into and running from the sandbox, have such severely limited privileges that they don't have any opportunity to be harmful or irritating.

As a Kernighan & Ritchie old-timer C programmer, I totally reject object oriented methods. OOP is incredibly inefficient. The GUI manager will be the thinnest OOP veneer over non-OOP coding.

-- I admit, I AM liking the sound of the recent posts about queueing up system calls, and having the kernel batch them all at once, the next time the "kernel system call handler" gets scheduled for a timeslice.


 Post subject: Re: I think my filesystem is interesting ...
PostPosted: Mon Feb 19, 2007 8:26 am 
Offline
Member
Member
User avatar

Joined: Thu Nov 16, 2006 12:01 pm
Posts: 7614
Location: Germany
bewing wrote:
As a Kernighan & Ritchie old-timer C programmer, I totally reject object oriented methods. OOP is incredibly inefficient.


First sentence is OK, second one is... must... not... fall... for... flamebait...

Quote:
-- I admit, I AM liking the sound of the recent posts about queueing up system calls, and having the kernel batch them all at once, the next time the "kernel system call handler" gets scheduled for a timeslice.


I couldn't find the post you're referring to, but queueing messages at crucial points is one of the things that made the Win 3.11 GUI such a pain...

_________________
Every good solution is obvious once you've found it.


 Post subject: Re: I think my filesystem is interesting ...
PostPosted: Mon Feb 19, 2007 9:23 am 
Offline
Member
Member
User avatar

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi

bewing wrote:
Intel has won the chip wars for 32-bit CPUs, so I am going to design specifically for the P6 family for now. When the 64-bit chips become more standardized, I will port to them as well, but I am not designing for portability. In fact, the kernel right now is 160K of NASM assembly that assembles to 14K of machine code.


Because 64-bit is "new", there's much less backward compatibility mess to worry about. 64-bit chips are more standardized now than 32-bit CPUs have been for years, and 64-bit chips will probably become less standardized over time (not more standardized) as different manufacturers add their own new features, like VMX (Intel's virtualization) or SVM (AMD's virtualization).

BTW, how long is it going to take to write your OS, and what sort of computers will be around when it's finished? I'm thinking that in 10 years' time "many-CPU" NUMA machines will be common, and 32-bit CPUs will be obsolete (except for embedded systems).

bewing wrote:
A file is allocated to the different-sized clusters in an efficient way. A file that is exactly 1MB - 1 bytes long will be allocated a 512K cluster, followed by a 256K, 128K, 64K, 32K, 16K, 8K, 4K, 2K, 1K, then 512-byte cluster. The 511-byte "tail" of the file is stored in the directory entry for the file. This results in 100% disk utilization ... and file fragmentation is theoretically and practically IMPOSSIBLE.


What happens when you've got a 30 MB file and an application appends 2 KB to the end of it? Will you store the extra 2 KB somewhere else on disk (fragment the file), or relocate the entire file somewhere else so that you can add that extra 2 KB to the end without overwriting other data and without fragmenting the file? How much time would it cost to relocate (read and write) 30 MB of data, and how much time would it cost if the file was fragmented?
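(A rough back-of-envelope illustration of that trade-off; the 50 MB/s throughput and 10 ms seek time are my assumptions about a typical disk of the era, not Brendan's figures.)
Code:
#include <stdio.h>

int main(void)
{
    const double throughput = 50e6;    /* bytes/second, sequential (assumed)  */
    const double seek       = 0.010;   /* seconds per extra seek (assumed)    */
    const double file_size  = 30e6;    /* the 30 MB file from the example     */

    double relocate = 2.0 * file_size / throughput;  /* read it all, write it all  */
    double fragment = 2.0 * seek;                     /* roughly one extra seek on
                                                         write and one on reads    */
    printf("relocate the whole file: ~%.2f s\n", relocate);   /* about 1.2 s  */
    printf("accept one fragment:     ~%.2f s\n", fragment);   /* about 0.02 s */
    return 0;
}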

File fragmentation is bad, but is "no file fragmentation" worse (considering that it's a general file system, not something primarily designed for "write once")?

bewing wrote:
Microsquish has reportedly found that most system crashes are the result of badly written third-party device drivers trampling on the kernel's memory area. So I want to implement a 3-ring privilege scheme, with a small trusted kernel in ring 0, and most interrupt/exception handlers, device drivers, and system utilities in ring 1. User processes run in ring 3, of course.


How will you prevent ring 1 code from trampling on the kernel's memory area? With paging protection there's only "supervisor" and "user" (where supervisor includes ring 2, ring 1 and ring 0).

If you do prevent ring 1 code from trampling on the kernel's memory area, does that mean that ring 1 code will trample on other ring 1 code instead?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject:
PostPosted: Sat Feb 24, 2007 2:22 am 
Offline
Member
Member
User avatar

Joined: Mon Jun 05, 2006 11:00 pm
Posts: 2293
Location: USA (and Australia)
I've had some inspiration to switch my kernel from being a monolithic kernel to a microkernel. The source of my inspiration was a line from the Minix 3 website:
Quote:
If the driver dies or fails to respond correctly to pings, the reincarnation server automatically replaces it by a fresh copy.

