Discussion:
Home server decision
Rich Jordan
2011-09-08 17:26:40 UTC
Permalink
Due to work going on in the house (and updates to my server storage
tower that keep getting outprioritized) I've redirected the sites and
email going to the home server and taken it down for the last month.
It's a DS10L with internal disk and a 4-drive tower running 10K RPM SCSI
drives. Console terminal is usually off. Measured usage at full load
was around 220 watts for the Alpha and drives, around 200 idle.
Storage tower work was to replace the 3.5" drives with the little 2.5"
Savvio drives in a hotswap cage, which would reduce power usage by about
8 watts per drive in testing.
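For what it's worth, the payoff of that drive swap is easy to ballpark. A rough sketch in Python; the 8 W/drive figure is from my testing above, while the $0.12/kWh electricity rate is an assumed placeholder, not from our bill:

```python
# Rough estimate of annual savings from swapping four 3.5" drives
# for 2.5" Savvios, at ~8 W saved per drive (measured in testing).
# The $0.12/kWh rate is an assumption, not taken from the bill.

DRIVES = 4
WATTS_SAVED_PER_DRIVE = 8
RATE_PER_KWH = 0.12  # assumed rate

watts_saved = DRIVES * WATTS_SAVED_PER_DRIVE     # 32 W continuous
kwh_per_year = watts_saved * 24 * 365 / 1000     # ~280 kWh
dollars_per_year = kwh_per_year * RATE_PER_KWH

print(f"{watts_saved} W continuous -> {kwh_per_year:.0f} kWh/yr "
      f"-> ${dollars_per_year:.2f}/yr")
```

So the swap is worth a few dollars a month at best; it was never going to explain a 40% change in the bill.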

My wife was ecstatic over our power bill; it was down over 40%. I
don't see how removing that one continuous 220W load could have that
large of an impact (with A/C, full time blower, other computers left
on, appliances, etc), especially since where it is situated it can't
have a large impact on A/C runtime, but it did. It wasn't reduced A/C
usage either since it was a hot month here. Now she wants to know if
I really _really_ need to keep running a server at home, or at least
one that hungry.

Since Communigate, the original reason for the server (followed by CSWS
and PHP/Python play and practice, which I can now also do at work), has
been so slow at updating the VMS build, I have to admit it's a fair
question. I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
in a less convivial work environment) and cut that power draw to 50-70
watts. Or I could just drop it, play/practice on the test alpha at
work (in fact take mine in to work and use it that way, though it
wouldn't have public access or use except for brief tests) and drop
the commercial DSL link and save not only the power but close to $95/
month for the phone line and DSL (we also have cable internet that
doesn't support running servers). Money that could go into other
hobbies that have been sadly neglected like the Mopar in my garage...
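Running the numbers makes the choice starker: dropping the box is mostly about the DSL line, not the watts. A quick sketch; the 220 W and $95/month figures are from above, while the $0.12/kWh rate is an assumption:

```python
# Comparing the two savings streams from retiring the home server:
# the ~220 W continuous draw vs. the ~$95/month phone line + DSL.
# The $0.12/kWh electricity rate is an assumption.

SERVER_WATTS = 220
RATE_PER_KWH = 0.12   # assumed
DSL_PER_MONTH = 95

power_kwh_yr = SERVER_WATTS * 24 * 365 / 1000   # ~1930 kWh
power_cost_yr = power_kwh_yr * RATE_PER_KWH     # ~$230
dsl_cost_yr = DSL_PER_MONTH * 12                # $1140

print(f"power: ~${power_cost_yr:.0f}/yr, DSL: ${dsl_cost_yr}/yr")
```

The DSL line costs roughly five times what the server's electricity does, which is what makes the "cold turkey" option tempting.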

Not a fun choice. I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started. It won't feel right not to have it, but I just don't get to
use it enough, and it is costing money.
JF Mezei
2011-09-08 18:16:06 UTC
Permalink
Post by Rich Jordan
I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
Since Apple abruptly ended production/support of Xserve, the future of
"OS-X Server" is unclear and is more likely to veer towards serving your
iTunes/iPhoto catalogue to TVs in the house (media server).

I am stuck with OS-X Server now but I have learned my lesson: a proprietary
OS may provide easier upgrades/patches, but it is at the whim of the
vendor.

Larry Ellison and Steve Jobs are highly "emotional" men and could pull
the rug from any product because they feel like it (aka: change of
strategic direction).

Linux, on the other hand, is at the whim of the community, and no one
person can decide to end Linux or steer it in a different direction.

So in the long term, Linux is more likely to survive as a server
operating system to run server stuff, while Apple's products are going
towards consumer goods.

Modern servers consume far less power, especially with the CPUs that
"shut down" when not needed. And as such they also generate less heat.

Note that the Mac Mini isn't really a "server", as it lacks proper
server stuff like the ability to remotely power on/off, etc. (IPMI). I have
my doubts about whether it is truly designed for 24/7 operation over the
long term due to the small enclosure and lack of fans (or does it have
a fan now?).
Rich Jordan
2011-09-08 18:30:55 UTC
Permalink
Post by JF Mezei
Post by Rich Jordan
I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
Since Apple abruptly ended production/support of Xserve, the future of
"OS-X Server" is unclear and is more likely to veer towards serving your
iTunes/iPhoto catalogue to TVs in the house (media server).
I am stuck with OS-X Server now but I have learned my lesson: a proprietary
OS may provide easier upgrades/patches, but it is at the whim of the
vendor.
Larry Ellison and Steve Jobs are highly "emotional" men and could pull
the rug from any product because they feel like it (aka: change of
strategic direction).
Linux, on the other hand, is at the whim of the community, and no one
person can decide to end Linux or steer it in a different direction.
So in the long term, Linux is more likely to survive as a server
operating system to run server stuff, while Apple's products are going
towards consumer goods.
Modern servers consume far less power, especially with the CPUs that
"shut down" when not needed. And as such they also generate less heat.
Note that the Mac Mini isn't really a "server", as it lacks proper
server stuff like the ability to remotely power on/off, etc. (IPMI). I have
my doubts about whether it is truly designed for 24/7 operation over the
long term due to the small enclosure and lack of fans (or does it have
a fan now?).
Well, it would be 24x7 but I don't otherwise need 'server grade'
hardware for the load I'm talking about. Definitely don't want the
heat or noise of a server class box either. Linux is the more likely
solution for cost reasons (our desktops are Macs, plus my one PWS600au
which no longer gets daily usage due to browser limitations (too many
sites _require_ that awful flash crap, including my bank, which has
almost driven us to leave it)).

Communigate Pro (demo or community edition). Apache. PHP/Python.
MySQL. FTP server for internal use (netcam repository). Maybe SSH
for other uses. It would be nice if it could handle an LVD SCSI port
so I could continue using my DLT drive for backups (not the Mac Mini
obviously). Sucks to be on Unix but a lot less than it would suck to
be doing windows at home. iLO or other server perks would be cool but
not needed, especially if they only come with big size, big heat, big
noise, and/or big power hunger; no reason to replace the Alpha if that
is the case.

Or go cold turkey at home and start buying parts and paying for engine
machine work for the Mopar finally. Been a long time since I could
drive it and I miss it.
Mazzini Alessandro
2011-09-08 19:13:03 UTC
Permalink
Given that over the past 10 months a lot of these have been retired as out
of lease (so finding one dirt cheap is doable), why not look for an HP
C8000? It's not consuming a lot, it's very, very quiet, can be found in
a rack variant, uses the latest PA-RISC HP made, and has all the perks of a
PA-RISC server (latest models) with just iLO lacking.
4 internal SCSI bays (U320) with 2 channels, 3 IDE channels for 2 optical
units and 1 HD, 1 AGP, 2 PCI 32, 4 (if I'm not mistaken) PCI-X up to 133 MHz,
USB (if you install HP-UX v2 or v3 the USB is not limited to just keyboard
and mouse, but you lose the AGP support), up to 32 GB of RAM, up to 2
dual-core CPUs.
And even if it's a workstation, it's server grade...

(and can obviously work headless without the videocard, to save more
energy...)
Post by JF Mezei
Post by Rich Jordan
I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
Since Apple abruptly ended production/support of Xserve, the future of
"OS-X Server" is unclear and is more likely to veer towards serving your
iTunes/iPhoto catalogue to TVs in the house (media server).
I am stuck with OS-X Server now but I have learned my lesson: a proprietary
OS may provide easier upgrades/patches, but it is at the whim of the
vendor.
Larry Ellison and Steve Jobs are highly "emotional" men and could pull
the rug from any product because they feel like it (aka: change of
strategic direction).
Linux, on the other hand, is at the whim of the community, and no one
person can decide to end Linux or steer it in a different direction.
So in the long term, Linux is more likely to survive as a server
operating system to run server stuff, while Apple's products are going
towards consumer goods.
Modern servers consume far less power, especially with the CPUs that
"shut down" when not needed. And as such they also generate less heat.
Note that the Mac Mini isn't really a "server", as it lacks proper
server stuff like the ability to remotely power on/off, etc. (IPMI). I have
my doubts about whether it is truly designed for 24/7 operation over the
long term due to the small enclosure and lack of fans (or does it have
a fan now?).
Well, it would be 24x7 but I don't otherwise need 'server grade'
hardware for the load I'm talking about. Definitely don't want the
heat or noise of a server class box either. Linux is the more likely
solution for cost reasons (our desktops are Macs, plus my one PWS600au
which no longer gets daily usage due to browser limitations (too many
sites _require_ that awful flash crap, including my bank, which has
almost driven us to leave it)).

Communigate Pro (demo or community edition). Apache. PHP/Python.
MySQL. FTP server for internal use (netcam repository). Maybe SSH
for other uses. It would be nice if it could handle an LVD SCSI port
so I could continue using my DLT drive for backups (not the Mac Mini
obviously). Sucks to be on Unix but a lot less than it would suck to
be doing windows at home. iLO or other server perks would be cool but
not needed, especially if they only come with big size, big heat, big
noise, and/or big power hunger; no reason to replace the Alpha if that
is the case.

Or go cold turkey at home and start buying parts and paying for engine
machine work for the Mopar finally. Been a long time since I could
drive it and I miss it.
MG
2011-09-08 23:43:35 UTC
Permalink
Post by Mazzini Alessandro
Given that around the past 10 months a lot of these have been retired as out
of lease (so finding it dirty cheap is doable), why not looking for an HP
C8000 ? It's not consuming a lot, it's very, very quiet, can be found in
rack variant, uses the latest pa-risc hp made and has all the perks of a
parisc server (latest models) with just iLO lacking.
[...]
And even if it's a workstation, it's server grade...
(and can obviously work headless without the videocard, to save more
energy...)
Good suggestion, though it still wouldn't be ideal: it'd still boil
down to a UNIX derivative for a VMS user.

Even better, in my opinion, would be to find a zx2000 or zx6000 with
one PSU, two or just one low-voltage processor (e.g. "Deerfield" at
1 GHz), a minimum of RAM (from 1 to 4 GB) and nothing else, except iLO
perhaps (which doesn't consume much). For a disk, one could even use
an SSD, together with a SAS/SATA controller (several LSI PCI-X
cards will work fine for that purpose).

That way you can still run the excellent VMS and the electricity bill
won't be too high.

- MG
glen herrmannsfeldt
2011-09-08 23:56:13 UTC
Permalink
MG <***@spamxs4all.nl> wrote:

(snip)
Post by MG
Even better, in my opinion, would be to find a zx2000 or zx6000 with
one PSU, two or just one low-voltage processor (e.g. "Deerfield" at
1 GHz), a minimum of RAM (from 1 to 4 GB) and nothing else, except iLO
perhaps (which doesn't consume much). For a disk, one could even use
an SSD disk, together with a SAS/SATA controller (several LSI PCI-X
cards will work fine for that purpose).
I would be interested in SAS/SATA for my RX2600. Which controllers
will VMS recognize?

-- glen
MG
2011-09-09 01:23:25 UTC
Permalink
Post by glen herrmannsfeldt
I would be interested in SAS/SATA for my RX2600. Which controllers
will VMS recognize?
Several are supported; the ones that I know of (and tried) were from
LSI. I believe anything with an LSI 1068 chipset ought to work.
I tried the LSI SAS 3080X/-R in one of my rx2600s once. It worked
just fine and it was also properly recognized as a disk controller
device under VMS.

- MG
Rich Jordan
2011-09-09 14:37:04 UTC
Permalink
Post by MG
Post by Mazzini Alessandro
Given that around the past 10 months a lot of these have been retired as out
of lease (so finding it dirty cheap is doable), why not looking for an HP
C8000 ? It's not consuming a lot, it's very, very quiet, can be found in
rack variant, uses the latest pa-risc hp made and has all the perks of a
parisc server (latest models) with just iLO lacking.
[...]
And even if it's a workstation, it's server grade...
(and can obviously work headless without the videocard, to save more
energy...)
Good suggestion, though, it still wouldn't be ideal: It'd still boil
down to a UNIX/-derivative for a VMS user.
Even better, in my opinion, would be to find a zx2000 or zx6000 with
one PSU, two or just one low-voltage processor (e.g. "Deerfield" at
1 GHz), a minimum of RAM (from 1 to 4 GB) and nothing else, except iLO
perhaps (which doesn't consume much).  For a disk, one could even use
an SSD disk, together with a SAS/SATA controller (several LSI PCI-X
cards will work fine for that purpose).
Like that you can still run the excellent VMS and the electricity bill
won't be too high.
  - MG
I seriously doubt that would use less power than the DS10L and disks.
But perhaps worth a check. However this is a hobbyist arrangement;
cost is most definitely a factor. As is noise...
Mazzini Alessandro
2011-09-09 18:31:14 UTC
Permalink
Personally:

- The noise of a C8000 is way lower than any of the other normal computers I
have here (obviously a subjective impression).
- The price: a bargain can be had easily for 7x euro, in Europe, with a
single CPU, 2 or 4 GB RAM, 72 GB HD and videocard (and it can be obtained
more cheaply with luck, or if missing HD/RAM/videocard). If you're
interested in a maxed-out one (in number of CPUs and speed...) prices are
pretty different. Anyway, the reasonably priced ones generally have a
single 900 MHz or 1 GHz PA-8800 (the max is a 1.1 GHz PA-8900... basically
the 8900 consumes a bit less and has 64 MB of cache vs 32 MB on the 8800).

The cons: I doubt that there's a SAS/SATA controller supported (unless
installing Linux and hoping that some support was added).
Post by MG
Post by Mazzini Alessandro
Given that around the past 10 months a lot of these have been retired as out
of lease (so finding it dirty cheap is doable), why not looking for an HP
C8000 ? It's not consuming a lot, it's very, very quiet, can be found in
rack variant, uses the latest pa-risc hp made and has all the perks of a
parisc server (latest models) with just iLO lacking.
[...]
And even if it's a workstation, it's server grade...
(and can obviously work headless without the videocard, to save more
energy...)
Good suggestion, though, it still wouldn't be ideal: It'd still boil
down to a UNIX/-derivative for a VMS user.
Even better, in my opinion, would be to find a zx2000 or zx6000 with
one PSU, two or just one low-voltage processor (e.g. "Deerfield" at
1 GHz), a minimum of RAM (from 1 to 4 GB) and nothing else, except iLO
perhaps (which doesn't consume much). For a disk, one could even use
an SSD disk, together with a SAS/SATA controller (several LSI PCI-X
cards will work fine for that purpose).
Like that you can still run the excellent VMS and the electricity bill
won't be too high.
- MG
I seriously doubt that would use less power than the DS10L and disks.
But perhaps worth a check. However this is a hobbyist arrangement;
cost is most definitely a factor. As is noise...
Michael Kraemer
2011-09-09 19:14:13 UTC
Permalink
Post by Mazzini Alessandro
The cons , I doubt that there's a sas-sata controller supported (unless
installing linux & hoping that some support was added)
The problem with the C8000 is that it runs exactly
one "supported" OS, a certain version of HP-UX 11.11,
which is already five years old by now.
Newer versions like 11.23 might run,
but you'd probably lose gfx, not such a good choice
for a dedicated workstation. Reports about 11.31
weren't too positive either. I never heard of
a fully working Linux. PA-RISC is as dead as Alpha,
so I wouldn't hold my breath for more software
coming down the line.
But if one can live with all that,
a C8000 is a very serious home workstation, indeed.
Mazzini Alessandro
2011-09-09 19:59:29 UTC
Permalink
Actually I know one person who is experimenting to get gfx support under
v2. At the moment he has full 2D support and he's trying to get the 3D
module to work...
At the moment one drawback, unless he finds a way to solve it, is that the
boot is all through the serial console. I think the AGP gets activated once
X is started from the serial console OR when the boot has completed.

v2 installs on a C8000, and I think also v3 with proper tweaks to the
installation media (to skip the check for supported hw)... I'll have to ask
the person who's now tweaking the graphics support under v2; my
memory is not so good unfortunately.
Unless somehow modding the gfx support, v2 (and v3) gives a plain console,
no X setup... but if the idea is to have a server, X is not exactly a big
loss...

Yep, what I read about Linux was pretty messy, but I heard a rumor that
maybe the latest binaries have better support for the hw.
Post by Michael Kraemer
Post by Mazzini Alessandro
The cons , I doubt that there's a sas-sata controller supported (unless
installing linux & hoping that some support was added)
The problem with the C8000 is, that it runs exactly
one "supported" OS, a certain version of HP-UX 11.11,
which is already five years old by now.
Newer versions like 11.23 might run,
but you'd probably lose gfx, not such a good choice
for a dedicated workstation. Reports about 11.31
weren't too positive either. I never heard about
a fully working Linux. PA RISC is as dead as Alpha,
so I wouldn't hold my breath for more software
coming down the line.
But if one can live with all that,
a C8000 is a very serious home workstation, indeed.
Michael Kraemer
2011-09-09 20:38:55 UTC
Permalink
Post by Mazzini Alessandro
Unless somehow modding the gfx support, v2 (and v3) gives a plain console -
no x setup... but if the idea is to have a server, x is not exactly a big
loss....
if the idea is to have a solid server,
it is not such a good idea to run a "tweaked" OS,
possibly beyond its specs. You might lose more
than just gfx. The only "allowed" version of HP-UX
to run on a C8000 is 11.11.
Post by Mazzini Alessandro
Yep, what I read about linux was pretty messy, but heard a rumor that maybe
the latest binaries have a better support for hw.
And I heard the opposite, that PA is going to lose Linux support.
Mazzini Alessandro
2011-09-09 22:32:36 UTC
Permalink
I wonder about the "beyond its specs" part.

The PA-RISC server equivalent of the C8000 (which has v3 support) uses the
same chipset and the same CPU: the rp3440.

http://www.openpa.net/systems/hp-9000_rp3410_rp3440.html
http://www.openpa.net/systems/hp_c8000.html
http://chrysalis.rutgers.edu/hardware/benchmarks.php

Sure, the workstation is blacklisted from installing v2 and v3, but that's
just to make sure to sell more expensive servers.

Same chipset, same CPU, same 32 GB max for the rp3440, more or less the same
expansion slots (aside from the AGP), same SCSI chip onboard, etc.

What's not the same: the C8000 has no hotswap bays, and no iLO.

I really don't see problems related to specs about using v2 or v3. The main
issue I see is fixing the whitelist every single time.
Post by Michael Kraemer
Post by Mazzini Alessandro
Unless somehow modding the gfx support, v2 (and v3) gives a plain
console - no x setup... but if the idea is to have a server, x is not
exactly a big loss....
if the idea is to have a solid server,
it is not such a good idea to run a "tweaked" OS,
possibly beyond its specs. You might lose more
than just gfx. The only "allowed" version of HP-UX
to run on a C8000 is 11.11.
Post by Mazzini Alessandro
Yep, what I read about linux was pretty messy, but heard a rumor that
maybe the latest binaries have a better support for hw.
And I heard the opposite, that PA is going to lose Linux support.
JF Mezei
2011-09-09 00:45:09 UTC
Permalink
Post by Rich Jordan
Sucks to be on Unix but a lot less than it would suck to
be doing windows at home.
At the end of the day, and I see this now, an "inferior" OS that lets
you build infrastructure/services which get easy upgrades and doesn't
require you to do a migration every couple of years is far better than a
superior OS that gets to end of life, so you migrate to another good OS
that also gets to end of life (or whose direction changes into something
you don't want), etc.

I am not done migrating from VMS to OS-X and I already know that I'll
have to migrate from OS-X to Linux, so I try to avoid any Apple-specific
middleware on OS-X to ease the eventual migration to Linux.

VMS did last me since 1989 at home, so it had a very good run. My VAX
system disk dates from MicroVMS 4.7 on a MicroVAX II and evolved to 7.3
on various VAX platforms over the years.
Forster, Michael
2011-09-09 01:25:04 UTC
Permalink
MS Windows is just good enough for many purposes and is "supportable".
Post by JF Mezei
Post by Rich Jordan
Sucks to be on Unix but a lot less than it would suck to
be doing windows at home.
At the end of the day, and I see this now, an "inferior" OS that lets
you build infrastructure/services which get easy upgrades and don't
require you do a migration every couple of years is far better than a
superior OS that gets to end of life, and you migrate to another good OS
that also gets to end of life (or whose direction changes intosomething
you don't want) etc.
I am not done migrating from VMS to OS-X and I already know that I'll
have to migrate from OS-X to Linux, so I try to avoid any Apple specific
middleware on OS-X so that it will ease my migration to Linux.
VMS did last me since 1989 at home, so it had a very good run. My vax
system disk dates from micro VMS 4.7 on a microvax II and evolved to 7.3
on various VAX platforms over the years.
_______________________________________________
Info-vax mailing list
http://rbnsn.com/mailman/listinfo/info-vax_rbnsn.com
JF Mezei
2011-09-09 02:57:25 UTC
Permalink
Post by Forster, Michael
MS Windows is just good enough for many purposes and is "supportable".
One of the problems with Windows and now OS-X is that the desktop OS is
mature, but those companies need to keep adding bells and whistles to
sell new versions, and those may not steer the respective server versions
in the right direction, especially for Apple, which has exited the
business/enterprise markets.


Linux is far less influenced by the need for hype and marketing bells
and whistles, and if company A packages Linux with unwanted bells and
whistles, you can usually remove them, or simply get your Linux from
company B, which may package it for server purposes.


With regards to VMS, until Apotheker speaks about it and its future
(along with NSK, HP-UX and Itanium), we really don't know what HP's true
intentions are.
Paul Sture
2011-09-10 13:05:30 UTC
Permalink
Post by JF Mezei
Post by Forster, Michael
MS Windows is just good enough for many purposes and is "supportable".
One of the problems with Windows and now OS-X is that the desktop OS is
mature but those companies need to keep adding bells and whistles to
sell new versions and those may not steer the respective server versions
in the right direction especially for Apple which has exited the
business/enterprise markets.
The server editions don't suffer from the bells and whistles in the same
way. Well, maybe I should say that bells and whistles are there in
server terms (useful server stuff rather than eye candy), but those
services are switched off by default.

Windows Home Server 2011 hasn't had a great take-up, and is going cheap
at the moment (circa USD 50), but is apparently equivalent to
Windows Server Essentials, and comes with 10 client licenses. I believe
the lack of take-up is down to MS dropping Drive Extender (I think I
read somewhere that it got a rewrite from scratch, and that project got
canned).

http://en.wikipedia.org/wiki/Windows_Home_Server#Drive_Extender
Post by JF Mezei
Linux is far less influenced by the need for hype and marketing bells
and whistles, and if company A packages Linux with unwanted bells and
whistles, you can usually remove them, or simply get your linux from
company B which may package it for server purposes.
I'm not convinced that applies to desktop Linux. There has been a lot
of resistance to the later versions of Gnome, KDE and now Ubuntu's Unity.

Linus Torvalds has started using XFCE:

http://digitizor.com/2011/08/04/linus-torvalds-ditches-gnome-for-xfce/
--
Paul Sture
Rich Jordan
2011-09-09 14:47:03 UTC
Permalink
Post by Forster, Michael
MS Windows is just good enough for many purposes and is "supportable".
Not in my work experience. At least not supportable properly at a
reasonable cost in time and effort.

Plus there's this quality-of-life thing. It's bad enough having to
deal with broken, compromised, inconsistent, unreliable Windows boxes
at work. I'm not going to willingly do it at home, on my dime and my
time.

Unix is not a favored environment; I'm tired of the inconsistencies
between Linux distributions. MacOS is better, but as previously noted,
Apple has seriously dropped the ball on the server side, probably so they
can go full cloud and push their users the same way (sadly using
Microsoft cloud infrastructure... we'll see how that works for them).

Understand, I don't despise Windows 'just because', or because I'm a
Mac (or VMS) fanboy... I despise it after having to use and support it
(week)daily for the last 14 years (and use it since Windows 1.03).
Mine is a well-earned contempt.
Hans Vlems
2011-09-09 06:38:46 UTC
Permalink
Post by Rich Jordan
Due to work going on in the house (and updates to my server storage
tower that keep getting outprioritized) I've redirected the sites and
email going to the home server and taken it down for the last month.
It's a DS10L with internal disk and a 4-drive tower running 10K RPM SCSI
drives.  Console terminal is usually off.  Measured usage at full load
was around 220 watts for the Alpha and drives, around 200 idle.
Storage tower work was to replace the 3.5" drives with the little 2.5"
Savvio drives in a hotswap cage, which would reduce power usage about
8 watts per drive in testing.
My wife was ecstatic over our power bill; it was down over 40%.  I
don't see how removing that one continuous 220W load could have that
large of an impact (with A/C, full time blower, other computers left
on, appliances, etc), especially since where it is situated it can't
have a large impact on A/C runtime, but it did.  It wasn't reduced A/C
usage either since it was a hot month here.  Now she wants to know if
I really _really_ need to keep running  a server at home, or at least
one that hungry.
Since Communigate has been so slow at updating the VMS build, the
original reason for the server (followed by CSWS and PHP/Python play
and practice, which I can now also do at work) I have to admit it's a
fair question.  I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
in a less convivial work environment) and cut that power draw to 50-70
watts.  Or I could just drop it, play/practice on the test alpha at
work (in fact take mine in to work and use it that way, though it
wouldn't have public access or use except for brief tests) and drop
the commercial DSL link and save not only the power but close to $95/
month for the phone line and DSL (we also have cable internet that
doesn't support running servers).  Money that could go into other
hobbies that have been sadly neglected like the Mopar in my garage...
Not a fun choice.  I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started.  It won't feel right to not have it but I just don't get to
use it enough, and it is costing.
Not sure how your electricity bill is computed, but something seems
wrong with your figures.
The VMS system plus drives uses 220 W; that is 5.3 kWh every day, or
about 1900 kWh annually.
That's not extremely high: the average here is 2500 kWh per
household, and my home used 7300 kWh
last year. Which got me started on power saving experiments too!
Anyway, I'd expect that an A/C unit and other kitchen appliances use
considerably more.
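To spell out the arithmetic (using only the 220 W figure from the original post and the household averages above):

```python
# Sanity check: annual energy of a 220 W continuous load, compared
# with the household consumption figures quoted above.

SERVER_WATTS = 220

kwh_per_day = SERVER_WATTS * 24 / 1000    # ~5.3 kWh per day
kwh_per_year = kwh_per_day * 365          # ~1930 kWh per year

# Against the ~2500 kWh/yr average household mentioned above:
share = kwh_per_year / 2500

print(f"{kwh_per_day:.1f} kWh/day, {kwh_per_year:.0f} kWh/yr, "
      f"{share:.0%} of an average household here")
```

So the server alone is a large fraction of an average household's consumption here, but it still shouldn't be 40% of a bill that also covers A/C and a full-time blower.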
Hans
Rich Jordan
2011-09-09 14:35:33 UTC
Permalink
Post by Hans Vlems
Post by Rich Jordan
Due to work going on in the house (and updates to my server storage
tower that keep getting outprioritized) I've redirected the sites and
email going to the home server and taken it down for the last month.
It's a DS10L with internal disk and a 4-drive tower running 10K RPM SCSI
drives.  Console terminal is usually off.  Measured usage at full load
was around 220 watts for the Alpha and drives, around 200 idle.
Storage tower work was to replace the 3.5" drives with the little 2.5"
Savvio drives in a hotswap cage, which would reduce power usage about
8 watts per drive in testing.
My wife was ecstatic over our power bill; it was down over 40%.  I
don't see how removing that one continuous 220W load could have that
large of an impact (with A/C, full time blower, other computers left
on, appliances, etc), especially since where it is situated it can't
have a large impact on A/C runtime, but it did.  It wasn't reduced A/C
usage either since it was a hot month here.  Now she wants to know if
I really _really_ need to keep running  a server at home, or at least
one that hungry.
Since Communigate has been so slow at updating the VMS build, the
original reason for the server (followed by CSWS and PHP/Python play
and practice, which I can now also do at work) I have to admit it's a
fair question.  I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
in a less convivial work environment) and cut that power draw to 50-70
watts.  Or I could just drop it, play/practice on the test alpha at
work (in fact take mine in to work and use it that way, though it
wouldn't have public access or use except for brief tests) and drop
the commercial DSL link and save not only the power but close to $95/
month for the phone line and DSL (we also have cable internet that
doesn't support running servers).  Money that could go into other
hobbies that have been sadly neglected like the Mopar in my garage...
Not a fun choice.  I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started.  It won't feel right to not have it but I just don't get to
use it enough, and it is costing.
Not sure how your electricity bill is computed but something seems
wrong with your figures.
The VMS system plus drives uses 220 W; that is 5.3 kWh every day, or
about 1900 kWh annually.
That's not extremely high, the average here is 2500 kWhr per
household, and my home used 7300 kWhr
last year. Which got me started on power saving experiments too!
Anyway, I'd expect that an A/C unit and other kitchen appliances use
considerably more.
Hans
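Hans's back-of-the-envelope figure is easy to sanity-check. A quick
sketch (hypothetical, not from the thread; it assumes Rich's measured
220 W draw runs continuously, 24x7):

```python
# Sanity-check the server's energy use, assuming a constant 220 W
# draw running 24 hours a day (the figure Rich measured a while ago).
WATTS = 220.0

kwh_per_day = WATTS * 24 / 1000   # watt-hours -> kilowatt-hours
kwh_per_year = kwh_per_day * 365

print(f"{kwh_per_day:.2f} kWh/day")    # 5.28 kWh/day
print(f"{kwh_per_year:.0f} kWh/year")  # 1927 kWh/year
```

Running a full year comes out closer to 1900 kWh than 1850, but either
way it is only a modest slice of a typical household's consumption.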
Hans
yep; that's why I don't understand how we could get a 40% drop,
but nothing else changed. We had no extended power outages (there was
a 2 hour outage, but the nearly two day outage from a big storm was on
the previous bill). The day-to-day temps were slightly above average,
our automatic thermostat was not adjusted any differently, and we did
normal 'stuff' re: cooking, fridge use, etc. We even baked bread a
couple of times (gas oven but that does increase A/C usage). The only
'known' difference was the server shutdown.

We have done power saving changes, but all were implemented years ago:
CFL bulbs in noncritical areas (i.e., where I don't have to try and
read by them), and an automatic scheduling thermostat.  Heck, even
moving to the DS10L from the previous AS600 was a big jump; it cut
power usage in half.  But this one-month bill has us stumped.  Since
the work in the house is taking longer than anticipated, we'll get at
least one more bill without the server running; we'll see how that
compares.
Mazzini Alessandro
2011-09-09 18:35:33 UTC
Permalink
Have you tried to measure the wattage with a meter?
I still remember higher bills for a while, and some complaints about
the computers being responsible... only to find out that it was a
fridge that had started to consume way over its declared values.
I wonder if the PSU of the DS10L is the culprit, absorbing and wasting
way more than its declared max due to some fault...
Post by Hans Vlems
Post by Rich Jordan
(snip)
Rich Jordan
2011-09-09 20:01:01 UTC
Permalink
Post by Mazzini Alessandro
Have you tried to measure the wattage with a meter?
I still remember higher bills for a while, and some complaints about
the computers being responsible... only to find out that it was a
fridge that had started to consume way over its declared values.
I wonder if the PSU of the DS10L is the culprit, absorbing and wasting
way more than its declared max due to some fault...
Possible.  It has been a while since I had the server and drives
connected through the Kill A Watt meter; the 220W is a recorded
reading from a year or more ago.  I guess we could check and review
other constant or high-power-draw devices too (and though I don't have
something that can measure the A/C draw, I can have the thermostat
track its hours of operation, so that is something).  The fridge used
to be 160 watts running, but I don't know its hours per day of
operation any more.

My wife's computer does power control, but we haven't measured its
usage since she replaced it last year; nothing about it changed in the
last two months (that we know about).  I'll start correcting that this
weekend.
JF Mezei
2011-09-10 06:48:43 UTC
Permalink
Post by Mazzini Alessandro
Have you tried to measure the wattage with a meter?
I still remember higher bills for a while, and some complaints about
the computers being responsible... only to find out that it was a
fridge that had started to consume way over its declared values.
I wonder if the PSU of the DS10L is the culprit, absorbing and wasting
way more than its declared max due to some fault...
DS10Ls have a power-consuming resistor in the front whose sole purpose
is to consume power and generate heat.  Something about the PSU not
being reliable if the load isn't high enough.

I just purchased a power meter and will eventually plug the DS10L in
and reboot it to measure how much it consumes (especially when powered
off, because the PSU is still "on", as is the RMU circuitry).
glen herrmannsfeldt
2011-09-10 07:03:13 UTC
Permalink
JF Mezei <***@vaxination.ca> wrote:
(snip)
Post by JF Mezei
DS10Ls have a power-consuming resistor in the front whose sole purpose
is to consume power and generate heat.  Something about the PSU not
being reliable if the load isn't high enough.
The original IBM PC/AT had a resistor if you didn't have the
optional hard disk drive. (It fit on the drive power connector.)
The power supply had a minimum load.

I did once burn up a PC style (not IBM) power supply running
it without any load, so don't do that.

Does the DS10L need one even if you have disks?

-- glen
John Wallace
2011-09-10 08:54:55 UTC
Permalink
Post by glen herrmannsfeldt
(snip)
Post by JF Mezei
DS10Ls have a power-consuming resistor in the front whose sole purpose
is to consume power and generate heat.  Something about the PSU not
being reliable if the load isn't high enough.
The original IBM PC/AT had a resistor if you didn't have the
optional hard disk drive.  (It fit on the drive power connector.)
The power supply had a minimum load.
I did once burn up a PC style (not IBM) power supply running
it without any load, so don't do that.
Does the DS10L need one even if you have disks?
-- glen
"Minimum load" problems are common with switched-mode power supplies
(SMPS), and SMPS have been in favour for over two decades now.  A BA23
(MicroPDP/MicroVAX) box+PSU would not work right if there wasn't
enough load, e.g. if the only card in it was a KXT11 (Falcon) single
board computer with no mass storage.  That would be somewhat before
the original PC/AT.
Marc Schlensog
2011-09-11 18:14:35 UTC
Permalink
On Sat, 10 Sep 2011 02:48:43 -0400
Post by JF Mezei
(snip)
DS10Ls have a power-consuming resistor in the front whose sole purpose
is to consume power and generate heat.  Something about the PSU not
being reliable if the load isn't high enough.
I just purchased a power meter and will eventually plug the DS10L in
and reboot it to measure how much it consumes (especially when powered
off, because the PSU is still "on", as is the RMU circuitry).
My DS10L (or was it the DS10?) consumes a little more than 20W when
turned off (for comparison: my DS20E sucks up 9W with 2 PSUs and 13W
with 3 PSUs).

That's why I take them off the grid when they're not used.

Marc
Hans Vlems
2011-09-13 07:18:53 UTC
Permalink
Post by Marc Schlensog
On Sat, 10 Sep 2011 02:48:43 -0400
(snip)
My DS10L (or was it the DS10?) consumes a little more than 20W when
turned off (for comparison: my DS20E sucks up 9W with 2 PSUs and 13W
with 3 PSUs).
That's why I take them off the grid when they're not used.
Marc
Ah, I know that feeling!  Last year I bought a Brennenstuhl power
meter.  To my surprise a dual-processor AlphaServer 1200 draws 72W
when switched off.  The AS1200 has two power supplies, 36W each;
that's 6~8 times more than a DS20E.  All my AS1200s and Digital Server
5305s (the white-box variant) were always connected to mains power.  I
have 6 of them...  Now all systems are plugged into switched power
strips, and the daily energy budget dropped from nearly 20 kWh to 13
kWh.  One kilowatt-hour costs 23 eurocents here in the Netherlands, so
those power strips really were a good investment!
Hans
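Hans's power-strip savings can be worked out the same way. A sketch
using only his own figures (20 kWh/day before, 13 kWh/day after, 23
eurocents per kWh); the script itself is hypothetical:

```python
# Savings from switching idle machines off at the power strip,
# using Hans's numbers: daily usage dropped from ~20 kWh to 13 kWh.
KWH_BEFORE = 20.0
KWH_AFTER = 13.0
EUR_PER_KWH = 0.23   # 23 eurocents per kWh in the Netherlands

daily_saving = (KWH_BEFORE - KWH_AFTER) * EUR_PER_KWH
annual_saving = daily_saving * 365

print(f"EUR {daily_saving:.2f}/day")    # EUR 1.61/day
print(f"EUR {annual_saving:.0f}/year")  # EUR 588/year
```

Getting on for 600 euros a year does indeed make a few switched power
strips a good investment.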
Phillip Helbig---undress to reply
2011-09-21 20:00:28 UTC
Permalink
Post by JF Mezei
DS10Ls have a power-consuming resistor in the front whose sole purpose
is to consume power and generate heat.  Something about the PSU not
being reliable if the load isn't high enough.
Just the DS10L, right, not the DS10?
FrankS
2011-09-09 23:50:38 UTC
Permalink
Post by Rich Jordan
My wife was ecstatic over our power bill; it was down over 40%.  I
don't see how removing that one continuous 220W load could have that
large of an impact (with A/C, full time blower, other computers left
on, appliances, etc), especially since where it is situated it can't
have a large impact on A/C runtime, but it did.
Was the kWh usage down by 40%, or just the bottom-line cost?

Just looking at my power bill: for the last two-month period there
was a usage of 2750 kWh, or an average of 1375 kWh each month.  For me
to drop 40% of usage would mean a drop of 550 kWh per month.

If you had previously measured the power consumption of the DS10L at
220 W, then over one month of running 24x7 that would come out to
about 148 kWh (220 W * 24 h * 28 days / 1000).  As you can see, for my
house that would be just a bit over 10%.

You may have a much more efficient house than mine, or you're not
running as many things 24x7 as I do.  The point being that I'm
wondering if it's really a drop of 40% in usage.

If the bill just went down by 40% but the usage only went down 10%,
then maybe the power company gave you additional credit due to the
storm outages.  Or maybe they started rating you as residential
instead of commercial.  :)
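FrankS's estimate is easy to reproduce. A sketch using his figures (a
28-day month and his own 1375 kWh/month household average; the script
is illustrative, not from the thread):

```python
# Reproduce FrankS's check: what fraction of a 1375 kWh month does a
# 220 W machine running 24x7 account for?
SERVER_WATTS = 220
DAYS = 28               # FrankS's 24 * 7 * 4 approximation of a month
MONTHLY_KWH = 1375      # his household's monthly average

server_kwh = SERVER_WATTS * 24 * DAYS / 1000
share = server_kwh / MONTHLY_KWH

print(f"{server_kwh:.0f} kWh, {share:.1%} of the month")  # 148 kWh, 10.8%
```

So for a household like his, the server alone explains roughly a tenth
of the usage, which is why a 40% drop looks suspicious.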
Paul Sture
2011-09-10 12:49:51 UTC
Permalink
Post by FrankS
(snip)
I had a surprise in the 1980s when I moved to a much larger house. In
my previous abode, both the cooking and hot water were powered by
electricity, in the new one, both of those by gas, yet my electricity
bill shot up. What I realised was that I was using far more lighting,
and it all added up.
--
Paul Sture
Paul Sture
2011-09-10 13:37:18 UTC
Permalink
Post by Rich Jordan
Or I could just drop it, play/practice on the test alpha at
work (in fact take mine in to work and use it that way, though it
wouldn't have public access or use except for brief tests) and drop
the commercial DSL link and save not only the power but close to $95/
month for the phone line and DSL (we also have cable internet that
doesn't support running servers). Money that could go into other
hobbies that have been sadly neglected like the Mopar in my garage...
My history here:

In 2005 I took the opportunity of a house move to drop my phone line and
DSL and went to cable. Since I'd had a commercial grade DSL package I
saved something like $150 per month.

My cable service did allow me to run a server, but after happily
running a PWS600a for several years I got hit by a SYN attack, and
once I saw the cost of a commercial-grade firewall my thoughts turned
to using a hosting ISP instead.

I started out with a cheapish hosting package at first to see how I went
on and upgraded it to an SSH capable one a bit later. It's still less
than USD 25 a month for that. That package enables me to be a mini ISP
if I want - 100 MySQL databases, 100 domains, 100GB disk space. It's on
FreeBSD BTW.

The main advantage was that the noise and heat from my Alpha
disappeared, and it was starting to get flaky in hot weather anyway (no
aircon here). Having someone else manage uptime and placing it behind a
decent firewall was a bonus, but a real one - no longer do I see the
number of SQL injection attacks in my Apache logs as I used to.

However, I am now reviewing that setup with the advent of cheap VPS
solutions. With the initial learning curve out of the way I fancy
getting a bit more of the setup under my control.

If I were in your place, I'd be very tempted to just shove it all onto a
hosting ISP and spend more time on the Mopar. :-)
--
Paul Sture
ChrisQ
2011-09-10 17:53:52 UTC
Permalink
Post by Rich Jordan
(snip)
I have similar problems in terms of power consumption - energy gets
more outrageously expensive in the UK almost by the month.  I work
from home mainly and the lab machines all run 24x7.  One machine does
double duty as server and s/w dev workstation.  A second machine runs
Windows 2000 (really) for legacy s/w and other tools.  A third runs XP
for office and media applications.  Yet another is an experimental box
to evaluate various versions of Linux and open source tools for
embedded work.  Three machines are 24/7, while the W2k box only gets
powered up as needed.

For some years, since RIP Alpha, the server machine has been Sun SPARC
because, apart from yearly maintenance, they are fit-and-forget.
Also, not being Intel arch, they are immune to any Wintel-style
viruses that have occasionally caused problems on the Windows
machines.  The other advantage of Solaris is the ZFS filesystem, which
is otherwise only available on FreeBSD and is not yet in-kernel for
Linux (the FUSE port is in userland).  I have used Volume Manager for
years (metadb etc.), but ZFS is so much more flexible and easier to
set up that it is the only way for the future.

The Blade 1000 draws around 200 watts and the associated FC array
about another 200.  The other two machines are both ProLiant ML350
G4s, which draw around 150 watts each with a single CPU and are very
quiet compared to 2U servers such as the DL380 series.  With lights,
test gear and lab standards, the total is around 1000 watts
continuous.  Probably pretty small beer compared to business premises,
but still a significant monthly expense.

Don't see any easy way round this, at least affordably...

Regards,

Chris
JF Mezei
2011-09-10 18:51:18 UTC
Permalink
One of my problems is my workstation.  I have X windows coming in from
the server and the Alpha.  If I put the workstation to sleep to save
power, it will lose those X windows because the workstation won't do
the TCP keepalives.

I just added a graphics card to drive HDMI cables to my new TV.  This
card alone can consume up to 150W!  So while CPUs have gotten far more
efficient in power consumption, it appears that graphics cards have
gone the other direction and are now the major consumers of power.

(This graphics card draws too much power to get it from the PCI
Express bus, so it has its own power cable that plugs into a
motherboard power connector.)
MG
2011-09-10 19:37:59 UTC
Permalink
Post by JF Mezei
I just added a graphics card to drive HDMI cables to my new TV.  This
card alone can consume up to 150W!  So while CPUs have gotten far more
efficient in power consumption, it appears that graphics cards have
gone the other direction and are now the major consumers of power.
(This graphics card draws too much power to get it from the PCI
Express bus, so it has its own power cable that plugs into a
motherboard power connector.)
That certainly seems to be the 'trend' nowadays, especially with CUDA,
OpenCL and so on (not surprisingly).  The role of the "GPU" seems to
become more and more disproportionate, especially when it's used for
things like decryption.

Out of curiosity, which brand and model is your graphics card? The
AMD/ATi Radeon HD5850 (PCI-E) I once installed into an Intel i5 PC
system also has its own power cable.

A bit off-topic: not long ago I found a Radeon HD5450 (a less common
PCI model, normally a PCI-E card) and tried it in one of my rx2620s
and rx2600s. To my amazement, it worked! It gave me a snappy frame-
buffer under Win. XP V2003 IA-64 and under VMS I64 V8.4, I even got a
'glass terminal' out of it in my rx2620, which I found surprising!
(No DECwindows though, or not with the limited tries, as expected.)
I also tried it under Linux IA-64, but there I didn't get much out of
it, strangely enough. (Most of the issues were because of x86/-64
specific kernel code, particularly for memory access/addressing.)

- MG
ChrisQ
2011-09-10 21:16:50 UTC
Permalink
Post by MG
(snip)
I think there's a lot of nonsense talked about graphics cards.  I
don't do gaming or 3D apps on any of these machines, and for general X
window work, good 2D performance is the thing that matters.  The old
DEC graphics cards used to be benchmarked in something called
"Xmarks", and the PowerStorm 3D30 was top of the list for 2D, over far
higher-specced 3D-capable cards.  The old Alpha machine had a 4D50
initially, IIRC: a complex full-length card with loads of heatsink
etc., but the lower-consumption 3D30 that replaced it was quite a bit
faster for normal desktop work.

The current machines here have pretty vanilla cards - the Blade has a
low-end Sun "3D Lite" card, while the others have a mixture of
yesterday's-model half-length stuff like the Nvidia 280 and ATI FireMV
2250.  None get more than a bit warm in operation.

Someone else mentioned HP-UX, but why not Solaris?  You could consider
it a more robust and grown-up version of Linux, and it is free to
download and use...

Regards,

Chris
Bob Eager
2011-09-10 22:55:55 UTC
Permalink
Post by ChrisQ
Someone else mentioned HP-UX, but why not Solaris?  You could consider
it a more robust and grown-up version of Linux, and it is free to
download and use...
I thought that had all gone, with the Oracle takeover...
--
Use the BIG mirror service in the UK:
http://www.mirrorservice.org

*lightning protection* - a w_tom conductor
Paul Sture
2011-09-11 09:52:36 UTC
Permalink
Post by Bob Eager
(snip)
I thought that had all gone, with the Oracle takeover...
What was OpenSolaris has been moved to OpenIndiana.

I briefly tried both, but I don't think they like my graphics chip (an
onboard shared memory thing), as the display was unacceptably slow.

http://en.wikipedia.org/wiki/OpenIndiana
http://openindiana.org/
--
Paul Sture
Bob Eager
2011-09-11 11:54:12 UTC
Permalink
Post by Paul Sture
(snip)
What was OpenSolaris has been moved to OpenIndiana.
I briefly tried both, but I don't think they like my graphics chip (an
onboard shared memory thing), as the display was unacceptably slow.
http://en.wikipedia.org/wiki/OpenIndiana http://openindiana.org/
That's interesting to know - thanks. I did try OpenSolaris at one point,
but only briefly - my roots precede BSD, with v6!
--
Use the BIG mirror service in the UK:
http://www.mirrorservice.org

*lightning protection* - a w_tom conductor
ChrisQ
2011-09-12 21:47:13 UTC
Permalink
Post by Bob Eager
(snip)
That's interesting to know - thanks. I did try OpenSolaris at one point,
but only briefly - my roots precede BSD, with v6!
Just to make the point that OpenSolaris is a separate OS development
from the in-house Sun effort.  I did have a look at OpenSolaris, but
it's not, IMHO, quite ready for production use.  What was Sun and is
now Oracle Solaris 10 is an excellent product that just works, and of
course has ZFS in-kernel.  Both are available for free download, and
my only regret is that the login and other screens now have the Oracle
logo.  At least it is now with a company with enough cash to ensure
its survival and develop it further.

Sorry about the shameless plug, but I have been using Sun kit since
the early nineties, in parallel with PDP-11 through Alpha.  I would
still have a DS20 or similar in the rack today if it weren't for the
fact that it's so difficult to get later software running on it.  The
performance is more than adequate, even today...

Regards,

Chris
JF Mezei
2011-09-11 01:06:42 UTC
Permalink
Post by ChrisQ
I think there's a lot of nonsense talked about graphics cards.
To me, it is about the video memory.  Old cards can't support
1920x1080 displays at high frame rates because they lack sufficient
on-board memory.

I don't really play games, so the highest-performing card isn't needed
by me (especially since some of the high-end cards cost upwards of
$2000!)

Aside from gaming, those cards are of great help to architects and
designers who want to render their designs because much of the work is
done by the GPU.

There is a rumour that the GPU/graphics card may be moving to the
displays and use Thunderbolt as interconnect (this is basically
PCI-Express on a cable).

I'd like Apple's "Activity Monitor" (think "MONITOR" in VMS terms) to
include GPU activity as well as CPU activity.
John Wallace
2011-09-11 11:58:12 UTC
Permalink
Post by JF Mezei
Post by ChrisQ
I think there's a lot of nonsense talked about graphics cards.
To me, it is about the video memory. Old cards can't support 1920*1080
displays at high frame rates because they lack sufficient on board memory.
I don't really play games so the highest performing card isn't needed by
me (especially since some of the high end card cost upwards of $2000 !!!!)
Aside from gaming, those cards are of great help to architects and
designers who want to render their designs because much of the work is
done by the GPU.
There is a rumour that the GPU/graphics card may be moving to the
displays and use Thunderbolt as interconnect (this is basically
PCI-Express on a cable).
I'd like Apple's "Activity Monitor" (think "MONITOR" in VMS terms) to
include GPU activity as well as CPU activity.
I'm struggling to relate that set of requirements (~2K x 1K pixels or
better, decent refresh rate, no interest in gaming) with the need for
a 150-watt graphics card.  But then I've not kept up to date with the
rarefied world of high-end PC graphics.

I popped over to my default PC bits website and on the front page for
graphics cards was an ATI Radeon 5450 PCI-e card with 512MB of DDR2.
HDMI output at up to 1920x1200 (VGA at up to 2k x 1k5). I didn't
immediately see the refresh rate but I'd guess 70Hz or better?

All this for £23 (similar product at similar prices elsewhere).
Perhaps more importantly, fanless at only 20 watts.

What's the massive difference between the video electronics in a
consumer HDTV device such as a HD receiver or Blu-Ray player (which
are clearly dirt cheap and probably not fan cooled) and the video
electronics in a non-3D-focused PC graphics card ?

In particular what leads anybody to think 150 watts is appropriate let
alone necessary if there's no interest in dozens of shaders and the
like, when all it has to do is display a largely pre-rendered memory
image from the HD receiver or Blu-Ray player?

http://www.ebuyer.com/201166-asus-hd-5450-silent-512mb-ddr2-dvi-hdmi-vga-out-pci-e-low-eah5450-silent-di-512md2-lp-
http://uk.asus.com/Graphics_Cards/AMD_Series/EAH5450_SILENTDI512MD2LP/#specifications
http://www.amd.com/us/products/desktop/graphics/ati-radeon-hd-5000/hd-5450-overview/Pages/hd-5450-overview.aspx
JF Mezei
2011-09-11 18:31:33 UTC
Permalink
Post by John Wallace
I popped over to my default PC bits website and on the front page for
graphics cards was an ATI Radeon 5450 PCI-e card with 512MB of DDR2.
HDMI output at up to 1920x1200 (VGA at up to 2k x 1k5). I didn't
immediately see the refresh rate but I'd guess 70Hz or better?
The one I bought was a 5770. It can drive 3 displays. And there is an
Apple-specific version for Macs. Yeah, not all boards with the same
model number are the same :-(

Not sure it consumes 150 watts just to display raster images. It
probably does when doing live rendering of 3D scenes.

One has to be careful about the cheap cards. I would have to do the math
to see if it really does have enough on-board memory for a 1080p image
in 32-bit colour or whether it would rely on system RAM (slower).
Post by John Wallace
What's the massive difference between the video electronics in a
consumer HDTV device such as a HD receiver or Blu-Ray player (which
are clearly dirt cheap and probably not fan cooled) and the video
electronics in a non-3D-focused PC graphics card ?
Yep, single-purpose electronics. However, if you look at a PlayStation
or Xbox, it also acts as a Blu-ray player and they have very fancy
graphics cards.

In my case, it was a bit of future-proofing my old Mac, and buying a
model sanctioned by Apple to avoid software incompatibilities.

BTW, it seems to run at 60 Hz (even though the TV runs at 240)
John Wallace
2011-09-11 21:51:31 UTC
Permalink
Post by John Wallace
I popped over to my default PC bits website and on the front page for
graphics cards was an ATI Radeon 5450 PCI-e card with 512MB of DDR2.
HDMI output at up to 1920x1200 (VGA at up to 2k x 1k5). I didn't
immediately see the refresh rate but I'd guess 70Hz or better?
The one I bought was a 5770. It can drive 3 displays. And there is an
Apple-specific version for Macs. Yeah, not all boards with the same
model number are the same :-(
Not sure it consumes 150 watts just to display raster images. It
probably does when doing live rendering of 3D scenes.
One has to be careful about the cheap cards. I would have to do the math
to see if it really does have enough on-board memory for a 1080p image
in 32-bit colour or whether it would rely on system RAM (slower).
Post by John Wallace
What's the massive difference between the video electronics in a
consumer HDTV device such as a HD receiver or Blu-Ray player (which
are clearly dirt cheap and probably not fan cooled) and the video
electronics in a non-3D-focused PC graphics card ?
Yep, single-purpose electronics. However, if you look at a PlayStation
or Xbox, it also acts as a Blu-ray player and they have very fancy
graphics cards.
In my case, it was a bit of future-proofing my old Mac, and buying a
model sanctioned by Apple to avoid software incompatibilities.
BTW, it seems to run at 60 Hz (even though the TV runs at 240)
OK here's the math (first approximation).

2k x 1k5 pixels is 3 Mpixels; at 32 bits (4 bytes) per pixel, that
is 12 Mbytes.

Matrox used to do a quad display Millennium with 32 MByte per display.
That would be in the 1990s, I think.

So there's no shortage of pixel memory in any sensible recent card,
for a very generous definition of recent.
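The arithmetic above can be sketched in a few lines (the helper name is mine; the resolutions come from the posts):

```python
# Framebuffer size needed for one full frame at a given resolution.
# Pure arithmetic, no external libraries required.

def framebuffer_bytes(width, height, bits_per_pixel=32):
    """Bytes needed to hold one frame at the given depth."""
    return width * height * bits_per_pixel // 8

# 2k x 1k5 (2048 x 1536) at 32-bit colour:
print(framebuffer_bytes(2048, 1536) / 2**20)  # -> 12.0 MiB

# 1080p (1920 x 1080) at 32-bit colour:
print(framebuffer_bytes(1920, 1080) / 2**20)  # -> 7.91015625 MiB, so ~8 MiB
```

So even a 512 MB card has pixel memory to spare for plain 2D desktop work; the rest is there for textures and 3D.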

Modern graphics cards use lots of memory (and processing power) for
the 3D stuff, which is generally irrelevant to desktop apps, BluRay,
HDTV, etc.

Desktop apps, HDTV, BluRay, etc can be done perfectly adequately on a
modern ARM with suitable video silicon - indeed there are already ARM-
powered mobile phones with HDMI outputs. I bet they don't use many
watts. Motorola's (sorry, Google's) Droid 3 phone has a dual core 1GHz
ARM of some flavour, does 1080p capture and has an HDMI output which I
*assume* also does 1080p. I don't like assuming but the answer wasn't
immediately obvious.

There's a selection of SIMH/VAXs for Android too. I'd love to know how
fast they'd go on something like a decent ARM/Android phone. I do have
an Android phone but it's only a humble ZTE Blade and it mostly lives
in a cupboard (I reverted to Symbian for various reasons). But who
needs an Alphabook these days...
ChrisQ
2011-09-12 22:00:51 UTC
Permalink
Post by JF Mezei
Post by ChrisQ
I think there's a lot of nonsense talked about graphics cards.
To me, it is about the video memory. Old cards can't support 1920*1080
displays at high frame rates because they lack sufficient on board memory.
I don't really play games so the highest performing card isn't needed by
me (especially since some of the high end cards cost upwards of $2000 !!!!)
The better graphics cards are always expensive, even s/h on eBay. I
guess because gamers must have the very best - quite competitive in an
adolescent, consumer-electronics sort of way.

I do prefer a traditional screen layout to widescreen and it makes things
easier if you are using a single monitor and KVM switch. LCD monitors often
have a much more limited sync range compared to some of the older tube
monitors, so it's easier if you can standardise on a single set of values
across all machines. Currently, all the machines here are running 1600x1200
at 60Hz, which seems good enough.

Progress is amazing though. It's not that long ago that I thought 1024 x 768
was the last word in clarity and resolution :-)...

Regards,

Chris
Single Stage to Orbit
2011-09-12 23:05:22 UTC
Permalink
Post by ChrisQ
Progress is amazing though. It's not that long ago that I thought 1024
x 768 was the last word in clarity and resolution :-)...
I was already griping about how limiting 640x350 displays were back in
the late 1990s ;)
--
Tactical Nuclear Kittens
Paul Sture
2011-09-13 11:32:28 UTC
Permalink
Post by ChrisQ
I do prefer a traditional screen layout to widescreen and it makes it easier
if you are using a single monitor and kvm switch. The lcd monitors often
have a much more limited sync range compared to some of the older tube
monitors, so it's easier if you can standardise on a single set of values
across all machines. Currently, all the machines here are running 1600x1200
x 60Hz, which seems good enough.
In contrast now that I've moved to using a wide screen I find I prefer
one. It did feel odd at first but I soon got used to it.
Post by ChrisQ
Progress is amazing though. It's not that long ago that I thought 1024 x 768
was the last word in clarity and resolution :-)...
I find it less tiring to read large chunks of text on today's monitors,
with the result that I'm printing less than I used to.
--
Paul Sture
John Wallace
2011-09-10 19:30:20 UTC
Permalink
Post by ChrisQ
Post by Rich Jordan
Due to work going on in the house (and updates to my server storage
tower that keep getting outprioritized) I've redirected the sites and
email going to the home server and taken it down for the last month.
Its a DS10L with internal disk and a 4 drive tower running 10KRPM SCSI
drives.  Console terminal is usually off.  Measured usage at full load
was around 220 watts for the Alpha and drives, around 200 idle.
Storage tower work was to replace the 3.5" drives with the little 2.5"
Savvio drives in a hotswap cage, which would reduce power usage about
8 watts per drive in testing.
My wife was ecstatic over our power bill; it was down over 40%.  I
don't see how removing that one continuous 220W load could have that
large of an impact (with A/C, full time blower, other computers left
on, appliances, etc), especially since where it is situated it can't
have a large impact on A/C runtime, but it did.  It wasn't reduced A/C
usage either since it was a hot month here.  Now she wants to know if
I really _really_ need to keep running  a server at home, or at least
one that hungry.
Since Communigate has been so slow at updating the VMS build, the
original reason for the server (followed by CSWS and PHP/Python play
and practice, which I can now also do at work) I have to admit its a
fair question.  I could replace the Alpha with a Mac Mini or some low
power draw PC running Linux, be able to run current software (albeit
in a less convivial work environment) and cut that power draw to 50-70
watts.  Or I could just drop it, play/practice on the test alpha at
work (in fact take mine in to work and use it that way, though it
wouldn't have public access or use except for brief tests) and drop
the commercial DSL link and save not only the power but close to $95/
month for the phone line and DSL (we also have cable internet that
doesn't support running servers).  Money that could go into other
hobbies that have been sadly neglected like the Mopar in my garage...
Not a fun choice.  I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started.  It won't feel right to not have it but I just don't get to
use it enough, and it is costing.
I have similar problems in terms of power consumption - energy gets
more outrageously expensive in the UK almost by the month. I work from
home mainly and the lab machines all run 24x7. One machine does double
duty as server and s/w dev workstation. A second machine runs Windows
2000 (really) for legacy s/w and other tools. A third runs XP for office
and media applications. Yet another is an experimental box to evaluate
various versions of Linux and open source tools for embedded work. Three
machines are 24/7, while the W2k box only gets powered up as needed.
For some years, since the demise of Alpha, the server machine has been
Sun SPARC because, apart from yearly maintenance, they are fit and
forget. Also, not being Intel architecture, they are immune from any
Wintel-style viruses that have occasionally caused problems on the
Windows machines. The other advantage of Solaris is the ZFS filesystem,
which is otherwise only available in-kernel on FreeBSD (the Linux port
runs in userland via FUSE). Have used volume manager for years (metadb
etc.), but ZFS is so much more flexible and easier to set up, it is the
only way for the future.
The Blade 1000 draws around 200 watts and the associated FC array about
another 200. The other two machines are both ProLiant ML350 G4s, which
draw around 150 watts each with a single CPU and are very quiet compared
to 2U servers such as the DL380 series. With lights, test gear and lab
standards, the total is around 1000 watts continuous. Probably pretty
small beer compared to business premises, but still a significant monthly
expense.
Don't see any easy way round this, at least affordably...
Regards,
Chris
Are you familiar with virtualisation, at least as the x86 stuff goes?
Have you tried it and rejected it, in which case please accept my
apologies?

I've found VMware Player (zero cost) hosted under Windows satisfactory
most of the time, both at work and at home, with both Linux (Suse) and
Windows as guests. It's also available hosted under Linux, but I've
not yet used that (there are more obvious candidates). Not sure about
bsd as guest under VMware but it's trivial to try. With the Linux
guest under Windows host there is the odd occasional irritant (losing
my left mouse button, for example). I've also got a limited bit of
experience of VMware ESX (again, zero cost version) and really am
puzzled by some of the unexpected and unpleasant observed behaviours
(ridiculously slow IO, no explanation?) and consequently won't
recommend that. YMMV.

Are you ever likely to need to max out more than one of the x86 boxes
at any one time? If not, why not pick one box to keep, park the rest,
and try virtualising? Lots of preconfigured guest systems are freely
downloadable as ISOs or whatever from the VMware appliance marketplace
at www.vmware.com/appliances. Pick one close to your needs and take it
from there. Re-install over the guest just like you would on a real
machine if you don't like it. Or pick something small like Damn Small
Linux, have a bit of a play till you know what's what, and then re-
install anyway.

I never thought I'd be seeing *me* suggest using a HYPErvisor, but in
this case it may be worth a look.

Have a lot of fun (as they say in SuSe).
ChrisQ
2011-09-10 21:49:36 UTC
Permalink
Post by John Wallace
Are you familiar with virtualisation, at least as the x86 stuff goes?
Have you tried it and rejected it, in which case please accept my
apologies?
I've found VMware Player (zero cost) hosted under Windows satisfactory
most of the time, both at work and at home, with both Linux (Suse) and
Windows as guests. It's also available hosted under Linux, but I've
not yet used that (there are more obvious candidates). Not sure about
bsd as guest under VMware but it's trivial to try. With the Linux
guest under Windows host there is the odd occasional irritant (losing
my left mouse button, for example). I've also got a limited bit of
experience of VMware ESX (again, zero cost version) and really am
puzzled by some of the unexpected and unpleasant observed behaviours
(ridiculously slow IO, no explanation?) and consequently won't
recommend that. YMMV.
Are you ever likely to need to max out more than one of the x86 boxes
at any one time? If not, why not pick one box to keep, park the rest,
and try virtualising? Lots of preconfigured guest systems are freely
downloadable as ISOs or whatever from the VMware appliance marketplace
at www.vmware.com/appliances. Pick one close to your needs and take it
from there. Re-install over the guest just like you would on a real
machine if you don't like it. Or pick something small like Damn Small
Linux, have a bit of a play till you know what's what, and then re-
install anyway.
I never thought I'd be seeing *me* suggest using a HYPErvisor, but in
this case it may be worth a look.
It's something to consider, but I do like to keep s/w dev functionality
separate from normal office functions, for all kinds of reasons. Nagging
doubts about data security means that I would probably not bring myself
to depend on any x86 machine for critical work, so the choices would be
limited.

Something to have a play with in future though, as energy gets ever more
expensive. One alternative might be to build a CHP (combined heat and
power) plant running on gas, which is still a fraction of the cost of
electricity in kWh-equivalent terms.
Post by John Wallace
Have a lot of fun (as they say in SuSe).
Strange you should mention Suse, as that's the distro that I settled on
for the s/w dev machine. Have evaluated 4 distros in the past few months:
Fedora, Debian, Suse and Ubuntu. Debian was the initial choice, but it is
a bit hair shirt in terms of getting a GNU build environment in place and
some of the packages are out of date, though it's clean and robust.
Ubuntu irritated from the start of the install, with too much decoration,
garish colours and not enough control over the install process. More like
a Windows substitute. Suse also has a lot of decoration by default, but
you have all the choices at install time. Suse also seems to be the most
up to date in terms of, for example, the special packages needed to build
gcc and associated tools. I was able to get the latest gcc (4.6.1) to
build with barely a warning, something I've not been able to do on other
Linux distros. Desktop-wise, I prefer Debian out of the box; they all use
GNOME now anyway, so it's just a setup issue. Still need to have a good
look at FreeBSD, as it does have in-kernel ZFS, a big plus.

Some good stuff out there these days, enough to keep busy for a lifetime...

Regards,

Chris
Paul Sture
2011-09-22 12:31:18 UTC
Permalink
Post by ChrisQ
Post by John Wallace
Are you familiar with virtualisation, at least as the x86 stuff goes?
Have you tried it and rejected it, in which case please accept my
apologies?
I've found VMware Player (zero cost) hosted under Windows satisfactory
most of the time, both at work and at home, with both Linux (Suse) and
Windows as guests. It's also available hosted under Linux, but I've
not yet used that (there are more obvious candidates). Not sure about
bsd as guest under VMware but it's trivial to try. With the Linux
guest under Windows host there is the odd occasional irritant (losing
my left mouse button, for example). I've also got a limited bit of
experience of VMware ESX (again, zero cost version) and really am
puzzled by some of the unexpected and unpleasant observed behaviours
(ridiculously slow IO, no explanation?) and consequently won't
recommend that. YMMV.
Are you ever likely to need to max out more than one of the x86 boxes
at any one time? If not, why not pick one box to keep, park the rest,
and try virtualising? Lots of preconfigured guest systems are freely
downloadable as ISOs or whatever from the VMware appliance marketplace
at www.vmware.com/appliances. Pick one close to your needs and take it
from there. Re-install over the guest just like you would on a real
machine if you don't like it. Or pick something small like Damn Small
Linux, have a bit of a play till you know what's what, and then re-
install anyway.
I never thought I'd be seeing *me* suggest using a HYPErvisor, but in
this case it may be worth a look.
It's something to consider, but I do like to keep s/w dev functionality
separate from normal office functions, for all kinds of reasons. Nagging
doubts about data security means that I would probably not bring myself
to depend on any x86 machine for critical work, so the choices would be
limited.
Something to have a play with in future though, as energy gets ever more
expensive. One alternative might be to build a chp plant running on gas,
which is still a fraction of the cost of electricity, in kwh equivalent.
That does look interesting. When I was in the UK a neighbour in the pet
food trade who ran a lot of freezers looked at every way he could to
save on his electricity bill, but simply couldn't beat the National Grid
prices by generating his own. Mind you, that was back in the
1980s/1990s, when electricity prices were more stable.
Post by ChrisQ
Post by John Wallace
Have a lot of fun (as they say in SuSe).
Strange you should mention suse, as that's the distro that I settled
on for the s/w dev machine. Have evaluated 4 distros in the past few
months, Fedora, Debian, Suse and Ubuntu. Debian was the initial
choice, but it is a bit hair shirt in terms of getting a gnu build
environment in place and some of the packages are out of date, though
it's clean and robust.
Debian also offers a fairly minimum LAMP configuration out of the box
and you have to do more work on it than other distributions to get your
favourite CMS up and running.
Post by ChrisQ
Ubuntu irritated from the start of install, with too much decoration,
garish colours and not enough control over the install process. More
like a windows substitute.
When trying to dual boot with Windows, the standard Ubuntu install is
geared to systems with one disk. It will try to shrink your current
Windows partition and put itself in the newly freed-up space. It takes
some determination to put it on a second disk _and_ you have to fiddle
with the boot environment yourself.

I agree on the garish screens. It took me way too long to discover how
to change the login screen background, and this was not helped by the
way the setup here appears to change from release to release, rendering
many of the answers that Google comes up with useless.

The trick with Ubuntu is to get the Alternate Installation DVD, which is
hidden away on their website. You have to know it exists and then try to
find it. This allows everything from a simple install to "Expert mode",
but beware that in Expert Mode, you can face a lot of questions, and in
the case of mirror names, it offers no defaults to select from.
Post by ChrisQ
Suse also has a lot of decoration, by default, but you have all the
choices at install time. Suse also seems to be the most up to date in
terms of, for example, special packages needed to build gcc and
associated tools. I was able to get the latest (4.61) gcc to build
with barely a warning, something I've not been able to do on other
Linux distros. Desktop wise, I prefer Debian out of the box, they all
use gnome now anyway, so it's just a setup issue. Still need to have
a good look at freebsd, as it does have kernel zfs, a big plus.
I too like openSUSE, though the (default) KDE desktop is so packed with
animations and other cute stuff that I zapped it straight away and went
for Gnome instead.

I would recommend having a look at virtual solutions such as VirtualBox.
This makes it really easy to install the whole gamut of Linux or *nix
systems on one box and you don't need to muck around with dual or triple
boot issues. GRUB2 may be fine for professional system builders but
it's a real pain in the neck when you are stuck at a boot prompt and
need to move to another system to learn how to use it. Again, do this
in a virtual machine, and you don't have this worry.
--
Paul Sture
JF Mezei
2011-09-22 18:00:21 UTC
Permalink
Post by Paul Sture
That does look interesting. WHen I was in the UK a neighbour in the pet
food trade who ran a lot of freezers looked at every way he could to
save on his electricity bill, but simply couldn't beat the National Grid
prices by generating his own. Mind you, that was back in the
1980s/1990s, when electricity prices were more stable.
When Apple unveiled the flying-saucer-like plans for its new campus in
Cupertino, it included a large gas-powered electric generating plant.
Jobs argued in front of the Cupertino city council that not only would
this reduce the load on the local grid, but it would also be much
cleaner energy than the mostly coal-fired plants that feed the area.

Shows a pragmatic Apple. I would have expected solar panels and a wind farm.
Paul Sture
2011-09-11 12:33:43 UTC
Permalink
In article
Post by John Wallace
Are you familiar with virtualisation, at least as the x86 stuff goes?
Have you tried it and rejected it, in which case please accept my
apologies?
I've found VMware Player (zero cost) hosted under Windows satisfactory
most of the time, both at work and at home, with both Linux (Suse) and
Windows as guests. It's also available hosted under Linux, but I've
not yet used that (there are more obvious candidates). Not sure about
bsd as guest under VMware but it's trivial to try. With the Linux
guest under Windows host there is the odd occasional irritant (losing
my left mouse button, for example). I've also got a limited bit of
experience of VMware ESX (again, zero cost version) and really am
puzzled by some of the unexpected and unpleasant observed behaviours
(ridiculously slow IO, no explanation?) and consequently won't
recommend that. YMMV.
I have tried both VMware Player and VMware Workstation on both Windows
and Ubuntu hosts. Both resulted in the disk activity light on solid for
prolonged periods of time, and monitoring file activity on the host
revealed that VMware was hammering the pagefiles it creates for the
client. This is a "feature" of VMware; it can support more client
memory than you have RAM, but with today's RAM capacities and prices, I'd
rather throw more RAM at the problem.

From:

http://blogs.vmware.com/vmtn/2007/08/top-10-things-y.html

"Top 10 things you can do with VMware Fusion and your Mac

...

Reduce, reuse, recycle... your RAM. VMware pioneered memory page file
sharing. So running a VM in VMware Fusion takes up much less of your
Mac's memory than other virtualization products. And it gets better the
more VMs you're running at once. Five Windows XP virtual machines at a
time doesn't mean 5x the memory of a single XP virtual machine. By
sharing the sections of memory that are common between the VMs (like
with common OSs) you can "over commit" memory."

I did come across a suggestion that you can turn VMware's paging off,
but I didn't find that until my 30 day trial of VMware Workstation had
expired.

Using Ubuntu as the host had me going back to Windows 7 as a host. A
combination of I/O and the scheduling system wasn't up to the job. My
trial period clock was ticking at that point so I'll admit I took the
easy way out.

I have since been using VirtualBox (the PUEL version from Oracle rather
than the open source edition; PUEL stands for Personal Use and
Evaluation License).

http://www.virtualbox.org/wiki/VirtualBox_PUEL

Unlike VMware products I have to stick within physical RAM, but with
4GB I can run 3 or 4 guests* under Windows 7 (64-bit FWIW). The end
result is satisfactory performance for the most part. Intensive I/O
does kill it though, so I try to avoid using the clients while a host
backup is running, for example.

I don't see any of the irritations with losing mouse buttons etc that
you mention.

When I started with VirtualBox a year or so ago you had to install the
Guest Additions package on the client to enable copy and paste, folder
sharing and mouse movement outside the client window. At some point
since then, certain Linux distributions recognize the host as VirtualBox
at installation time and do this for you.

(* Windows Server 2008 runs like a dog with 512 MB RAM allocated, but
nicely with 800 MB. Likewise, Debian Server ran nicely with 512 MB
until I loaded Drupal, whereupon it turned into a dog; raising it to 800
MB solved that problem.)
Post by John Wallace
Are you ever likely to need to max out more than one of the x86 boxes
at any one time? If not, why not pick one box to keep, park the rest,
and try virtualising? Lots of preconfigured guest systems are freely
downloadable as ISOs or whatever from the VMware appliance marketplace
at www.vmware.com/appliances. Pick one close to your needs and take it
from there. Re-install over the guest just like you would on a real
machine if you don't like it. Or pick something small like Damn Small
Linux, have a bit of a play till you know what's what, and then re-
install anyway.
I haven't used any off the shelf appliances, but it reminds me of
another niggle with the VMware products. They assist during the
installation of a client by filling in some of the dialogue for you, but
sometimes this is "too helpful". The scripting for that is done in
Perl, so unless you are already familiar with it, you have another
learning curve.

The VMware products also neatly eject the CD/DVD image after an
installation for you, but in the case of Debian, which wants to access
the installation media to install extra products, it's a devil of a job
getting access to it back. You really need to burn a physical CD or
DVD. VirtualBox has an advantage here in that you can change the CD
media device on the fly to point to an image file or physical device.
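That on-the-fly swap is scriptable with VBoxManage; a sketch, where the VM name, controller name, port/device numbers and paths are all placeholder assumptions for a typical setup:

```shell
# Point the guest's virtual DVD drive at an ISO image
# ("DebianVM" and "IDE Controller" are example names):
VBoxManage storageattach "DebianVM" --storagectl "IDE Controller" \
    --port 1 --device 0 --type dvddrive --medium /path/to/debian.iso

# Or pass through the host's physical optical drive:
VBoxManage storageattach "DebianVM" --storagectl "IDE Controller" \
    --port 1 --device 0 --type dvddrive --medium host:/dev/sr0

# "Eject" by attaching an empty virtual drive:
VBoxManage storageattach "DebianVM" --storagectl "IDE Controller" \
    --port 1 --device 0 --type dvddrive --medium emptydrive
```

Handy for Debian's habit of wanting the installation media back after the install.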
Post by John Wallace
I never thought I'd be seeing *me* suggest using a HYPErvisor, but in
this case it may be worth a look.
Have a lot of fun (as they say in SuSe).
My latest trial is with openSUSE (they keep changing the capitalisation
of the name), and so far I like it. It's the only Linux distro I have
come across so far which gives me my desired combination of date and
time formats, perhaps not too surprising given that it started out life
as a German product ;-) I'm not too keen on the time it takes to
rewrite a bunch of configuration files every time you change the setup,
but that's something I recall from using it a decade ago, when you did
that stuff from the command line (after manually editing config files).
I can live with it.

Oh, I nearly forgot. I couldn't get the driver for ODS5 to install here,
so I downloaded the preconfigured Tinycore 3.2 to run in a virtual
machine instead.

Thanks to virtual machines, I no longer have to deal with multiple boot
systems, nor GRUB, nor GRUB 2. Hooray for that!
--
Paul Sture
Phillip Helbig---undress to reply
2011-09-21 19:56:32 UTC
Permalink
In article
Post by Rich Jordan
Not a fun choice. I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started. It won't feel right to not have it but I just don't get to
use it enough, and it is costing.
What is the actual cost per month?

I once worked out that the power cost of running my cluster full time is
about EUR 120 per month. A typical smoker will spend more than that on
cigarettes (at least around here); this is less than the additional cost
of driving, say, a VW Passat rather than a Skoda Fabia. In other words,
it is measurable but not a HUGE cost. To me, it is worth it. Also
consider the price tag you put on your own time.
Robert Doerfler
2011-09-22 08:45:17 UTC
Permalink
Post by Paul Sture
In article
Post by Rich Jordan
Not a fun choice. I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started. It won't feel right to not have it but I just don't get to
use it enough, and it is costing.
What is the actual cost per month?
I once worked out that the power cost of running my cluster full time is
about EUR 120 per month. A typical smoker will spend more than that on
cigarettes (at least around here); this is less than the additional cost
of driving, say, a VW Passat rather than a Skoda Fabia. In other words,
it is measurable but not a HUGE cost. To me, it is worth it. Also
consider the price tag you put on your own time.
I bought a power meter recently and measured some of my boxes at idle:

VAX 4000/vlc ~40W
VAX 4000/60 ~67W
ALPHA-PC 164LX ~90W
PWS433au ~90W
zx2000 ~230W-260W

Some more will follow soon. Running the 4000/vlc 24/7 would cost
about 7 Euros/month, the PWS433au ~15 Euros, the zx2000 ~40 Euros. So
it is not extremely expensive. I guess someone could easily save the
money elsewhere ;-)

Greetings,

Robert
Paul Sture
2011-09-22 12:07:19 UTC
Permalink
On 2011-09-21, Phillip Helbig---undress to reply
Post by Paul Sture
In article
Post by Rich Jordan
Not a fun choice. I've had a VMS system running full time at home
since around 2002 (a VAXstation before that) until the recent work
started. It won't feel right to not have it but I just don't get to
use it enough, and it is costing.
What is the actual cost per month?
I once worked out that the power cost of running my cluster full time is
about EUR 120 per month. A typical smoker will spend more than that on
cigarettes (at least around here); this is less than the additional cost
of driving, say, a VW Passat rather than a Skoda Fabia. In other words,
it is measurable but not a HUGE cost. To me, it is worth it. Also
consider the price tag you put on your own time.
VAX 4000/vlc ~40W
VAX 4000/60 ~67W
ALPHA-PC 164LX ~90W
PWS433au ~90W
zx2000 ~230W-260W
Some more will follow soon. Running the 4000/vlc 24/7 would cost
about 7 Euros/month, the PWS433au ~15 Euros, the zx2000 ~40 Euros. So
it is not extremely expensive. I guess someone could easily save the
money elsewhere ;-)
When I was running a cluster (Vaxstation 3100 and two PWS 600au systems)
plus assorted other pieces of kit my electricity bill was lumped in with
the standard monthly charge for maintenance and heating etc. I got a
surprise at the end of the first year when they read the meter and I had
to pay a further ~350 Euro. As a single person I had previously had a
refund on my electricity bill at the end of the year.
--
Paul Sture
JF Mezei
2011-09-22 17:52:05 UTC
Permalink
For the vintage equipment that VMS runs on, are there differences in
power consumption between idle and CPU running at 100% ?

I know that current x86 servers and computers, and especially laptops,
have firmware (or is it in the OS) that shuts down unused cores and gets
the CPU to slow down to consume much less power, but is any of that
implemented in IA64, Alpha or VAX?
Rich Jordan
2011-09-22 18:20:25 UTC
Permalink
Post by JF Mezei
For the vintage equipment that VMS runs on, are there differences in
power consumption between idle and CPU running at 100% ?
I know that current x86 servers and computers, and especially laptops,
have firmware (or is it in the OS) that shuts down unused cores and gets
the CPU to slow down to consume much less power, but is any of that
implemented in IA64, Alpha or VAX?
Not sure about CPU usage causing measurable power usage differences,
but drives sure do. When my AS600 was idling it used a fair amount
less power than when the multiple 3.5" 7200RPM drives in it or
attached to it were busy. They were always spinning, so it was actual
usage driving up the power requirements.

I don't think I ever tried measuring power draw while idling or
hammering the CPU without storage involvement though.
glen herrmannsfeldt
2011-09-22 18:42:00 UTC
Permalink
Post by JF Mezei
For the vintage equipment that VMS runs on, are there differences in
power consumption between idle and CPU running at 100% ?
Not so easy to answer. CMOS naturally consumes less power when
it is doing less computing, though not quite as much less as shutting
off cores.
Post by JF Mezei
I know that current 8086 servers and computers and especially laptops
have firmware (or is it in the OS) that shuts down unused cores and gets
CPU to slow down to consume much less power, but is any of that
implemented in IA64, Alpha or VAX ?
For VAX, the CPU might not be the biggest power user, and other
parts might not change much. I suppose it will still do memory cycles
even in the idle loop. (Well, it might come from cache, so maybe
not much from memory.)

The number of transistors is going up faster than the power per
transistor goes down. That is for CPU and memory.

-- glen
Bob Koehler
2011-09-23 13:28:21 UTC
Permalink
Post by glen herrmannsfeldt
For VAX, the CPU might not be the biggest power user, and other
parts might not change much. I suppose it will still do memory cycles
even in the idle loop. (Well, it might come from cache, so maybe
not much from memory.)
Typical memory circuits of that era did not constantly draw power,
but they had to be refreshed at a fairly rapid pace. The power
supplies would not know which pages were on the free page list, all
of RAM had to be maintained (and sometimes page faults are satisfied
from the free page list).

But the biggest draw may have been the fixed speed cooling fans and
disk drive motors. Electric motors can draw power rivaled only by a
dead short.
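The refresh point above can be put in rough numbers; a sketch with illustrative (assumed) DRAM parameters, not the specs of any particular VAX memory board:

```python
# Rough DRAM refresh arithmetic; row count and window are assumptions.
rows_per_chip = 256      # rows that must each be refreshed
refresh_window_ms = 4    # every row refreshed within this window

rows_per_second = rows_per_chip * (1000 / refresh_window_ms)
print(int(rows_per_second))  # 64000 row refreshes/second per chip

# Refresh walks every row regardless of what the OS keeps on its free
# page list, which is why idle RAM still draws power.
```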
Rich Jordan
2011-09-23 14:24:48 UTC
Permalink
Post by glen herrmannsfeldt
For VAX, the CPU might not be the biggest power user, and other
parts might not change much.  I suppose it will still do memory cycles
even in the idle loop.  (Well, it might come from cache, so maybe
not much from memory.)
   Typical memory circuits of that era did not constantly draw power,
   but they had to be refreshed at a fairly rapid pace.  The power
   supplies would not know which pages were on the free page list, all
   of RAM had to be maintained (and sometimes page faults are satisfied
   from the free page list).
   But the biggest draw may have been the fixed speed cooling fans and
   disk drive motors.  Electric motors can draw power rivaled only by a
   dead short.
The P/S fans (qty 2) in my VS3100-30 are 12VDC, 0.13A rated
(presumably that is continuous draw). Nice 22-year-old made-in-Japan
ball-bearing fans with 21+ years of continuous use. Not sure how much
that big honking heat-sinked power resistor in there draws.
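Taking that 0.13 A rating at face value (assuming it really is steady-state draw at 12 V), the two fans together only amount to a few watts:

```python
fan_voltage_v = 12.0
fan_current_a = 0.13
fan_count = 2

power_w = fan_voltage_v * fan_current_a * fan_count   # P = V * I per fan
energy_kwh_year = power_w * 24 * 365 / 1000           # continuous duty

print(round(power_w, 2), round(energy_kwh_year, 1))   # ~3.12 W, ~27.3 kWh/year
```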
glen herrmannsfeldt
2011-09-23 17:17:51 UTC
Permalink
Post by Bob Koehler
Post by glen herrmannsfeldt
For VAX, the CPU might not be the biggest power user, and other
parts might not change much. I suppose it will still do memory cycles
even in the idle loop. (Well, it might come from cache, so maybe
not much from memory.)
Typical memory circuits of that era did not constantly draw power,
but they had to be refreshed at a fairly rapid pace. The power
supplies would not know which pages were on the free page list, all
of RAM had to be maintained (and sometimes page faults are satisfied
from the free page list).
Well, ECL runs at constant current, with the power drawn pretty much
independent of the logic operation being done.

But yes, magnetic core should draw power mostly when actually
switching cores, in addition to the power for the driver circuits.
Post by Bob Koehler
But the biggest draw may have been the fixed speed cooling fans and
disk drive motors. Electric motors can draw power rivaled only by a
dead short.
-- glen
Bob Koehler
2011-09-26 13:28:16 UTC
Permalink
Post by glen herrmannsfeldt
But yes, magnetic core should draw power mostly when actually
switching cores, in addition to the power for the driver circuits.
I've never seen a VAX with core.
Johnny Billquist
2011-09-26 15:20:03 UTC
Permalink
Post by Bob Koehler
Post by glen herrmannsfeldt
But yes, magnetic core should draw power mostly when actually
switching cores, in addition to the power for the driver circuits.
I've never seen a VAX with core.
I don't think it ever existed. There was core for PDP-11s, but by 1977,
MOS memories were taking over.

Johnny

John Reagan
2011-09-22 19:59:58 UTC
Permalink
Post by JF Mezei
For the vintage equipment that VMS runs on, are there differences in
power consumption between idle and CPU running at 100% ?
I know that current 8086 servers and computers and especially laptops
have firmware (or is it in the OS) that shuts down unused cores and gets
CPU to slow down to consume much less power, but is any of that
implemented in IA64, Alpha or VAX ?
Yes. Definitely for IA64 and Alpha. For multi-cpu VAXen, probably not in
the OS. Perhaps the console firmware can do something if you disable the
CPU statically prior to boot.