
  • Okay, I'm ready to set up a network with a Ubuntu file server

    The idea is not some fancy backup scheme -- I just don't want Windows in charge of my files. While there are some mechanical failures, I have come to believe that most hard drive problems are caused by Windows, especially when your boot drive no longer boots. In fact, I was thinking of using my old computer, which seems to have some Windows bugs in it, as the file server, but running Ubuntu. I have been getting a few BSODs, which might have been related to dust blocking airflow on my CPU heatsink. (Been there, done that.) At least in the past that doesn't seem to have damaged the CPU.

    I have a wireless router I plan to use with 3 or 4 ethernet jacks to connect to various Windows machines, which will have plenty of local hd space for working on audio and maybe video files. But anything important will be copied to Mr. Ubuntu who will be protecting it from Mr. Gates.

    So I am looking for some kind of Ubuntu for Dummies advice to get this thing rolling.

    Thanks!

    Steve Ahola
    The Blue Guitar
    www.blueguitar.org
    Some recordings:
    https://soundcloud.com/sssteeve/sets...e-blue-guitar/

  • #2
    Hi Steve. I have been out of the loop for a while. Are you still interested in this project? Have you gotten started?

    I think you've got a good idea in setting up a file server that doesn't let Windows be in charge of your files. I have run into numerous problems with Windows' lazy disk writes, so I've been using several Linux-based SMB (Samba) servers to host the data files on my network. Over SMB, they appear as regular Windows shares. That makes Windows file sharing across the network very easy, while keeping Windows out of the back-office part of the network that is actually responsible for the files.
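
    For reference, the Samba side of this is only a few lines of configuration. Here's a minimal sketch of a share definition in /etc/samba/smb.conf -- the share name, path, and username below are placeholders, not anything specific to your setup:

```
[global]
   workgroup = WORKGROUP
   security = user           ; clients log in with a Samba username/password

[music]                      ; appears to Windows as \\server\music
   path = /srv/music         ; placeholder path on the Linux box
   read only = no
   valid users = steve       ; placeholder user, added with: smbpasswd -a steve
```

    After editing the file, restart the Samba service (service smb restart on RedHat-style systems) and the share shows up in Network Neighborhood.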

    With Linux, there are many ways to skin the cat, so you have lots of options. How to build the server will depend on the PC resources you can dedicate to the project, whether you want a command-line interface or a GUI, and so on. There are plenty of good solutions to choose from.

    Personally, I've migrated away from Ubuntu Server because of security issues that I've discussed here previously. (Remember the thread where I complained that Ubuntu Server was phoning home to England every time the root user logged into the system?) By now I've moved away from Ubuntu-based distributions entirely. I prefer RedHat / CentOS, primarily because it's extensively documented, extremely stable, and doesn't have that trojan-like behavior. Red/Cent is not the bleeding-edge desktop distribution that Ubuntu is -- instead, it's the rock-solid, stable server distribution. There is an abundance of great how-to docs at RedHat that you can use with the free CentOS variant. The last time I set up a Samba server I used CentOS, and I had the entire turnkey Samba server up and running from bare metal within an hour. Compare that to a manual, text-based SMB server configuration, which takes forever. IMO the CentOS minimal GUI installation for servers, with the Webmin interface, is the way to go.

    Of course, I have to admit that the reason I logged in today was to follow up with RG on what's happened to Solaris. Although I've got a perfectly good CentOS server dishing up the music files for my LAN, I've been giving serious thought to converting to something that supports ZFS. The problem is that OpenSolaris is now DEAD.

    If you need guidance, I might be able to help you. Just remember -- my advice will be worth every penny that you pay for it.
    "Stand back, I'm holding a calculator." - chinrest

    "I happen to have an original 1955 Stratocaster! The neck and body have been replaced with top quality Warmoth parts, I upgraded the hardware and put in custom, hand wound pickups. It's fabulous. There's nothing like that vintage tone or owning an original." - Chuck H



    • #3
      "GUI lite" would be my preference. Is there some version of CentOS that I should be looking for? It does support USB 2.0 external hard drives- right? I have about a dozen of them that I need it to handle.

      Thanks!

      Steve

      P.S. I will PayPal you a nickel!



      • #4
        Originally posted by Steve A. View Post
        "GUI lite" would be my preference.
        Then you want to perform the server installation, and choose the minimal server GUI option.

        Is there some version of CentOS that I should be looking for?
        Steve, the current release of CentOS is version 5.5. There is a new 6.0 release supposed to be on the way, but it doesn't offer anything you'd need for a file server. Which version you install from isn't that important -- you can always update the system to the most current release, regardless of what you install from. One of the strengths of CentOS is that this kind of upgrade doesn't break things, the way that upgrading some other popular desktop Linux distributions tends to break things that used to work, every time you upgrade. Ubuntu has hosed me that way more times than I care to remember.

        Once you decide on the CentOS release number (I'd just go with 5.5), there's no separate server or desktop installation media that you have to download. You just download the installation media and the installer asks you what kind of system you want to set up. You can do the installation with a DVD, a stack of CDs, or a single CD that installs over the web. I've done it all 3 ways. If you plan to build multiple boxes, the DVD or stack of CD media makes sense. For a one-box server install, the single network install CD makes sense, as it avoids the unnecessary downloading of packages that you won't ever use. In your situation, I'd probably download the 10 MB network install CD, and let it download only the packages that you actually need during your installation. The only decision that you need to make is whether your system architecture is 32-bit or 64-bit, then you have to download the appropriate installation CD image.

        It does support USB 2.0 external hard drives- right?
        Yes. The Linux kernel "supports" USB 2.0 drives. But let's clarify what you mean by "supports."

        The important question to ask is not whether the drives are supported (they are). The important question from a user standpoint is whether the system auto-mounts the drives (recognizes them automatically, aka "Plug and Play") or whether you have to manually issue a command to mount the drives when you plug them in.

        I have been using CentOS 5.5 on a desktop, and it has no problems recognizing USB 2.0 keychain flash drives when I plug them in. It recognizes them in a plug and play fashion just like Windows does. (That's more than I can say for the current version of Fedora (14) where this feature seems to be broken.)

        That said, I've never actually tried plugging USB drives into a server that doesn't have the GUI installed, so I can't speak from experience on whether a basic non-GUI command-line setup would auto-recognize external USB drives. Auto-mounting could be a desktop GUI function -- I'm not sure; I'd have to look that up. But even if the auto-mount function is provided by the desktop environment, it's no big deal to issue a one-line command to tell the system to mount the drive you've just plugged in. When I'm working from the command line I don't rely on auto-mounting; I just mount drives from the keyboard anyway. But then, I'm someone who is comfortable without the GUI.
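
        For what it's worth, the manual version really is just a line or two. A sketch, assuming the kernel names the drive /dev/sdb (the device name and mount point are only examples):

```shell
# see what device name the kernel assigned to the drive you just plugged in
dmesg | tail

# create a mount point (once) and mount the drive's first partition
mkdir -p /mnt/usbdrive
mount /dev/sdb1 /mnt/usbdrive

# when you're done, always unmount before unplugging,
# or you risk losing writes that are still cached in RAM
umount /mnt/usbdrive
```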

        But if you need a GUI, you'll be installing the minimal server GUI and you should have all of the packages installed that are necessary to automatically recognize USB drives. If the installation that you perform should happen to not automatically recognize them, then you'd just have to install a couple of more packages to make it work. In the worst case scenario, you'd have to issue a single command to prompt the system to mount the drive. I think that this is nothing to worry about.

        I have about a dozen of them that I need it to handle.
        That's one potential source of problems. A constantly changing configuration, where drives are continuously inserted and removed, works great on a desktop (where you're sitting in front of the monitor and keyboard), but it tends to be less desirable for a server (which is typically designed to function without a monitor, keyboard, or direct user input).

        Servers are typically built so that they don't require user interaction to do their job. With a server, you just load up all of the information, stick the machine in a remote location where it's out of the way, and it automatically performs its jobs on the network. If you're constantly plugging and unplugging external drives, you're not really building a server -- you're building another box with a desktop GUI that happens to be running Linux. IMO that's not the best design. Another desktop box perpetuates the current paradigm, where you're required to constantly interact with the GUI, instead of letting the server transparently take care of the mundane tasks for you, without any action on your part.

        The one thing that could cause potential confusion about your proposed installation is the number of drives that you'll be dealing with, and whether or not you're going to leave them permanently connected (good) or if you're planning on constantly swapping them (not as good). In the interest of stability, it's best to leave the drives attached once everything is properly configured. That decreases the risk of having something not work when you start changing the system configuration on the fly.
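
        One related point in Linux's favor: you can mount each filesystem by its UUID (a unique ID stored in the filesystem itself) rather than by detection order, which avoids any Windows-style drive-letter reshuffling when drives come up in a different order. A sketch, with a made-up UUID and mount point:

```shell
# print the UUID of a drive's filesystem
blkid /dev/sdb1

# then pin that filesystem to a fixed mount point with an /etc/fstab line:
# UUID=6f3a1c2d-9b7e-4b2e-9a7e-2f4d8c1e5a33  /srv/archive01  ext3  defaults  0 2
```

        With an entry like that for each drive, every drive always lands on the same mount point, no matter what order they're detected in.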

        How much capacity do those 12 drives have? If I were doing this, I'd definitely buy a few large drives (2 TB or so), permanently install them in the server, move all of the files over, and then get rid of the array of external drives. IMO keeping the data on a dozen external USB drives unnecessarily complicates the setup, and the low reliability of external USB drives is a problem -- using them for archival is just asking for trouble. External USB drives are typically not all that robust, and using a dozen of them creates multiple points of failure in the system.

        BTW, let me know if you want more or less information in the posts. I don't want to overload you.



        • #5
          A couple of extra thoughts --

          * If you're not at all familiar with Linux, then diving straight into a server build might be a real trial by fire -- biting off more than you can chew. Starting off with a Linux desktop might be a better idea. (A less steep learning curve.)

          * If you're having drive problems, you need to consider both the reliability of the drive itself and the filesystem you're running on it. Some USB drives are very low-reliability drives, and your choice of filesystem can make or break a drive's reliability and performance. You definitely want to use a journaling filesystem. It's possible your problems are caused by the drives themselves, or by the filesystems on them, in which case migrating from Windows to Linux won't solve anything.

          * Another option for you might be moving your files to a Network Attached Storage appliance. These are essentially pre-packaged/turnkey Linux or BSD server boxes.

          * If you want to do a DIY build of a NAS box using an old PC, check out FreeNAS. It uses a web GUI for setup and is designed to be easy to use.
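
          On the journaling point, creating a journaling filesystem on Linux is a one-liner. A sketch only -- /dev/sdb1 is a placeholder, and mkfs erases whatever is on the partition:

```shell
# ext3 is ext2 plus a journal, and the default on CentOS 5
mkfs.ext3 -L archive01 /dev/sdb1     # -L sets a human-readable volume label

# an existing ext2 filesystem can even gain a journal in place:
tune2fs -j /dev/sdb1
```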



          • #6
            The drives I have are almost all 1 TB, and are mounted in SanDigital enclosures (which are like small computers). I normally leave them all connected and want them to keep the same drive letter. (With Windows, if I have my iPhone charging when I reboot, it will usurp one of the drive letters, screwing everything up.)

            Each workstation will have its own local drives, and if I plug in a USB drive temporarily, it will be at the workstations, not the server.

            Thanks

            Steve



            • #7
              if you have 1 TB drives then you definitely want to keep them. but bear in mind that even with high density 1 TB drives, there are different grades of drives. without knowing the exact type of drives you've got, it's not possible to determine how robust they are. to be conservative, i would treat the drives as consumer grade archival storage devices, and avoid trying to use advanced enterprise-class features like RAID on them. many people have quickly destroyed inexpensive consumer grade drives by trying to use them in enterprise class applications. RAID can be really hard on "green" drives; it's best to avoid it. hobbyists misusing drives this way has given the "green" drives an undeserved reputation for being unreliable.

              using the drives as plain old archival storage will help prolong their lifespan, and keeping them permanently connected to your server will make the system mapping far more consistent. if you've got a dozen USB drives then you'll need a LOT of USB ports. i'm assuming that you already have enough of them, or that you'll be buying interface cards to add the extra ports needed to keep all of the drives attached at the same time.

              if you want to keep the array of disks formatted as individual drives, and configured as individual shares in windows, then you're going to have to deal with knowing where your data resides, and with choosing the proper drive letter in windows. having a dozen drive letters seems pretty cumbersome to me.

              one thing that linux offers, that you may not have considered, is Logical Volume Management. with LVM it's possible to refer to your entire array of USB drives as one large logical volume. that way, the dozen 1 TB drives appear as a single 12 TB drive, and you can use one drive letter to access all of the disks. the downside is that getting this level of simplicity on the windows side requires quite a bit of configuration effort on the linux side. i don't know how hard of a problem you're willing to tackle. the more complexity you're willing to take on in the setup, the simpler you can make the user side of the interface.
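
              to give you a feel for it, here's roughly what the LVM side looks like. a sketch only -- the device names and volume names are placeholders, and pvcreate wipes the drives it's pointed at:

```shell
# mark each USB drive as an LVM physical volume (destroys existing data!)
pvcreate /dev/sdb /dev/sdc /dev/sdd            # ...and so on for all 12

# pool the physical volumes into one volume group
vgcreate archive /dev/sdb /dev/sdc /dev/sdd

# carve one big logical volume out of the whole pool
lvcreate -l 100%FREE -n music archive

# put a filesystem on it; Samba can then share /srv/music as one big drive
mkfs.ext3 /dev/archive/music
mkdir -p /srv/music
mount /dev/archive/music /srv/music
```

              one caveat worth knowing up front: with a plain linear volume like this, losing any one of the twelve drives can take the whole 12 TB volume down with it. that's another reason to treat these drives as archival copies, not the only copies.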

              do you want the setup to be easy, or do you want to jump through some flaming hoops in a more difficult setup to gain a cleaner look at the user side of the interface? your first decision has to be related to how difficult a first time experience you're willing to tackle.

              maybe your best bet is to set up a conventional linux box first, and learn your way around it, before tackling a complicated LVM server setup.



              • #8
                Steve, I'm moving my reply to your comments in the other thread over here. Just trying not to hijack that thread.

                Originally posted by Steve A. View Post
                Hmmmm... I've been buying the Green drives ever since they came out. I found that they dropped the ambient temperature in my computer room about 5 to 10 degrees in the summertime! I do use them mainly for archival purposes, but I will make a point of not using them for projects I am actively working on. No problem -- I can just copy the source files over to a regular HD and edit them there. Although most of the editing happens in memory -- actual and virtual. I do need to make sure that I use normal drives for the temporary files that Adobe Audition uses (the defaults are the drives with the most free space).
                I had thought that the Green drives just ran the disk at a slower speed to save energy- I didn't know about the head parking thing. (I thought that heads were parked only when you shut off the drive.)

                Steve

                The green drives use several approaches to decrease energy consumption, such as aggressive head parking and the other thing that you mentioned, varying the platter rotation speed on demand.

                Steve, you should check the firmware revision on your Green drives and make sure that you have the latest suitable update. With the Seagate LP (low power) drives there are firmware updates, but I think that the WDC Green drives may not have firmware updates available. WDC has alienated a lot of their user base because of that.

                The lack of firmware updates isn't really a problem as long as your park count isn't being driven to astronomical levels through misapplication of the drives. Watch out for them to start clicking on you. On the drive forums they talk about the "click of death" that occurs just before a drive fails. I've heard people say that if you hear clicking, you immediately need to back up the drives before data loss occurs and send them in for warranty replacement.

                One thing that you might want to look into is the "wdidle3" utility. It's a DOS command line utility that allows you to change the head park interval on WD green drives. By changing the 8-second interval to a more realistic level you can achieve a reasonable amount of power savings through head parking without the unreasonable wear on the drives that goes with too frequent head parking.
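
                If you want to see how bad the parking actually is, smartmontools will show you the running total. The number to watch is SMART attribute 193, Load_Cycle_Count (/dev/sdb below is a placeholder):

```shell
# dump the drive's SMART attributes and pull out the head-park counter
smartctl -A /dev/sdb | grep -i load_cycle

# if the raw value is climbing by hundreds per day, the heads are parking
# far too aggressively for a drive that's in active use
```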

                As we discussed in your server thread, the Green drives are OK for archival/storage purposes, where they are only occasionally written to or read from. From a robustness standpoint, they are nowhere near as durable as the Black and Blue drives. That's not to say that the Green drives are bad. I use them in a backup server that powers on once daily, copies data, and shuts down. Even with the default head parking interval, I'm able to keep the head parking activity and the drive wear low. Using the Green drives in any sort of RAID or ZFS setup would probably be the kiss of death for them. You really need the expensive, enterprise-class drives for that sort of thing.

                It's interesting that you mention that your ambient temps have dropped several degrees as a result of using the Green drives. If that's the case, the energy savings in the summertime may justify the shorter lifespan of the drives. One thing that most people don't consider is controlling each individual drive's spin status. With Linux you can set the spin-down interval for each drive, so you have a lot of ability to fine-tune these parameters and keep energy costs low.
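
                The per-drive spin-down I mentioned is set with hdparm. A sketch, with /dev/sdb as a placeholder -- note hdparm's slightly odd encoding for the -S value:

```shell
# check the drive's current power state without waking it up
hdparm -C /dev/sdb

# set the standby (spin-down) timeout; values 1-240 mean multiples of
# 5 seconds, and 241-251 mean multiples of 30 minutes, so 242 = 1 hour
hdparm -S 242 /dev/sdb

# force the drive into standby right now
hdparm -y /dev/sdb
```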
                Last edited by bob p; 02-01-2011, 08:33 PM.



                • #9
                  Steve, are you still interested in this project? Based on your other posts I can see that you're busy shuffling files around. Just wondering if this project is still on your to-do list.



                  • #10
                    Originally posted by bob p View Post
                    Steve, are you still interested in this project? Based on your other posts I can see that you're busy shuffling files around. Just wondering if this project is still on your to-do list.
                    Yes, I am -- that's why I am moving the files around. I seem to move at glacial speeds on most of my projects... <g>

                    Steve
