
Thread: Catalyst in snow leopard


  1. #1
    Quote Originally Posted by SourceChild View Post
    I am running 10.6 on my hot backup machines. They are all outperforming the 10.5.8 machines.

    I have been able to run 3 layers of 1920x1080 at 29.97 with audio, with minimal frame drop.

    8-core 2.66GHz processors
    Mtron 7500 SSDs
    8800 Graphics
    Phoenix 1 lane 2 HD input

    I can run 2 layers of the same and 2 layers of 720x480@29.97 at the same time with no frame loss.
    How many more standard-definition files (without audio) can you get?

  2. #2
    I'm on tour and testing during downtime. On my last leg I didn't get as much testing time as I wanted.

    I also have a slow SSD which I need to zero and defrag content on. With 100GB of show content, that is going to take a while. Once I do that I can run tests.

    I'll report more on results after the weekend.

    Oh, and Apple's brilliant idea of changing the default gamma (from 1.8 to 2.2) was a bit annoying to correct.
    SourceChild
    TODD SCRUTCHFIELD

    ...if it ain't broke...
    gimme 5 and then don't act surprised

  3. #3
    Quote Originally Posted by SourceChild View Post
    I'm on tour and testing during downtime. On my last leg I didn't get as much testing time as I wanted.

    I also have a slow SSD which I need to zero and defrag content on. With 100GB of show content, that is going to take a while. Once I do that I can run tests.

    I'll report more on results after the weekend.

    Oh, and Apple's brilliant idea of changing the default gamma (from 1.8 to 2.2) was a bit annoying to correct.
    Defrag an SSD?

    Isn't the access time for every bit of data the same, irrespective of read position?

  4. #4
    Do not defrag an SSD. It will only wear down the flash memory!

    http://www.tomshardware.co.uk/forum/...6283_14_0.html

    It's not worth it...

  5. #5
    Quote Originally Posted by Semillion View Post
    Do not defrag an SSD. It will only wear down the flash memory!

    http://www.tomshardware.co.uk/forum/...6283_14_0.html

    It's not worth it...

    I would agree with this,

    but maybe Todd has heard something else?

    It says here:
    OCZ also warns on their info page that “Solid State Drives DO NOT require defragmentation. It may decrease the lifespan of the drive.”

    This is nothing to actually be overly concerned about as the theoretical re-write limits for each sector in a Solid State Drive are going to outlive the use of the drive. It is just that defragmenting (although not necessary) creates an excessive amount of write cycles on any drive. Solid State Drives are designed so that data is written evenly to all sectors – this is what the industry refers to as “Wear Leveling.” So feel free to fill your drive full of random data just so you can see how fast it defrags for kicks; you will not harm anything, but do not do it on a regular basis unless you want to lower the MTBF of the drive down to mechanical HDD standards.
    http://www.tomshardware.co.uk/OCZ-AP...ews-30096.html

  6. #6
    Well, to get back on topic, here are my first test results on Snow Leopard:
    Late 2008 8-core Mac Pro
    8800 GT
    OCZ SSD
    Cat 167

    Using 720p files, all around 150MB in size, Snow Leopard allows 1-2 more layers to be run, for a total of 7.

    I would say that so far it is a verifiable improvement. I will test LFG cards next week if nobody beats me to it.

  7. #7
    Thanks, Anthony
    Nev Bull
    Pixels Plus Limited
    Digital Video Services

    Catalyst Software - Upgrades - Server Hardware - Accessories - Training - Support

    t: +44 (0)1494 858151
    skype: nevillebull
    e: nev@pixelsplus.co.uk
    w: www.pixelsplus.co.uk

  8. #8
    Quote Originally Posted by ajmaudio View Post
    Well, to get back on topic, here are my first test results on Snow Leopard:
    Late 2008 8-core Mac Pro
    8800 GT
    OCZ SSD
    Cat 167

    Using 720p files, all around 150MB in size, Snow Leopard allows 1-2 more layers to be run, for a total of 7.

    I would say that so far it is a verifiable improvement. I will test LFG cards next week if nobody beats me to it.
    And standard definition?

  9. #9
    In brief, I will agree: defragmenting a content SSD is not a good idea. However, here is a better explanation:

    Quote Originally Posted by SourceChild View Post
    "a slow SSD which I need to zero"
    I run Disk Utility to zero the disk.

    Quote Originally Posted by SourceChild View Post
    "and defrag content"
    I defrag the content on a separate HD and copy it back to the SSD.


    Let me clarify how I go about this. First, I have all the content copied and verified to a backup disk. On the backup disk I run a defragmenter and a series of tools so that the content is consolidated and orderly.

    Once the content is backed up, I zero the data on the SSD using one pass. Then I copy the content back to the SSD.

    With this method, the SSD only sees two write cycles.
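
    A minimal sketch of that backup / zero / restore sequence as a script, assuming macOS's built-in diskutil and rsync tools; the device identifier and paths are hypothetical placeholders, not the ones from my rig:

    Code:
    #!/usr/bin/env python
    # Sketch of the two-write-pass refresh described above. Assumes macOS with
    # diskutil and rsync available; "disk2" and the paths are hypothetical.
    import subprocess

    SSD_DEVICE = "/dev/disk2"          # hypothetical SSD device identifier
    SSD_VOLUME = "/Volumes/ShowSSD"    # mount point of the content volume
    BACKUP = "/Volumes/BackupHD/show"  # content already defragged on the HD

    def run(cmd):
        print("> " + " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Dry-run checksum compare: any file on the SSD that is missing or
    #    different on the backup gets listed. Nothing is written here.
    run(["rsync", "-rcn", "--delete", SSD_VOLUME + "/", BACKUP + "/"])

    # 2. Single-pass zero erase of the SSD, then put a fresh Journaled HFS+
    #    volume back on it. This is write pass number one.
    run(["diskutil", "secureErase", "0", SSD_DEVICE])
    run(["diskutil", "eraseDisk", "JHFS+", "ShowSSD", SSD_DEVICE])

    # 3. Copy the consolidated content back (archive mode; add whatever
    #    xattr/resource-fork flags your rsync version supports).
    #    Write pass number two.
    run(["rsync", "-a", BACKUP + "/", SSD_VOLUME + "/"])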

    The reason defragging an SSD is a bad idea is that the defragging cycle can literally write hundreds of times to a specific zone or specific sectors on the disk during the cleaning cycle.

    Another thing to consider is how the SSD is used in Catalyst. Ideally, we are not constantly reading and writing the way other applications do. We (hopefully) write once or very few times and then simply read back repeatedly.

    Now for a moment consider my specific application. I have a show where the total combined content creation added up to about 800GB of show files. Of course we threw out about 80% of that but we didn't know what was staying or going until the show was on the road.

    Now, if I had been able to, I would have justified an Xserve RAID during preproduction, which would have run all the servers. Since I didn't get the budget I needed for content, I had to settle for writing and deleting to the SSDs repeatedly.

    I mention the Xserve RAID here for a reason. The show uses multiple servers, and a single fast, reliable repository for content means copying only one time. As it turns out, the workflow I was subject to wasted a tremendous amount of time waiting for files to copy.

    Anyone who's copied a large volume of data to an SSD knows exactly what I'm talking about.

    Had I used an Xserve RAID, all the content creators on site could have been connected to a single repository over Fibre Channel, with that repository being the same one I use to run rehearsals. Instead, using the Gigabit network on copper meant that each time a file was updated it had to be copied successively from machine to machine, with my content machine acting as an intermediary.
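
    As a rough illustration of the time cost (assuming roughly 100MB/s of effective throughput over Gigabit copper; the server count is a placeholder, not my actual rig):

    Code:
    # Back-of-envelope copy-time arithmetic for pushing show content around
    # over Gigabit Ethernet. Throughput and server count are assumptions
    # for scale only.
    content_gb = 100        # roughly one show's worth of content
    throughput_mb_s = 100   # ~100 MB/s effective over Gigabit copper
    servers = 4             # hypothetical number of media servers to update

    one_copy_min = content_gb * 1000 / throughput_mb_s / 60.0  # ~17 minutes
    # Going through an intermediary means one copy in, then one copy out to
    # each server, so every content revision costs (servers + 1) passes.
    total_min = one_copy_min * (servers + 1)

    print("One full copy: ~%.0f minutes" % one_copy_min)
    print("Updating %d servers via an intermediary: ~%.0f minutes" % (servers, total_min))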

    Of course some of this is beside the point but it paints a picture of how the content can get so messy and require cleanup from disk utilities.

    I'm using 128GB drives in all my machines, so I really only have about a 100GB safe limit before the disks get too full (following the 20% open-space rule).

    Having to change content, revise files, and add new ones requires a lot of copying and a lot of deleting. This means dozens of rewrites to and from each SSD.

    I have theoretically moved more than 2TB of data across this disk for this show alone, so the machines are due for a cleanup.

    By zeroing the disk and rewriting the content to it, I might shorten the life, yes, but not really in comparison to using the disk as a constant read/write/rewrite device. This show has two years in front of it, so it's not like it's something I'll be doing again soon. Zeroing one time, so that all the disks do is reads for the next 120 shows, is not a bad trade-off.

    There are several other articles which talk about SSDs, specifically that on MLC disks adjacent memory cells influence one another, which causes the reader to misinterpret data on the first pass if the sectors have been rewritten repeatedly. By zeroing a disk, newly written data will not be as easily compromised by adjacent cell degradation.

    For more info, search for SSD pros and cons, keeping in mind that SLC drives have a greater lifespan than MLC drives.
    SourceChild
    TODD SCRUTCHFIELD

    ...if it ain't broke...
    gimme 5 and then don't act surprised

  10. #10
    Write-cycle wear limits are between 100,000 and millions of writes per cell, and then the discs have spare sections which they shift to.

    That is well outside any short-term application usage...


    Maybe if you read and write thousands of system files or use virtual memory, but not playing back movies or organising files...
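
    As a rough back-of-envelope using the figures above (a 128GB drive, the low end of 100,000 writes per cell, and the ~2TB written earlier in this thread); this assumes ideal wear leveling and ignores write amplification, so it is for scale only:

    Code:
    # Endurance estimate from the numbers quoted in this thread. Assumes
    # perfect wear leveling and no write amplification: an upper bound for
    # scale, not a spec for any particular drive.
    capacity_gb = 128         # drive size mentioned above
    cycles_per_cell = 100000  # low end of the write-cycle range quoted
    written_tb = 2            # rough total written for the show so far

    total_writable_tb = capacity_gb * cycles_per_cell / 1000.0  # ~12,800 TB
    fraction_used = written_tb / total_writable_tb

    print("Writable before wear-out: ~%.0f TB" % total_writable_tb)
    print("Used by ~2 TB of writes: %.4f%%" % (fraction_used * 100))
    # -> roughly 12,800 TB of endurance; 2 TB is about 0.016% of it.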

    SATA hard discs are at a comparable level of failure, and they fail much more catastrophically.

    SSDs are totally different beasts from normal discs.
    Strategies for their use are completely different: there is no 20% rule, nor any other SATA hard disc rule, that applies.

    ---

    Xserve RAIDs were a total pain in the ass, and so is the Xsan software. Administering them was a nightmare, and they never worked well enough to come anywhere near justifying the cost. They needed an extra machine to administer them, and you needed an IT person to set them up...

    Performance across multiple machines was not good enough, and overall throughput of the entire device was only about 150MB/s, roughly the bandwidth of 1 or 2 internal SSDs.

    ---

    The time it takes to write the same content to multiple machines is not an SSD issue. That's a file-copying problem; it doesn't get faster with SATA discs.


    Quote Originally Posted by SourceChild View Post
    In brief, I will agree: defragmenting a content SSD is not a good idea. However, here is a better explanation: [...]

