EVGA

VMware && Linux bigadv folding

Showing page 32 of 32
Author
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/02/16 20:54:31 (permalink)
That is good news. Glad to hear the image works with ESXi! I've been thinking of adding PVSCSI support since ESXi supports this adapter.
 
Edit: Now that I think about it, PVSCSI is probably already built into the native image. I'll have to double check that.
post edited by linuxrouter - 2012/02/16 21:02:12

CaseLabs M-S8 - ASRock X99 Pro - Intel 5960x 4.2 GHz - XSPC CPU WC - EVGA 980 Ti Hybrid SLI - Samsung 950 512GB - EVGA 1600w Titanium
Affiliate Code: OZJ-0TQ-41NJ
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/03/25 19:12:33 (permalink)
On the bonus calculator, I went through and marked those projects that are no longer on psummary as inactive in the database. There were actually 287 older projects no longer showing on psummary. This will help keep the main project list on the calculator a bit cleaner. There is also a link towards the bottom for viewing active/inactive projects, which will load all projects including the inactive ones. This is mainly for historical purposes or for the case where a project is still active but not showing up on psummary.
post edited by linuxrouter - 2012/03/25 19:13:57
werty316
iCX Member
  • Total Posts : 359
  • Reward points : 0
  • Joined: 2005/04/25 10:33:00
  • Status: offline
  • Ribbons : 0
Re:VMware 3.0 && bigadv folding 2012/04/01 16:15:34 (permalink)
I've been folding strictly in Windows ever since I started to fold, but I wanted to give folding on Linux a try and this guide is freaking sweet! THX!
post edited by werty316 - 2012/04/01 16:26:21

 
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/04/01 18:12:48 (permalink)
Glad you like it. Thanks. :)
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/26 06:01:31 (permalink)
I'm using the VirtualBox image, booting i7 with AVX, but the configuration won't let me select/save the Sandy Bridge kernel option. What am I doing wrong? Or should I just use the i7 kernel option?

 
 
arvidab
New Member
  • Total Posts : 5
  • Reward points : 0
  • Joined: 2012/05/13 13:18:54
  • Status: offline
  • Ribbons : 0
Re:VMware 3.0 && bigadv folding 2012/05/26 09:08:53 (permalink)
First off, I love this build, thanks LR! Have used it both natively and in a VM, recommending and helping friends to get this running properly.

Though I, and a few more people, have a problem running this natively on an SB. Running it inside a VM (I've only used VMware), it gets the expected results compared to the native Win client and a native Linux install of, say, Ubuntu.

But running this natively, performance takes a real dive. PPD is almost half on the few SMP units I tried with my 2500K; TPF went up from 3:13 to ~4:50 on that particular unit, IIRC. I have tried all the kernel options, but none seems to solve the issue.

Other people have come to similar conclusions.

Is there anyone here who has similar experience running this natively on an SB CPU? Or a suggestion I can try? I'd be very grateful.

@iDreadnaught: The -avx kernel is what you would want for a 2600K. Does it not stick if you choose in the web config?
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/26 11:22:04 (permalink)
Hmmmm, just rebooted and it did load the -avx kernel automatically, so I guess it's all good. The configuration just didn't keep the "Sandy Bridge" option selected as it does with the i7 option, but looks like it functions as it's supposed to.
 
So far the PPD is more than native Windows. I'm surprised that a VBox Nix guest under a Win7 host performs as well as it does, wish I'd read this thread earlier. Haven't gone through all 32 pages of this thread yet, but hoping I can optimize it further.
 
Sorry, haven't run the 2600K on native nix, but that does seem strange to me.

 
 
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/05/26 12:18:57 (permalink)
iDreadnaught

Hmmmm, just rebooted and it did load the -avx kernel automatically, so I guess it's all good. The configuration just didn't keep the "Sandy Bridge" option selected as it does with the i7 option, but looks like it functions as it's supposed to.

So far the PPD is more than native Windows. I'm surprised that a VBox Nix guest under a Win7 host performs as well as it does, wish I'd read this thread earlier. Haven't gone through all 32 pages of this thread yet, but hoping I can optimize it further.

Sorry, haven't run the 2600K on native nix, but that does seem strange to me.

 
You are right, this is a bug where the option is not auto-selected after being previously selected. Thanks for letting me know. I will get this fixed.
 
I think the reason for the performance difference is that at this time the Linux core application is more optimized than the Windows core application. In addition, the more recent versions of Virtualbox have near native performance. The performance difference of Linux in Virtualbox compared to running Linux as a native OS should be in the range of 2-5%.
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/26 17:39:51 (permalink)
Just finished a P6099 and it seems it doesn't want to upload. Any ideas? Or problems on Stanford's end?
 
Could it be related to the VBox network configuration? But it was able to d/l the unit in the first place, and it pings google fine, so I don't know.
 
 [00:33:31] Completed 500000 out of 500000 steps  (100%)  
[00:33:32] DynamicWrapper: Finished Work Unit: sleep=10000
[00:33:42]
[00:33:42] Finished Work Unit:
[00:33:42] - Reading up to 12102336 from "work/wudata_01.trr": Read 12102336
[00:33:42] trr file hash check passed.
[00:33:42] edr file hash check passed.
[00:33:42] logfile size: 65039
[00:33:42] Leaving Run
[00:33:47] - Writing 12201051 bytes of core data to disk...
[00:33:48] Done: 12200539 -> 11296328 (compressed to 92.5 percent)
[00:33:48] ... Done.
[00:33:48] - Shutting down core
[00:33:48]
[00:33:48] Folding@home Core Shutdown: FINISHED_UNIT
[00:33:48] CoreStatus = 64 (100)
[00:33:48] Sending work to server
[00:33:48] Project: 6099 (Run 4, Clone 22, Gen 173)


[00:33:48] + Attempting to send results [May 27 00:33:48 UTC]
[00:33:48] - Couldn't send HTTP request to server
[00:33:48] + Could not connect to Work Server (results)
[00:33:48] (128.143.231.202:8080)
[00:33:48] + Retrying using alternative port
[00:33:48] - Couldn't send HTTP request to server
[00:33:48] + Could not connect to Work Server (results)
[00:33:48] (128.143.231.202:80)
[00:33:48] - Error: Could not transmit unit 01 (completed May 27) to work server.
[00:33:48] Keeping unit 01 in queue.
[00:33:48] Project: 6099 (Run 4, Clone 22, Gen 173)


[00:33:48] + Attempting to send results [May 27 00:33:48 UTC]
[00:33:48] - Couldn't send HTTP request to server
[00:33:48] + Could not connect to Work Server (results)
[00:33:48] (128.143.231.202:8080)
[00:33:48] + Retrying using alternative port
[00:33:48] - Couldn't send HTTP request to server
[00:33:48] + Could not connect to Work Server (results)
[00:33:48] (128.143.231.202:80)
[00:33:48] - Error: Could not transmit unit 01 (completed May 27) to work server.


[00:33:48] + Attempting to send results [May 27 00:33:48 UTC]
[00:33:48] - Couldn't send HTTP request to server
[00:33:48] + Could not connect to Work Server (results)
[00:33:48] (128.143.199.97:8080)
[00:33:48] + Retrying using alternative port
[00:33:48] - Couldn't send HTTP request to server
[00:33:48] + Could not connect to Work Server (results)
[00:33:48] (128.143.199.97:80)
[00:33:48] Could not transmit unit 01 to Collection server; keeping in queue.
[... identical retry attempts to work server 128.143.231.202 and collection server 128.143.199.97 repeated every 30 seconds from 00:34:18 through 00:36:19 ...]
[00:36:49] Project: 6099 (Run 4, Clone 22, Gen 173)
[00:36:49] - Error: Could not get length of results file work/wuresults_01.dat
[00:36:49] - Error: Could not read unit 01 file. Removing from queue.
[00:37:19] + -oneunit flag given and have now finished a unit. Exiting.- Preparing to get new work unit...
[00:37:19] Cleaning up work directory

Folding@Home Client Shutdown.

post edited by iDreadnaught - 2012/05/26 17:50:19

 
 
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/05/26 17:49:32 (permalink)
Possibly. For some reason, the folding client does not have network connectivity to transmit the work unit to the server over port 8080 or 80. It could be a server issue, or a firewall-related issue somewhere on the network that is preventing access to those ports. Server issues are not too uncommon; I have seen them quite a few times before.
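If you want to rule out the network side from inside the VM, a quick probe of those ports is possible with plain bash (a sketch assuming bash's /dev/tcp support and coreutils `timeout`; the IPs are the servers from the log above):

```shell
#!/bin/bash
# check_port: probe a TCP port with a 3-second timeout, print "open"/"closed".
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Against the servers from the log (uncomment to try):
# check_port 128.143.231.202 8080
# check_port 128.143.231.202 80
# check_port 128.143.199.97 8080

# Local sanity check: port 1 is almost never listening, so this
# should print "closed".
check_port 127.0.0.1 1
```

If these report closed while pings and browsing work, something between the VM and those specific ports (NAT rules or a software firewall) is the likely culprit.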
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/26 18:08:20 (permalink)
I'll try it again in a few, luckily I made a backup at 98%

 
 
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/26 23:31:25 (permalink)
Well, unfortunately, I just finished a different WU with the same results as the first one; it won't send the results back. This time it's P7142.
 
 [06:01:01] Completed 500000 out of 500000 steps  (100%) 
[06:01:02] DynamicWrapper: Finished Work Unit: sleep=10000
[06:01:12]
[06:01:12] Finished Work Unit:
[06:01:12] - Reading up to 3709584 from "work/wudata_02.trr": Read 3709584
[06:01:12] trr file hash check passed.
[06:01:12] edr file hash check passed.
[06:01:12] logfile size: 62556
[06:01:12] Leaving Run
[06:01:14] - Writing 3808100 bytes of core data to disk...
[06:01:15] Done: 3807588 -> 3524930 (compressed to 92.5 percent)
[06:01:15] ... Done.
[06:01:15] - Shutting down core
[06:01:15]
[06:01:15] Folding@home Core Shutdown: FINISHED_UNIT
[06:01:15] CoreStatus = 64 (100)
[06:01:15] Sending work to server
[06:01:15] Project: 7142 (Run 0, Clone 71, Gen 327)


[06:01:15] + Attempting to send results [May 27 06:01:15 UTC]
[06:01:15] - Couldn't send HTTP request to server
[06:01:15] + Could not connect to Work Server (results)
[06:01:15] (128.143.199.96:8080)
[06:01:15] + Retrying using alternative port
[06:01:15] - Couldn't send HTTP request to server
[06:01:15] + Could not connect to Work Server (results)
[06:01:15] (128.143.199.96:80)
[06:01:15] - Error: Could not transmit unit 02 (completed May 27) to work server.
[06:01:15] Keeping unit 02 in queue.
[06:01:15] Project: 7142 (Run 0, Clone 71, Gen 327)


[06:01:15] + Attempting to send results [May 27 06:01:15 UTC]
[06:01:15] - Couldn't send HTTP request to server
[06:01:15] + Could not connect to Work Server (results)
[06:01:15] (128.143.199.96:8080)
[06:01:15] + Retrying using alternative port
[06:01:15] - Couldn't send HTTP request to server
[06:01:15] + Could not connect to Work Server (results)
[06:01:15] (128.143.199.96:80)
[06:01:15] - Error: Could not transmit unit 02 (completed May 27) to work server.


[06:01:15] + Attempting to send results [May 27 06:01:15 UTC]
[06:01:15] - Couldn't send HTTP request to server
[06:01:15] + Could not connect to Work Server (results)
[06:01:15] (130.237.165.141:8080)
[06:01:15] + Retrying using alternative port
[06:01:15] - Couldn't send HTTP request to server
[06:01:15] + Could not connect to Work Server (results)
[06:01:15] (130.237.165.141:80)
[06:01:15] Could not transmit unit 02 to Collection server; keeping in queue.
[... identical retry attempts to work server 128.143.199.96 and collection server 130.237.165.141 repeated every 30 seconds from 06:01:45 through 06:02:45 ...]
[06:03:15] Project: 7142 (Run 0, Clone 71, Gen 327)
[06:03:15] - Error: Could not get length of results file work/wuresults_02.dat
[06:03:15] - Error: Could not read unit 02 file. Removing from queue.
[06:03:45] + -oneunit flag given and have now finished a unit. Exiting.- Preparing to get new work unit...
[06:03:45] Cleaning up work directory

Folding@Home Client Shutdown

 
-send all does this:
 Launch directory: /usr/local/fah 
Executable: ./fah6
Arguments: -send all

[06:19:20] - Ask before connecting: No
[06:19:20] - Proxy: localhost:8080
[06:19:20] - User name: iDreadnaught (Team 111065)
[06:19:20] - User ID: 788E6E6F76BE08C2
[06:19:20] - Machine ID: 1
[06:19:20]
[06:19:21] Loaded queue successfully.
[06:19:21] Attempting to return result(s) to server...

Folding@Home Client Shutdown.

 
But then -queueinfo still shows
 Launch directory: /usr/local/fah 
Executable: ./fah6
Arguments: -queueinfo

[06:20:52] - Ask before connecting: No
[06:20:52] - Proxy: localhost:8080
[06:20:52] - User name: iDreadnaught (Team 111065)
[06:20:52] - User ID: 788E6E6F76BE08C2
[06:20:52] - Machine ID: 1
[06:20:52]
[06:20:52] Loaded queue successfully.
[06:20:52] Printing Queue Information
Current Queue:
Slot 03 Empty/Deleted

Slot 04 Empty/Deleted

Slot 05 Empty/Deleted

Slot 06 Empty/Deleted

Slot 07 Empty/Deleted

Slot 08 Empty/Deleted

Slot 09 Empty/Deleted

Slot 00 Empty/Deleted

Slot 01 Empty/Deleted
Project: 6099 (Run 4, Clone 22, Gen 173), Core: a3
Work server: 128.143.231.202:8080
Collection server: 128.143.199.97
Download date: May 25 22:13:34
Finished date: May 27 01:10:55

Slot 02 *Empty/Deleted
Project: 7142 (Run 0, Clone 71, Gen 327), Core: a3
Work server: 128.143.199.96:8080
Collection server: 130.237.165.141
Download date: May 27 01:11:00
Finished date: May 27 06:01:15

 
I'm lost; I don't really want to have it running when it can't send results back for some reason. I don't think it's related to my network or firewall, because my v7 client on the same host is able to send my GPU WUs back. Any help is appreciated; I'll browse around to see if I can find some more info in the meantime.

 
 
arvidab
New Member
  • Total Posts : 5
  • Reward points : 0
  • Joined: 2012/05/13 13:18:54
  • Status: offline
  • Ribbons : 0
Re:VMware 3.0 && bigadv folding 2012/05/27 07:10:52 (permalink)
Have you tried setting the network adapter for the VM to bridged? The default for VBox is NAT, which can sometimes cause problems; bridged always works for me. Worth a try at least.
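For reference, the adapter mode can also be switched from the host's command line with VBoxManage (the VM name "folding" and host interface eth0 below are placeholders; run while the VM is powered off):

```shell
# Switch NIC 1 of the VM to bridged mode on the host's physical adapter.
VBoxManage modifyvm "folding" --nic1 bridged --bridgeadapter1 eth0

# Confirm the change took effect:
VBoxManage showvminfo "folding" | grep "NIC 1"
```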
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/27 08:35:36 (permalink)
Yes, I have it set to bridged, because NAT mode assigned an IP out of the range of my network and I couldn't set up FAHMon, nor ping its IP, unless I had it set in bridged mode.

 
 
iDreadnaught
Superclocked Member
  • Total Posts : 141
  • Reward points : 0
  • Joined: 2012/04/16 14:44:52
  • Location: Florida
  • Status: offline
  • Ribbons : 1
Re:VMware 3.0 && bigadv folding 2012/05/30 20:10:05 (permalink)
linuxrouter
 
It could be a server issue or a firewall related issue somewhere on the network that is preventing access to those ports.

 
Well, I feel like an idiot.  The problem was my PeerBlock software. The host OS folds all day completing handshakes and downloading/uploading WUs, which is why I overlooked it. But PeerBlock apparently allowed the VM to download a WU while not allowing it to upload back to Stanford. I don't understand why, but I won't argue with success; everything is running fine now.
 
~iDreadnaught

 
 
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/06/01 07:16:22 (permalink)
Glad to hear you found what was causing the upload failure. :)
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/07/05 15:00:10 (permalink)
Over the past few months I have received e-mails from users of my images asking me to add a few things, including kernel builds for additional architectures, the slapt-get Slackware package management tool, and thekraken developed by Tear. As such, I have made a few updates to my Virtualbox image so far with the following.
 
+ Built kernel 3.4.4 for the following CPU types using GCC 4.7 and binutils 2.22:
    Intel: Core 2, Nehalem, Sandy Bridge, Ivy Bridge, Haswell
    AMD: Barcelona, Bulldozer, Piledriver
+ Added slapt-get 13.37 and dependencies and set to source off a slackware64-current mirror.
+ Updated Tear's Langouste and TheKraken tools to the latest versions available.
    Langouste: 0.15.7, TheKraken: 0.17-pre15
+ Installed VirtualBox Additions 4.1.18
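For anyone rebuilding the kernel themselves, the CPU types above map to GCC 4.7 `-march` values roughly as follows (my reading of the GCC 4.7 naming; verify with `gcc --target-help` on your own toolchain):

```shell
# Illustrative -march values for the CPU list above (GCC 4.7 naming):
#   core2       -> Intel Core 2
#   corei7      -> Intel Nehalem
#   corei7-avx  -> Intel Sandy Bridge
#   core-avx-i  -> Intel Ivy Bridge
#   core-avx2   -> Intel Haswell
#   barcelona   -> AMD Barcelona (family 10h)
#   bdver1      -> AMD Bulldozer
#   bdver2      -> AMD Piledriver
# Example: pass the flag into a kernel build via KCFLAGS:
#   make KCFLAGS="-march=corei7-avx" bzImage
```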
 

I do not have all of these different processor types, so I cannot say for sure what difference, if any, building the kernel for specific architectures will make for folding. I also received a request to attempt to build the kernel with AMD's compiler for Bulldozer. I have not had a chance to look into that yet, but it seems like an interesting project.
 
Testing with Virtualbox 4.1.18, I am seeing near the same performance as running in native Linux.
 
P7506 - 3930K @ 4.25 GHz - smp 12
Virtualbox: 2:08
Native: 2:05
 
This new version of Virtualbox appears to have some delay with ACPI shutdown. If you see a delay on reboot or shutdown, I think it might have to do with this new version.

CaseLabs M-S8 - ASRock X99 Pro - Intel 5960x 4.2 GHz - XSPC CPU WC - EVGA 980 Ti Hybrid SLI - Samsung 950 512GB - EVGA 1600w Titanium
Affiliate Code: OZJ-0TQ-41NJ
arvidab
New Member
  • Total Posts : 5
  • Reward points : 0
  • Joined: 2012/05/13 13:18:54
  • Status: offline
  • Ribbons : 0
Re:VMware 3.0 && bigadv folding 2012/07/05 15:11:28 (permalink)
Thanks for all the hard work, LR, really appreciated. I'd love to test out the new version native on my Sandy.
 
When you compare with native, I assume it's the same build as in the VM?
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/07/05 15:29:57 (permalink)
That was with the same build. I still have to update my native installer, which I hope to do soon. I will basically copy what is set up for VB, but then build the kernels with the necessary SATA/SAS and Ethernet support to cover different motherboard types.

CaseLabs M-S8 - ASRock X99 Pro - Intel 5960x 4.2 GHz - XSPC CPU WC - EVGA 980 Ti Hybrid SLI - Samsung 950 512GB - EVGA 1600w Titanium
Affiliate Code: OZJ-0TQ-41NJ
arvidab
New Member
  • Total Posts : 5
  • Reward points : 0
  • Joined: 2012/05/13 13:18:54
  • Status: offline
  • Ribbons : 0
Re:VMware 3.0 && bigadv folding 2012/07/05 18:28:30 (permalink)
Sounds great, will sure try it out.
arvidab
New Member
  • Total Posts : 5
  • Reward points : 0
  • Joined: 2012/05/13 13:18:54
  • Status: offline
  • Ribbons : 0
Re:VMware 3.0 && bigadv folding 2012/07/17 14:31:39 (permalink)
Double post I know...
 
Would the Barcelona option be the choice for any K10 (Phenom II, Opteron 61xx etc) based CPU? I'd guess so, but I'm unable to test this right now.
post edited by arvidab - 2012/07/17 14:44:30
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/07/19 23:20:48 (permalink)
arvidab

Double post I know...

Would the Barcelona option be the choice for any K10 (Phenom II, Opteron 61xx etc) based CPU? I'd guess so, but I'm unable to test this right now.

 
Barcelona covers family 10h (K10), so that kernel should work for your processors in this family.
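To double-check which family a CPU reports under Linux (family 10h shows as decimal 16 in /proc/cpuinfo):

```shell
# Print the CPU family of the first processor entry (decimal; 16 == 10h/K10).
awk -F': *' '/^cpu family/ {print $2; exit}' /proc/cpuinfo
```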
post edited by linuxrouter - 2012/07/19 23:21:53

CaseLabs M-S8 - ASRock X99 Pro - Intel 5960x 4.2 GHz - XSPC CPU WC - EVGA 980 Ti Hybrid SLI - Samsung 950 512GB - EVGA 1600w Titanium
Affiliate Code: OZJ-0TQ-41NJ
TheWolf
CLASSIFIED Member
  • Total Posts : 3800
  • Reward points : 0
  • Joined: 2007/11/14 16:05:23
  • Location: Moss Point, Ms
  • Status: offline
  • Ribbons : 9
Re:VMware 3.0 && bigadv folding 2012/08/08 16:39:33 (permalink)
Here is where I am. I'm wanting to set up folding on an SR-2 with E5645 & E5620 CPUs, 20 cores.
 
What would be best to use for this under Windows 7 64-bit: VirtualBox 4.1.18 or
something else? And what image should I use?
Links would be a great help. Some of the links on the 1st page don't seem to work for me at this time.
It's been a while since I've done a VM, so I may need some help along the way.
Thanks
post edited by TheWolf - 2012/08/08 16:40:57

EVGA Affiliate Code ZHKWRJB9D4 My HeatWare 
 
linuxrouter
Omnipotent Enthusiast
  • Total Posts : 8043
  • Reward points : 0
  • Joined: 2008/02/28 14:47:45
  • Status: offline
  • Ribbons : 104
Re:VMware 3.0 && bigadv folding 2012/08/10 21:56:27 (permalink)
TheWolf
Here is where I am. I'm wanting to set up folding on an SR-2 with E5645 & E5620 CPUs, 20 cores.

What would be best to use for this under Windows 7 64-bit: VirtualBox 4.1.18 or
something else? And what image should I use?
Links would be a great help. Some of the links on the 1st page don't seem to work for me at this time.
It's been a while since I've done a VM, so I may need some help along the way.
Thanks

 
If you plan to run GPU folding or need the Windows OS for running other apps, the Virtualbox image should work well and, from my testing, is close to the performance of folding in standalone Linux.
 
Here is a link to my latest Virtualbox image:
 
Virtualbox 1.5.0 image
 
The Internet connection to my server is unfortunately not the best so there may occasionally be an issue accessing the site.
post edited by linuxrouter - 2012/08/10 21:59:11

CaseLabs M-S8 - ASRock X99 Pro - Intel 5960x 4.2 GHz - XSPC CPU WC - EVGA 980 Ti Hybrid SLI - Samsung 950 512GB - EVGA 1600w Titanium
Affiliate Code: OZJ-0TQ-41NJ
TheWolf
CLASSIFIED Member
  • Total Posts : 3800
  • Reward points : 0
  • Joined: 2007/11/14 16:05:23
  • Location: Moss Point, Ms
  • Status: offline
  • Ribbons : 9
Re:VMware 3.0 && bigadv folding 2012/08/12 21:37:42 (permalink)
linuxrouter

TheWolf
Here is where I am. I'm wanting to set up folding on an SR-2 with E5645 & E5620 CPUs, 20 cores.

What would be best to use for this under Windows 7 64-bit: VirtualBox 4.1.18 or
something else? And what image should I use?
Links would be a great help. Some of the links on the 1st page don't seem to work for me at this time.
It's been a while since I've done a VM, so I may need some help along the way.
Thanks


If you plan to run GPU folding or need the Windows OS for running other apps, the Virtualbox image should work well and, from my testing, is close to the performance of folding in standalone Linux.

Here is a link to my latest Virtualbox image:

Virtualbox 1.5.0 image

The Internet connection to my server is unfortunately not the best so there may occasionally be an issue accessing the site.

First off: Thanks for your wonderful thread and hard work.
I have it pretty much worked out now, but you can look at my progress here and make suggestions if you see anything I could do to make it any better.
Thanks again great work.

EVGA Affiliate Code ZHKWRJB9D4 My HeatWare 
 