The column-count property was first proposed in January 2001 and became a candidate recommendation in December 2009. It's supported in WebKit and Mozilla via vendor-prefixed extensions, and in Opera directly, but it's not in IE9. Y U no support columns, IE9? That's OK; we can work around this with Columnizer:
if ($.browser.msie && $.browser.version < 10) { // am I a hopeless romantic for assuming that IE10 will support it?
    $('.multicolumn').columnize({
        width: 600,
        columns: 3
    });
}
/* Support for WebKit, Mozilla, Opera */
div#multicolumn, .multicolumn {
    -moz-column-count: 3;
    -moz-column-gap: 20px;
    -webkit-column-count: 3;
    -webkit-column-gap: 20px;
    column-count: 3;
    column-gap: 20px;
    width: 600px;
}
#region don't process the same address twice
if (newEmail.EmailHash == 0)
    newEmail.EmailHash = newEmail.Email.GetSHA1Hash();
if (emailsToAdd.Contains(newEmail.EmailHash))
{
    return false;
}
emailsToAdd.Add(newEmail.EmailHash);
#endregion
For this implementation, I am returning the hash as a 32-bit integer because I am using it for caching, not security. It's a little faster to use an Int32 index than the default 160-bit hex digest. If avoiding collisions is important (an Int32 can hold only 4,294,967,296 distinct values), remove the BitConverter.ToInt32 call and return a string instead.
It's written as an extension method, which requires C# 3.0 or later.
Here is the hash class. Call with STRING.GetSHA1Hash().
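The class itself didn't survive extraction here, but based on the description above (SHA-1 digest truncated to an Int32 via BitConverter.ToInt32, exposed as a string extension method), a sketch might look like this; the class name and encoding choice are my assumptions:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch reconstructed from the surrounding description; names are assumptions.
public static class HashExtensions
{
    // Returns the first 4 bytes of the SHA-1 digest as an Int32.
    // Good enough for caching/de-duplication; do NOT use for security.
    public static int GetSHA1Hash(this string input)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToInt32(digest, 0);
        }
    }
}
```

To get the full collision-resistant digest instead, return a hex string built from the whole `digest` array rather than calling `BitConverter.ToInt32`.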
Suppose you have a page that requires you to load content inline in response to some action. In the example below, you have a filter and a “Preview” button that needs to show some sample data.

You could have the button do a post and have the MVC handler parse the form fields and render the view with the data you want to show. But you may have a complex page with many tabs and want to optimize form performance. In that case, you want to send just the form data for the current tab and inject the response into the current page. Here is how to do that:
Here is the HTML of the relevant section of the “Export” tab:
<fieldset>
    <div class="btn green preview"><a href="#">Preview</a></div>
    <div class="clear"></div>
    <span> </span>
</fieldset>
<div class="btn orange submit"><a href="#">Export to CSV</a></div>
<div class="clear"></div>
Note that I am overriding the default Layout because I don't want to show the normal header/footer; _Blank.cshtml contains only “@RenderBody()”.
The handler for the /Preview target is:
[HttpPost]
public ActionResult Preview(ExportFilter filter)
{
    var export = new EmailExporter(filter);
    List emailList = export.Process();
    ViewBag.Message = string.Format("{0:##,#} records to be exported", emailList.Count);
    return View(filter);
}
Now when I click “preview”, I get a momentary “loading” screen and then the rendered view.
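The client-side wiring that posts the tab's fields and injects the response isn't shown above. A minimal jQuery sketch of the approach (the selectors, container id, and tab id are my assumptions, not from the original markup):

```javascript
// Assumed selectors; adjust to match your markup.
$('.btn.preview a').click(function (e) {
    e.preventDefault();
    // Serialize only the inputs on the current tab...
    var formData = $('#exportTab :input').serialize();
    // ...show a momentary "loading" message, POST to the Preview action,
    // and inject the rendered view into a placeholder on the page.
    $('#previewResult').html('Loading...');
    $.post('/Preview', formData, function (html) {
        $('#previewResult').html(html);
    });
});
```

Because the Preview view uses the blank layout, the returned HTML is just the fragment to display, so it can be dropped straight into the placeholder.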
This is a quick visual guide to setting up continuous integration with TFS 2010.
TFS is used to create build definitions with specific trigger criteria. After the builds are copied to the drop folder, MSBuild calls the MSDeploy publishing service to update the target website.
1: Configure a new TFS build controller and connect it to your TFS team project collection:
2: Add a new build definition:
3: Choose continuous integration build triggers. I set up two triggers: one for a rolling build, and one to run every day at 3 AM.
4: Specify the build controller and staging location (build output folder) for the builds.
5: Now you need to copy the builds from the build folder to the web server. We can do this using MSDeploy.
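The screenshot for this step isn't reproduced here. For reference, this is typically done by adding MSDeploy publishing properties to the build definition's MSBuild Arguments; the values below (server, site path, credentials) are placeholders, not from the original post:

```
/p:DeployOnBuild=True
/p:DeployTarget=MSDeployPublish
/p:MSDeployPublishMethod=RemoteAgent
/p:MSDeployServiceURL=http://webserver/MSDeployAgentService
/p:DeployIisAppPath="Default Web Site/MyApp"
/p:CreatePackageOnPublish=True
/p:UserName=DOMAIN\deployuser
/p:Password=PLAINTEXT-PASSWORD
```

Note that the password here is stored in plaintext in the build definition, which is the problem the TODO below refers to.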
(TODO: I need to figure out how to use integrated authentication so I don’t have to save the credentials in plaintext.)
Now I need to configure the website on the target server:
6: Set up the integration web server, using the same application path as the DeployIISAppPath above.
7: Install the web deployment tool 2.0 from http://www.iis.net/download/webdeploy
When you install the tool, make sure the Remote Agent Service is included in the install.
8: The Remote Agent Service is set to manual startup by default; change it to automatic.
9: That should be it. Now you should be able to “Queue a New Build” or (depending on your build trigger) check in some code and have your website updated!
You should be able to see your builds in TFS by double-clicking on the build definition. Any successful or failed builds will show up here:
Closing Notes:
The TFS server, build server, build drop location, and web server can all be separate machines. Using web deployment, we can locate the web server anywhere on the Internet, so we can use this method for one-click deployment to production as well. However, we probably don’t want to save the credentials for the production server in the build definition to avoid accidental one-click deployment to live servers!
Update:
If you just want to deploy the build output to a folder or file share, you can modify the build process template: add a CopyDirectory activity after the RunOnAgent step. (Source)
Over the last few weeks, I’ve experimented with image optimization tools. Using these tools, I have rapidly eliminated gigabytes of image data from thousands of images without any quality loss. Over time, this should translate to many terabytes of bandwidth savings.
Because these tools can be run in batch mode on thousands of images at a time, they are useful for optimizing large, existing image libraries. They are lossless and designed for bulk use, so you can run them without any loss in image quality. Still, test on a small sample first to learn each tool's specialties and quirks.
An alternative to running them locally is Yahoo! Smush.it, an online service that “uses optimization techniques specific to image format to [losslessly] remove unnecessary bytes from image files” using the same tools. The easiest way to run Smush.it is via Yahoo! YSlow, an extension for the Firebug add-on for Firefox. (By the way, Smush.it renames GIF images to .gif.png when it shrinks them. I wrote a console app to rename them back to .gif; browsers only look at the image header to identify the format, so it's safe to serve PNG images with a .gif extension.)
Mac:
For OS X, all you need is ImageOptim, which optimizes JPEG, PNG, and GIF auto-magically. Seriously awesome tool. (Free.)
For lossy optimization, JPEGmini is amazing. It uses knowledge of human visual perception to shrink JPEGs by up to 5X without visible quality loss. (Semi-free.)
Windows:
RIOT (Radical Image Optimization Tool):
Though it has a batch mode, this is the best tool for optimizing single images, whether they are JPG, PNG, or GIF. I use RIOT to save every image I work on as well as to reduce the size of existing images that are too large. You can re-compress PNG and GIF images losslessly, but for JPG you want to save from the original file.
RIOT is available as a standalone version as well as a plugin for several image editors such as the excellent IrfanView.
The JPEG Reducer:
Run this tool in bulk on all your JPEG images to save roughly 10% of the file size. It is a GUI front end for jpegtran, which optimizes JPEGs by removing metadata and other non-display data. Because it is lossless, it is safe to run on all your images; it will ignore any files you add that are not really JPEGs.
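Under the hood this amounts to a jpegtran invocation roughly like the following (file names are examples): -copy none drops metadata such as EXIF and comment markers, and -optimize recomputes the Huffman tables, both losslessly:

```shell
# Lossless: strip metadata and optimize Huffman coding.
jpegtran -copy none -optimize -outfile photo-opt.jpg photo.jpg
```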
PNG Gauntlet:
This tool is a front end for PNGOUT which will losslessly reduce the size of PNG images. Warning: if you add GIF or JPEG images to it, it will create PNG versions of those images. Sometimes you want to do this, but if not, don’t add images to the queue.
Let me know if you have other tools or ideas for image optimization.
auto-magically: 1: automatic, but with an element of magic. 2: too complex to understand and/or explain.