Image Slicing was Re: [oclug]for the mac / linux fans
bb at L8R.net
Tue Jan 14 01:46:24 EST 2003
On 14 Jan 2003 00:45:18 -0500
Shad Young <shad.young at sympatico.ca> wrote:
> On Mon, 2003-01-13 at 22:17, Brad Barnett wrote:
> > On Mon, 13 Jan 2003 21:43:49 -0500
> > Robert Echlin <rechlin at magma.ca> wrote:
> > > At 09:43 AM 1/13/03 -0500, Daniel Quinn wrote:
> > > >On January 13, 2003 09:25 am, Pat Gilliland wrote:
> > > > > Brad's discussion of image slicing makes sense to me. Would you
> > > > > please elaborate?
> > > > >
> > > >1.
> > > >to the human eye, the page *does* load faster. if you're waiting
> > > >for a page to load, you're much more likely to leave if you're
> > > >waiting 30 seconds for a single image to load. but if parts of that
> > > >image load in different areas, the user will wait longer 'cause it
> > > >appears that the site is loading faster.
> > >
> > > Daniel Quinn made a number of other points that I find valid, but
> > > this one I was going to make myself, before he beat me to it.
> > >
> > > I also want to point out that any image of a Canadian flag that a
> > > 'real' Graphic Artist would use would be horribly changed by
> > > replacing the red ends with an expanded single pixel - all the
> > > subtle gradations of shadow and shape would be lost, and the ends
> > > of the waving flag wouldn't be a rectangular image in the first
> > > place. - You may, if you wish, take this as also being a subtle
> > > dig at 'real' graphic artists, not just Brad. (grin)
> > Actually, the funny part is that I at no point in time advocated the
> > use of a single pixel image expanded to fill an area. I merely looked
> > at the possible savings of doing so, and how it is pointless in most
> > cases.
> > >
> > > Anyway, this is not something you do to 3K images, more like 30K or
> > > larger; my informed guess is that it's 1/100th as important.
> > Yes, as my tests showed, you can get very *minor* improvements in some
> > cases with 30k images. However, even those are minimal, and the worth
> > of doing so is questionable.
> Sigh. Have you ever heard of drop shadows? Custom lines, or a myriad of
> other effects possible with single-pixel images? They could be done in a
> complete image too, I suppose, but without any dynamic control and with
> a significant increase in bandwidth. While bandwidth usage is reduced,
> memory usage of the fully rendered page is actually substantially larger
> with single-pixel effects. Nothing comes for free.
> Using your example image of the flag, you can chop off the red, replace
> it with a series of repeating images, and cut the total bandwidth usage
> 10-fold, because the single-pixel images - what are they, 800 bytes? -
> are downloaded only once and then called from cache when the image is
> rebuilt. The page has more code, but it is substantially less than the
> reduction in *transferred* image size. See above for rebuild memory cost.
The paragraph above clearly shows that you did not take the time to
actually read my previous posts and look at the math I provided.
I even gave you an example EXACTLY like the one you are referring to
above: an example where one identical image is loaded multiple times to
make the left and right sides of the flag. Somehow, you didn't do the
math. Even though I SPECIFICALLY stated that I was only loading the
"endbit" once from the server, not 36 times, you seem to think my
calculations assumed otherwise - even though they didn't, and the numbers
were in writing right before you, laid out plainly for you to see.
Let me show it to you again:
Let's look at it another way. Take the Canadian flag. It's actually
something that you may think would be perfect for this at first. Chop off
the red ends. Call them "CFEnds.jpg". Call the middle "CFMid.jpg". Now
suddenly, the size of your image is about 2/3rds of what it was before!
You can use CFEnds.jpg+CFMid.jpg+CFEnds.jpg, and suddenly you have the
flag, with less bandwidth, right? Of course, the slicing being referred
to above is more intense than that. The CFEnds.jpg would probably be
sliced down to one small rectangle which, repeated, would form the
entire end, such as:
[endbit.jpg][endbit.jpg][endbit.jpg] <bigmiddlepart> <clone_of_left_side>
Wow! Suddenly you've saved tons upon tons of filesize, right? Let's look
at this in more detail! Instead of one image, you have one center part,
but the red ends have been chopped into 36 identical pieces, which only
need to be fetched from the server once! A savings?
Take a look at this image of the Canadian flag:
-rw-r--r-- 1 bbarnett bbarnett 4178 Jan 12 23:02 311x150.jpg
Now, using XV, I will take it and crop a piece of the red ends, equal to
1/18th of the size. This means that each end will represent the same
layout as above. Look, however! Part a is the small end bit that will be
repeated, and part b is the middle "leaf" part of our flag.
-rw-r--r-- 1 bbarnett bbarnett 743 Jan 12 23:10 311x150_a.jpg
-rw-r--r-- 1 bbarnett bbarnett 3224 Jan 12 23:11 311x150_b.jpg
How do you find it difficult to understand that 3224 + 743 = 3967, while
the original image was 4178 bytes? How do you find it difficult to
understand that the actual GET request (with full path) to the web
server, the extra IMG SRC tags and whatnot, actually make the above
sliced image MORE costly in bandwidth? Repeating a red square 18 times
per end, for a total of 36 times, doesn't save anything. It costs you in
terms of bandwidth, file handles on the server, and CPU usage on the
client.
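The arithmetic above can be sketched in a few lines (the file sizes are
taken from the ls listings earlier in this post):

```python
# Sizes from the ls output above.
original = 4178   # 311x150.jpg, the whole flag
end_bit = 743     # 311x150_a.jpg, the small red slice (fetched once)
middle = 3224     # 311x150_b.jpg, the middle "leaf" part

sliced_total = end_bit + middle
print(sliced_total)             # 3967 bytes of image data
print(original - sliced_total)  # 211 bytes "saved" before any HTML/HTTP overhead
```

A 211-byte saving on image data, before counting a single IMG tag or GET
request.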
Example: <IMG SRC="/images/file.jpg"> repeated 36 times is 28 bytes * 36
= 1008 bytes. That of course excludes other associated HTML flak that
will go into putting that image together. Let's look at the GET dialog:
for each image Mozilla grabs, we get:
GET /image/file.jpg HTTP/1.1..Host: l8r.net..User-Agent: Mozilla/5.0 (X11;
U ; Linux i686; en-US; rv:1.2.1) Gecko/20021210 Debian/1.2.1-3..Accept:
1..Accept-Encoding: gzip, deflate, compress;q=0.9..Accept-Charset:
ISO-8859-1, utf-8 ;q=0.66, *;q=0.66..Keep-Alive: 300..Connection:
Wow Shad, that's a total of .. well, let's just call it 80 chars * 8
lines, or 640 bytes. So, instead of one large image of 4178 bytes, we
have two images. The second is loaded 36 times instead of being part of
the original, so we need 36 extra IMG SRC tags. We also need two GET
requests to the server instead of one, so add 640 bytes for the second
request. This gives us 3224 + 743 + 640 + 1008 = 5615 bytes. Now, instead
of a 4178-byte image, we have used 5615 bytes. We've used up more file
handles on Apache, and we have a more CPU-intensive experience for the
end user. I haven't even included a table or whatever you're going to
put those images into, and we're already about 34% larger!
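The full tally works out like this (all figures are the estimates used
above: 28 bytes per IMG tag, roughly 640 bytes per GET request):

```python
# Total transfer cost of the sliced version vs. the single image.
middle = 3224      # middle "leaf" image
end_bit = 743      # repeated end slice, fetched from the server once
extra_get = 640    # approx. size of the second HTTP GET request
img_tags = 28 * 36 # one <IMG SRC="/images/file.jpg"> tag per repeat

sliced = middle + end_bit + extra_get + img_tags
original = 4178
print(sliced)                                       # 5615
print(round((sliced - original) / original * 100))  # 34 (percent larger)
```

And that still leaves out the table markup needed to lay the slices back
together.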
Wasn't this supposed to be a _save_ on bandwidth and space?
> Larger images are chopped up not for size savings alone, but usually
> for content requirements. E.g. the site skin, especially if it is a
> "theme" that is user-changeable. Most of these things are also coded in
> Perl or PHP, resulting in further savings, as the browser needs to
> communicate far less with the server than it would to build static
> content.
> Taking a picture of your favorite car that is, say, 800x600 in size and
> 150k, and chopping it up, will not save any bandwidth at all; the
> contrary is true, and you still get whammed with the memory increase.
> Now, an example of a bad hack job is my own site. It is, however, a
> mock-up and not for general consumption. But Brad, no serious developer
> would do it like this on a live site, so your cautions seem rather
> pointless at this point. And the kids who are going to school are
> learning precisely this.
"No serious developer?" "Kids who are going to school are learning this?"
No Shad, that's the problem. Developers do use slicing incorrectly. Many
developers aren't learning how to design web pages correctly. I've worked
with a lot of people, and I've seen some very, very scary cases of people
who had "graduated" and been in the field for years. I've also seen some
good people who have graduated, but were taught the wrong things, and
continued to use those habits until I met them.
Part of the reason is that people don't take into account the overhead of
getting that extra image from the server, and the extra HTML involved in
grabbing it. Even with mod_gzip installed for Apache, you're still way
over the line in terms of byte count.
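As a rough sketch of why compression doesn't rescue the sliced version:
assume, purely for illustration, that gzip shrinks the repeated IMG
markup to about 30% of its size (the ratio is an assumption; real
results vary). The HTTP request headers for the extra image still go
over the wire uncompressed:

```python
# Rough estimate: sliced-version cost even with the HTML gzipped.
image_data = 3224 + 743              # sliced image bytes, from the example above
img_tags = 28 * 36                   # repeated <IMG SRC> markup, uncompressed
compressed_tags = int(img_tags * 0.3)  # ASSUMED ~70% compression of the markup
extra_get = 640                      # request headers, sent uncompressed

total = image_data + compressed_tags + extra_get
print(total, total > 4178)           # 4909 True - still beats the single image
```

Even with the markup overhead mostly compressed away, the sliced page
transfers more bytes than the one 4178-byte image.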
> Sometimes designers do seem a little chop happy, until you actually do a
> careful analysis of why (or try to modify a page). Then you begin to see
> that when dealing with web applications, it is often not just easier but
> necessary for performance and creativity.
You aren't paying attention. Several times through this thread, I have
mentioned that there _are_ legitimate reasons for chopping up graphics.
I'm not addressing those issues, or calling them incorrect.
I am, however, addressing the collection of developers out there that do
chop up every graphic they see, for no other reason than to cause them to
"load faster" or to "save space".
Trust me, I've seen it done more than once, and I've seen some doozies.