By definition, a CODEC starts with some series of bits (the source) that encodes information and produces a new series of bits (the encoding). A CODEC is said to be "lossy" when the result of decoding the encoding does not match the source, i.e. information encoded in the source stream gets dropped.
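A quick way to see the distinction is to round-trip data and compare it to the source. Here's a minimal sketch using Python's built-in zlib as the lossless example; the "lossy" half just quantizes bytes, which is a made-up stand-in rather than a real codec:

```python
import zlib

source = bytes(range(256)) * 4

# Lossless: decoding the encoding reproduces the source bit for bit.
encoded = zlib.compress(source)
assert zlib.decompress(encoded) == source

# Lossy stand-in: quantize each byte to a multiple of 8, then expand.
def lossy_encode(data, step=8):
    return bytes(b // step for b in data)

def lossy_decode(data, step=8):
    return bytes(b * step for b in data)

# The round trip no longer matches the source -- information was dropped.
assert lossy_decode(lossy_encode(source)) != source
```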
Various techniques are used to accomplish this. At one end of the scale you simply degrade the quality of the original. At the other you exploit visual/acoustic changes humans have difficulty recognizing. For instance, the human ear has trouble distinguishing two tones that are close in frequency when one is much louder than the other (auditory masking). MP3 drops the softer tone, more or less aggressively depending on the "compression" level; thus, from a source-fidelity standpoint the encoding is lossy, since the encoded form no longer contains all of the original's content. From a human-listening standpoint, the loss is barely noticeable to most people.
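As a toy illustration of that masking idea (this is not a real psychoacoustic model; the 100 Hz band width and 20 dB threshold below are made-up numbers), a perceptual coder's decision might look something like:

```python
def keep_tone(tone, neighbors, band_hz=100.0, mask_db=20.0):
    """Drop a tone if a much louder neighbor sits close enough in
    frequency to mask it (crude stand-in for simultaneous masking)."""
    freq, level_db = tone
    for n_freq, n_level_db in neighbors:
        if abs(n_freq - freq) < band_hz and n_level_db - level_db > mask_db:
            return False  # the louder nearby tone masks this one
    return True

loud = (1000.0, 80.0)  # a loud 1 kHz tone
print(keep_tone((1020.0, 40.0), [loud]))  # False: soft nearby tone is masked, drop it
print(keep_tone((5000.0, 40.0), [loud]))  # True: far away in frequency, keep it
```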
Here's the issue I think gweinberg has: imagine you encode an image to JPEG with the "quality slider" cranked all the way to 11 (or whatever the maximum is). Now the quantization steps are tiny and the compression is very inefficient. But suppose it exactly reproduces the input image; you might then say the encoding is lossless. The same image encoded with PNG (deflate) might be far smaller, but that does not change the fact that JPEG losslessly encoded this particular image.
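You can test whether a particular image survives a JPEG round trip exactly. A sketch assuming Pillow and NumPy are installed; quality=100 with subsampling disabled minimizes the loss, but whether the comparison comes out equal still depends on the image:

```python
import io
import numpy as np
from PIL import Image

# A synthetic flat-color image; simple content like this has a better
# chance of surviving the DCT/quantization round trip than a photo does.
img = Image.new("RGB", (64, 64), (128, 64, 200))

buf = io.BytesIO()
img.save(buf, format="JPEG", quality=100, subsampling=0)
decoded = Image.open(io.BytesIO(buf.getvalue()))

if np.array_equal(np.array(img), np.array(decoded)):
    print("this image round-tripped losslessly through JPEG")
else:
    print("information was lost for this image")
```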