Concise and clear, with examples that should speak to computer scientists.
However, I've always failed to see the practical interest of categories. It's nice to see how various things can be described or defined in a categorical framework, and it may shed new light on some concepts, but I didn't see any useful theorems you could apply (as you can in vector spaces, for instance). Probably I didn't go deep enough.
> However, I've always failed to see the practical interest of categories. It's nice to see how various things can be described or defined in a categorical framework, and it may shed new light on some concepts, but I didn't see any useful theorems you could apply (as you can in vector spaces, for instance). Probably I didn't go deep enough.
Almost by definition, you can't see the point of category theory until you go deep enough. Because statements in category theory are, in a very profound sense, statements about everything, they must logically be less substantial (or at least no more substantial) than statements about any particular thing.
If that is so, then why bother making these less substantial statements? The point is precisely that they do apply to everything, so that your knowledge about categorical statements pertaining to vector spaces instantly translates to knowledge of exactly the same statements about groups, or modules, or … whatever (as long as the relevant categories satisfy the same hypotheses).
That is, from my point of view at least, the utility of category theory is almost never in what it has to say about any one discipline, but rather the connections it makes, or at least the 'automated transfer of knowledge' that it allows.
EDIT: Also, I think that, much like the monad tutorial profusion, the large number of resources for learning category theory makes people think that they are obligated to go out and find a use for it. I think one is much less likely to be successful with the mindset "where can I use category theory?", and much more likely to be successful with the mindset "I have this language in my lexicon, and so will recognise the opportunity to speak it when it arises."
You don't need to go deep. Types and functions (ignoring non-termination) form a category. Functional programming (Haskell programmers in particular) uses many results from it.
e.g.
building any "container" type (collections, options, futures, etc.) is the object mapping of an (endo)functor; the function mapping is given by fmap/map
As an example, just from the functor laws you get map (f . g) = map f . map g
Natural transformations between functors give you parametricity and "theorems for free"
Another result that's used directly is the coYoneda lemma (but I'm still trying to understand it :) )
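For reference, the coYoneda construction can be sketched in a few lines; this mirrors `Data.Functor.Coyoneda` from the `kan-extensions` package, written out here standalone so it can be read without the library:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Coyoneda f is a Functor for *any* type constructor f, because fmap
-- just composes the function onto an accumulator instead of applying it.
data Coyoneda f a = forall b. Coyoneda (b -> a) (f b)

instance Functor (Coyoneda f) where
  fmap f (Coyoneda g fb) = Coyoneda (f . g) fb

-- Embed a value into Coyoneda without needing a Functor instance for f.
liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id

-- Escape back out; this is where a real Functor instance is required.
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda g fb) = fmap g fb

-- The coYoneda lemma says Coyoneda f a is isomorphic to f a when f is a
-- Functor. Practically: repeated fmaps fuse into a single traversal.
main :: IO ()
main = print (lowerCoyoneda (fmap (* 2) (fmap (+ 1) (liftCoyoneda [1, 2, 3]))))
-- prints [4,6,8]
```

The practical payoff is fmap fusion: the two `fmap`s above build up `(* 2) . (+ 1)` and the underlying list is only walked once, at `lowerCoyoneda`.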
The point is: category theory, in spite of being totally abstract, offers many interesting results that can be applied to day-to-day coding :)
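The functor law and the "theorems for free" point above can both be checked directly in GHCi; here's a minimal sketch (the specific functions and lists are just illustrative choices):

```haskell
main :: IO ()
main = do
  let f  = (* 2) :: Int -> Int
      g  = (+ 1) :: Int -> Int
      xs = [1, 2, 3] :: [Int]

  -- Functor composition law for lists (map is fmap for []):
  -- mapping a composed function equals composing the maps.
  print (map (f . g) xs == (map f . map g) xs)                  -- True

  -- The same law holds for any lawful Functor, e.g. Maybe:
  print (fmap (f . g) (Just 5) == (fmap f . fmap g) (Just 5))   -- True

  -- A "theorem for free": reverse is a natural transformation
  -- from [] to [], so it commutes with fmap for every f.
  print (map f (reverse xs) == reverse (map f xs))              -- True
```

The last check is the naturality square for `reverse`; parametricity guarantees it holds for any polymorphic function of type `[a] -> [a]`, with no proof needed per function.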
It just means that you have a category where the objects are CPOs instead of sets or types or whatever. It can actually add a whole lot of value to describing the laziness story.
When I was in a freshman math course, a professor started the semester with a warning: "Trying to learn math without solving exercises is like trying to teach a toddler to walk from physiology textbooks". Yet I haven't found a good source of exercises for introductory category theory. Is it the case that exercises for this subject are hard to come by? Or maybe it's a relatively advanced subject, so there are few books, and as always most books don't include exercises for some strange reason? Anyway, I would love to hear where I can find exercises on the subject, if anyone knows. (BTW, I don't mean sprinkling 3-4 exercises here and there like in the OP; I mean something like 10-15 exercises per lecture, which was the standard when I was a freshman.)
Lawvere and Schanuel's excellent "Conceptual Mathematics" has plenty of exercises, and the book is insanely approachable while still teaching a good amount (it's chock full of examples).
Mac Lane (Categories for the Working Mathematician) also has exercises and I think the ones from the first few chapters aren't so hard.
Oh nice. This is actually a nice survey of the basics and doesn't assume I have an M.S. or PhD in Mathematics already. I took a quick look through, and anybody who's at least learned working-level set theory and can follow various logic notations (basically any CS major) can probably work through this without too much fuss.
There are the Catsters, who have produced a large and wonderful set of videos teaching category theory. This is the best guide I know to watching those videos:
This is _incredibly_ naive because I'm a total layman when it comes to both, but is there a relationship between category theory and semiotics? It seems like indexes and symbols are similar to categories and morphisms. Again, I really don't know what I'm talking about as I've barely scratched the surface of either but it would be awesome to hear from someone who does so they can tell me that I'm misunderstanding both things (and if I'm lucky, _how_ I'm misunderstanding them).
I don't have a clue about semiotics, but "The Adjunction Between Syntax and Semantics" [0] gets bandied about a lot in this space. There's also an attempt to use adjunctions to explain "generalization" which might be relevant [1]. Generally, "adjunctions are everywhere".
The relationship you're sensing appears very general, on the order of asking "is there a relationship between category theory and linguistics?" I definitely think there's a relationship, because that's how the human intellect operates: everything is a relationship. But I do not think there is an intrinsic and meaningful relationship outside the context of general cognition.
I think Semiotics is important and the study of what the symbols in Category Theory are or why they are used is a meaningful study (and would enable a deeper understanding) but I don't think it's a prerequisite for comprehending Category Theory itself.
I figured that semiotics wouldn't at all be required to understand category theory (or vice versa). More that the general language of sign/signified and so forth feels similar in some vague way. But again, you're probably right in that the brain likes to find patterns even when they aren't really there.
Firstly, there simply aren't enough symbols. In programming, naming functions and variables is hard, really hard, and in math you don't have the luxury of giving long names to every separate concept. Part of the value of the notation is its compactness.
Secondly, symbols that are overloaded usually have related meanings, and much of the value in the notation is the abuse of notation that you get from it. You may think it just makes everything confusing and harder, but in truth, learning how to work with the notation is incredibly valuable, and leads to a deeper understanding of what things really are, and how they work.
That depends. Disambiguating these things makes them easier to read, and makes it easier to believe you understand what's going on. However, in my experience the mental effort involved in working out what the symbols mean in their different contexts is essential to actually internalizing and understanding them.
Yes, if you just want to read it without actually gaining any deep understanding or skill, use boldface, colors, words, mouse-over pop-ups. But if you want to learn something and be able to do it, you need to struggle with it. Only by doing the work do you gain the skill.
As in programming, simply reading programs gains you little, if anything. If you want to grow in skill, you need to work through the detail of programs you read, and do some programming yourself.
http://www.amazon.com/Category-Computer-Scientists-Foundatio...