Monday, February 24, 2014

Finding a Small High-Quality Digital Camera

Say: under $300.

This is not so easy. I’ve been trying myself to find something small to travel with that also gives good results.

What’s so hard?

  • Smaller digital cameras often produce images of very poor quality.
    • Some say the size of the sensor is the key factor; others disagree.
  • The JPEGs in digital cameras are generated on the fly. I suspect the engine that does this needs to be good, and manufacturers don’t dedicate the chip space/power in a small device to that end. (But that’s just my guess.)

What to do? Four pathways can help you make a choice; they are not mutually exclusive.

(While this blog is normally reserved for thoughts about teaching & learning technologies, a big question we often face is: how to choose the right tool, and so I thought "choosing the right camera" was close enough.)

1. Check out some thoughtful online resources.

  • A recommendation engine. Snapsort.com offers one.
    • Their prices are based on the lowest found, including used items, so they’re quite misleading.
  • Sophisticated reviews.
    • I find DPreview intelligent and reliable.
      • They’re highly detailed, and so it’s often wise to skip to the next-to-last page of the review (“Conclusions”).
    • Cameralabs does thoughtful reviews. They tend to be only a few pages, but I still often skip to the last page (“Verdict and Scores”).
  • Others’ experiences. I actually find the ratings on Amazon.com and BHPhotoVideo very useful.
    • You can search within price ranges (the filters are never quite accurate on Amazon) and then sort by the average customer review.
    • On Amazon, you can also pick a rating (say: four stars), then sort by “Relevance,” which often excludes some odd items showing up in the wrong place.
    • Individual reviews need to be taken with a grain of salt, as people get cranky about things relevant only to them. But the good reviews are very helpful, and the aggregate is, I think, meaningful.
  • Inductively. Look on photo sites like Flickr.com to see what results others get. If the results are indifferent, either the camera is bad, or it has not inspired enthusiasts (which is a different thing, likely with some overlap).
  • Scientifically. DXOMark tests sensors and lenses and publishes the results.
    • N.B. They’re testing the sensor, not the camera. This seems finicky, but it lets you compare the image quality separate from questions like: are the buttons easy to use?
    • The results are wonky, but they are effectively reduced to numbers, and so this lets you make some rational choices–e.g., “$300 more for lower picture quality? I don’t think so!”
    • You can actually search by size and price, though “under $500” is basically the lowest category.

2. Ask people.

I’m writing this, so I’ll tell you my experiences (not my opinions).

Lately I’ve been testing some smaller, high-quality cameras.

  • The Olympus XZ-2 is the follow-up to the XZ-1.
    • I got it on sale around Xmas.
    • It’s still on sale these days for $300.
    • I’ve been shooting with it lately and quite like it.
    • It has close-up modes for flowers and the like.
    • It has a wide aperture, good for shooting in low light and for separating the subject from the background (i.e., “shallow depth-of-field”).
    • The user has access to manual controls (aperture, shutter speed, etc.). I believe you can also run it fully automatically, though I haven’t used that mode.

3. Buy a small, pocketable camera that’s highly rated by various sites.

  • Cameralabs writes guides for various categories of camera–such as compact cameras.
    • In this category, they recommend (among other, more-expensive cameras):

4. Get something that would normally be more expensive but that’s older and on sale (probably to make way for forthcoming new models).

  • In this category, you’ll find small ‘mirrorless’ cameras.
    • These are like a smaller digital SLR (DSLR).
    • They have interchangeable lenses, like the SLRs of old. And the lens can be more expensive than the camera.
    • As with DSLRs, they usually come bundled with a lens that effectively adds only about $25 to the price–a tremendous bargain.
    • If you buy this, you’ll be in a ‘system’ (a combination of camera bodies and lenses and accessories), and so you can eventually get a better lens (even used).
  • In terms of size, these would be more ‘around the neck’ cameras than ‘pocketable,’ though sometimes the lens is a ‘pancake,’ which is small and flat, as the name implies.
  • Be aware that because the sensor is smaller than 35mm film, the equivalent of a “50mm lens” on an SLR is often a different (much smaller) number on these cameras (see the quick sketch after this list).
    • A “normal” lens (neither wide angle nor telephoto) is 40–50mm.
    • A wide-angle lens might be 28–35mm.
    • A 90–200mm lens would be a telephoto for capturing objects at a distance.
    • The word “equivalent” tells you that, say, a 20mm lens on the camera advertised frames like a 40mm lens would on 35mm film.
  • Sony, Olympus, Nikon and Pentax make excellent models.
    • BH Photo & Video has several of this type of camera now selling for under $300.
      • These include cameras whose sensors DXOmark rates very highly:
        • Sony NEX-3N: rated as 74.
        • Nikon 1 S1: rated as 56.
        • Nikon 1 J1: rated as 56.
        • Pentax Q10: rated as 49.
      • By comparison, the Olympus XZ-2 I’m using now seems pretty good to me and has a sensor rating of 34.
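
Since the “equivalent” arithmetic above trips people up, here’s a minimal sketch of it in Python (my own illustration, not anything the camera makers publish). The crop factors are commonly cited approximations, not specs I’ve verified for any particular model.

    # "Equivalent" focal length is just the actual focal length multiplied
    # by the sensor's crop factor relative to 35mm film.
    # The crop factors below are common approximations, not verified specs.
    CROP_FACTORS = {
        "35mm full frame": 1.0,
        "APS-C": 1.5,             # e.g., many small mirrorless bodies
        "Micro Four Thirds": 2.0,
        "Nikon 1 (CX)": 2.7,
    }

    def equivalent_focal_length(actual_mm: float, crop_factor: float) -> float:
        """Return the 35mm-equivalent focal length."""
        return actual_mm * crop_factor

    # A 20mm lens on a Micro Four Thirds body frames roughly like a
    # 40mm ("normal") lens does on a 35mm camera.
    print(equivalent_focal_length(20, CROP_FACTORS["Micro Four Thirds"]))  # 40.0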

A general tip: handle the actual camera, if you possibly can.

    Try to go to the store and test whatever you want to buy. I find people discover they just don’t like a given software interface or where the buttons are. If you have trouble working the dingus right off, often it doesn’t get much better.

Tuesday, February 11, 2014

If You See Something, Say Something: Conspicuous Absences at ELI 2014

If You See Something, Say Something.

I’m not thinking of suspicious packages.

Rather, I’m thinking about the standards and ethics of our profession: folks who support teaching and learning with technology.

In that regard, I saw several things at ELI 2014 which made me want to say something, and that something is basically "What goes on here? What do we as a profession do? And why can we not have a connected discussion about that?"

1. I saw a keynote speaker give blatantly wrong facts.

Okay. People make mistakes. Sure.

But this presentation pretended to give a ‘scientific’ basis to teaching and learning.

Should conference presentations perhaps be required to use footnotes?

One writing teacher I know asks this of undergraduates. Students must give a handout that includes:

(1) a short prose summary and (2) a list of references.

Problem solved? Perhaps. But that wasn’t the only conspicuous absence of professional standards on display.

2. I saw a presentation arguing for a certain model of instruction, but the presentation made no reference to other models, nor to any concepts of learning, nor to any existing ideas.

This was an argument in a vacuum.

If we wouldn’t permit undergrads to do it, should we do it ourselves?

This led me to a fear, which I now articulate. (See something, say something.)

Instructional technology as a profession seems to have no clear sense of standards of evidence––nor are these even really a part of the debate.

Think about any other discipline. History. Physics. Kinesiology.

  • You know what counts as evidence.
  • But you debate why some evidence is more meaningful than other kinds.
  • There are different schools and approaches, and they’re forced to duke it out.
  • Some standards and references are shared, some widely, some narrowly, while others are up for grabs.

Why should learning technology not be the same?

Nor are such issues just about evidence.

3. A presentation ostensibly about program evaluation offered no goal for the program, no significant research, and numbers that were blatantly fudged.

Of course, if there is no goal, there can be no measuring. (Measure what?)

In this case I actually asked during the Q&A if there was any theory or concept or idea of learning driving the process. (I couldn’t ask about institutional goals, as the presenters had basically said “The Provost wanted it,” and it was clear no one after that point had even thought to tack on a goal as a fig leaf.)

The answer was: no, we don’t have instructional designers; we have Ph.D.’s. As if planning learning intentionally and being a scholar are somehow mutually exclusive.

It’s easy to understand this. In higher ed, the disciplines are the guardians of standards of knowledge.

  • The psychologists decide what psychology is.
  • The dance teachers decide whether dance is modern or ballet or rolling around on the floor.
  • The English professors decide what counts as literature and literary analysis.
  • Etc.

But it’s shocking to think that (for some at least) this excludes any role for thinking about teaching and learning––or even planning in its most basic sense.

All of which brought me to the terrible near-existential recognition of a central absence.

Instructional technology as a profession seems to have no shared framework for specifying goals and measuring results––hence justifying the value we create (potentially but not only ROI).

  • What kinds of things can we accomplish when we use technology to support learning?
  • What is the size or scope of our interventions?
    • Are we just making it easier to turn in homework?
    • Are we publishing things that were harder to publish before––like lectures?
    • Are we solving psychological problems? Economic problems? Cultural problems?

Of course, some goals are easy to pick out: convenience, efficiency and effectiveness.

  1. At this point in time, convenience reduces largely to what I call x-shifting.

    • Just as the VCR allowed TV shows to be shifted in time and place, now increasingly-smaller computers allow content and experience to be shifted in time, place and platform. These may not be the only forms of convenience, but they’re paramount.

  2. Efficiency is simply doing more with less.

    • We can promise this––but we mustn’t lie: a small-scale study I did at my prior institution showed what I think we all know. With any new technology, you must put in more time at first in order to save time later.
    • This points up a little-mentioned analogy, which really ought to be the core of what we do in learning technology: learning a new technology is itself a species of learning, hence a microcosm for learning-in-general. Helping people learn to use a new technology helps them to re-see with new eyes the phenomenon of learning.

  3. Effectiveness is where we lose all our bearings. Ideally, we’d like to make teaching more effective, for it to generate more learning. But how?

    • What are the drivers of learning? Where are the pedals and the steering wheel? We don’t have a good taxonomy.

      • Better motivation? Sure.
      • Good chunking for better cognitive processing? Okay.
      • Better sequencing of instruction? Absolutely.

But do we have a clear picture of the whole shape of such goals?

I fear not.

When I see something, I can say something.

But that’s different from knowing the answers.

Five Takeaways from ELI 2014 in New Orleans



An academic IT conference in New Orleans begs to be told as a story.

But those stories are mostly about good food and good company. (Beignets!)

The actual “what I learned at ELI 2014” squeezes nicely into a list––or rather, a table.

I personally found five sessions (panels, presentations, poster sessions) compelling. Happily, some of the most expert presenters generously shared their visuals–as PowerPoints or PDFs.


Five Presentations and Some Resources

ONE

{title} Extreme Makeover - Course Edition:
Inspiring Faculty to Innovate and Collaborate in Instructional Design
{what it was} SFSU instructional designers created a course-redesign program
to efficiently support 25 faculty at a time.
{why it’s cool} Staff used a robust and appealing instructional design process for the faculty workshop itself. It wasn’t a question of telling faculty how to teach; rather, the staff actually gave the instructors a positive learning experience and the means to transfer that experience to their own courses.
{the files} The Workshop Process. The Faculty Takeaways.

TWO

{title} Google Glass: Implications for Teaching and Learning in Music and Digital Storytelling
{what it was} Two different use cases of Google Glass in higher education: one liberal arts, one for professional education (communication studies and orchestral conducting, respectively).
{why it’s cool} The two use cases seem indicative of broad types of education (liberal arts vs. professional training), and so though the cases are specific, the implications seem broad.


  • The liberal arts use of Google Glass involves capturing video of first-person experience and then subjecting it to critical thought and reflection through the process of editing––much as one does with prose writing.
  • The professional education use of Google Glass involves capturing the neophyte’s POV on video and then subjecting it to critique, analysis and supportive mentoring by an expert.
{the files} A Liberal Arts Use Case. [As of writing, the Professional Education Use Case PPT was not posted.]

THREE

{title} Diving Deep into Data: Motivations, Perceptions, and Learning in Minnesota MOOCs
{what it was} Careful analysis discloses that MOOC users fall into two groups: grazers and strivers. Strivers work hard to overcome the inherent obstacles of the format. But English language skills are an important pre-requisite, and their lack is one of the biggest obstacles to learner success in a MOOC.
{why it’s cool} Careful data collection around MOOCs can actually tell us something about who benefits––so we can make inferences about why and even plan the broad distribution of educational materials accordingly.
{the files} The Powerpoint.

FOUR

{title} Assessing Student Learning through the Use of Digital Video and Data Mining
{what it was} “Real-Time Mining of Student Notes and Questions” by Perry J. Samson, a meteorology professor. Samson showed how the LectureTools application let him build assessment into his classroom presentations so he could determine what teaching students needed.
{why it’s cool} The instructor can “assess as he goes,” and the students can review material later, including taking their own notes and sharing notes.
{the files} As of writing, the PPT was not shared.

FIVE

{title} Moving Math Online: Technology Solutions
{what it was} A straightforward workflow for creating online learning materials that include handwritten equations.
{why it’s cool} The approach supports many technologies.
{the files} The Tool Handout.