fluid-work IRC Logs-2012-06-22
[02:49:30 CDT(-0500)] <logiclord_> how to access parent or sibling of a component ?
[03:44:43 CDT(-0500)] <SonicX> logiclord_: in jquery ?
[03:45:21 CDT(-0500)] <logiclord_> SonicX in infusion
[08:27:22 CDT(-0500)] <jessm> open author: http://us2.campaign-archive2.com/?u=32177111dabb558fc401595e5&id=86e124b276
[09:17:35 CDT(-0500)] <anastasiac> michelled, I have a branch to fix the 'My OER' page title ready for review: https://github.com/acheetham/OER-Commons/tree/608-my-oer-title
[09:18:27 CDT(-0500)] <cindyli> anastasiac: just saw your email about the access issue to oercommons private repo. Does it mean you guys cannot fork or see the new repo?
[09:18:35 CDT(-0500)] <cindyli> i have no problem with that
[09:18:42 CDT(-0500)] <anastasiac> michelled, I also implemented InlineEdit for the OER title in OpenAuth. It's ready except for screen reader testing: https://github.com/acheetham/OER-Commons/tree/565-editable-title, not sure if you want to review yet
[09:18:47 CDT(-0500)] <anastasiac> cindyli, where is the repo?
[09:18:54 CDT(-0500)] <cindyli> https://github.com/ISKME/OER-Commons
[09:18:59 CDT(-0500)] <cindyli> where it was
[09:19:23 CDT(-0500)] <anastasiac> cindyli, that page gives me '404' and my remote in git no longer responds
[09:19:35 CDT(-0500)] <cindyli> ah ha, interesting
[09:19:42 CDT(-0500)] <anastasiac> I guess you have permissions, and I don't?
[09:19:58 CDT(-0500)] <cindyli> maybe
[11:07:44 CDT(-0500)] <sgithens> Does anyone know how to get the architecture list to send you copies of the emails you send? When I look at my preferences, that option is checked, but I never get them.
[11:08:45 CDT(-0500)] <logiclord> how to access member functions of a sibling or parent component? e.g. bookHandler has 3 children: fileFacilitator, parser and navigator.
[11:08:46 CDT(-0500)] <logiclord> how can parser and navigator access member functions of fileFacilitator ??
[11:25:47 CDT(-0500)] <logiclord> yura:ping ?
[11:26:20 CDT(-0500)] <yura> logiclord: hi
[11:27:44 CDT(-0500)] <yura> logiclord: so to answer your question: you can refer to the parent component via ioc in options or defaults of your component
[11:28:08 CDT(-0500)] <yura> logiclord: can you give a sample code of your particular example ?
[11:29:23 CDT(-0500)] <colinclark> sgithens: Just saw your note now
[11:29:38 CDT(-0500)] <colinclark> I'm not quite sure how to do that
[11:30:07 CDT(-0500)] <colinclark> I subscribe with two different addresses, so I end up getting them anyway, but you'd think that Mailman would respect the setting
[11:30:19 CDT(-0500)] <sgithens> yeah, it's weird
[11:30:23 CDT(-0500)] <colinclark> I can ask our Avtar if he has any insights
[11:30:28 CDT(-0500)] <sgithens> when I found out it was actually checked
[12:13:11 CDT(-0500)] <logiclord_> yura: so I have to pass the sibling ??
[12:13:28 CDT(-0500)] <logiclord_> yura: e.g. bookHandler has 3 children: fileFacilitator, parser and navigator.
[12:14:27 CDT(-0500)] <logiclord_> a method in parser and navigator (Both) need to use a method of fileFacilitator
[12:18:00 CDT(-0500)] <logiclord_> yura: currently it is working.. I have passed fileFacilitator child of bookHandler to the member functions of parser and navigator
[12:18:02 CDT(-0500)] <yura> logiclord_: well you do not have to pass it, if it's just a specific field that you want from a sibling, just pass it in options/defaults as "{sibling}.options.option"
[12:18:28 CDT(-0500)] <yura> logiclord_: the framework and IOC will resolve the string value to an actual value when options are merged
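The pattern yura describes can be sketched roughly as follows. All component names here are hypothetical (echoing logiclord's bookHandler example), and the defaults are shown as plain objects rather than registered via `fluid.defaults()`, so the IoC reference strings are visible as the plain data they are before the framework resolves them at merge time:

```javascript
// Hypothetical defaults for two children of a bookHandler component.
// In Infusion these would be registered with fluid.defaults(); here they
// are plain objects so the IoC reference strings can be inspected.
var parserDefaults = {
    gradeNames: ["fluid.component"],
    members: {
        // "{fileFacilitator}" asks the IoC system to resolve the sibling
        // component by context name when the component tree is built.
        fileObj: "{fileFacilitator}"
    }
};

var navigatorDefaults = {
    gradeNames: ["fluid.component"],
    options: {
        // A narrower reference: pull a single option from the sibling
        // rather than grabbing the whole component.
        chapterSource: "{fileFacilitator}.options.chapterList"
    }
};
```

The point of the string form is that the reference stays declarative: nothing is passed by hand, and the framework substitutes the real sibling (or one of its options) when options are merged.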
[12:19:41 CDT(-0500)] <logiclord_> yura: It's a member I want to access, a method (though just figured out a way around)
[12:19:50 CDT(-0500)] <logiclord_> ^member function
[12:20:42 CDT(-0500)] <yura> logiclord_: so if you push your stuff on github then i might give you better suggestions , what do you think ?
[12:21:27 CDT(-0500)] <logiclord_> yura: just trying to do so
[12:21:50 CDT(-0500)] <logiclord_> sry for the delay
[12:22:22 CDT(-0500)] <yura> logiclord_: no worries, i think I might even be able to leave inline comments in github
[12:23:47 CDT(-0500)] <logiclord_> yura: code is still in a primitive stage of the component.. need to add selectors, provide more options, viewComponent etc
[12:23:53 CDT(-0500)] <logiclord_> done
[12:24:12 CDT(-0500)] <logiclord_> https://github.com/logiclord/web-based-epub-reader/blob/master/js/epubReader.js
[12:25:25 CDT(-0500)] <logiclord_> line 424.. I made it work by passing fileobj i.e. filefacilitator
[14:34:30 CDT(-0500)] <anastasiac> michelled, I have a branch for 566 ready for review: https://github.com/acheetham/OER-Commons/tree/566-steps-alt-text
[14:35:05 CDT(-0500)] <anastasiac> michelled, I've also tested the InlineEdit work with a screen reader, and that branch is ready for review: https://github.com/acheetham/OER-Commons/tree/565-inlineEdit
[14:35:34 CDT(-0500)] <anastasiac> michelled, also a reminder: the 'my OER' page title branch is ready for review: https://github.com/acheetham/OER-Commons/tree/608-my-oer-title
[15:19:48 CDT(-0500)] <thealphanerd> is anyone around who can answer a question about the infusion getting started tutorial?
[15:20:06 CDT(-0500)] <thealphanerd> specifically regarding the rendering component
[15:20:33 CDT(-0500)] <thealphanerd> having a bit of a hard time wrapping my head around the produceTree
[15:20:42 CDT(-0500)] <michelled> thealphanerd: anastasiac is probably your best bet
[15:20:54 CDT(-0500)] <michelled> anastasiac: thx for all the branches
[15:21:06 CDT(-0500)] <thealphanerd> thank you michelled
[15:21:19 CDT(-0500)] <thealphanerd> anastasiac: let me know if you are around … I will be waiting patiently
[15:22:01 CDT(-0500)] <anastasiac> thealphanerd, I'm here. before getting into produceTree, do you think you understand component trees?
[15:22:20 CDT(-0500)] <thealphanerd> maybe
[15:22:21 CDT(-0500)] <thealphanerd>
[15:22:58 CDT(-0500)] <alexn1> michelled: how can I ask Sergey a question through Assembla ?
[15:23:21 CDT(-0500)] <thealphanerd> not 100% sure where the component tree is in this currency converter example (or should I say that currency converter )
[15:23:22 CDT(-0500)] <michelled> alexn1: there is a 'messages' tab in the interface - create a message there
[15:23:29 CDT(-0500)] <anastasiac> well, the idea behind produceTree is that you create a function that returns your desired component tree. This is a good option if your component tree is dependent on things that are known at runtime. If your component tree is pretty much static, there are other options
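anastasiac's description of produceTree can be sketched in plain JS. This is only an illustrative analogue: the function shape (take the component, return a tree of renderer components keyed by selector name) follows the idea she describes, but the names (`amount`, `currency`, `rates`) are hypothetical, loosely echoing the tutorial's currency converter, and the component is stubbed as a bare object:

```javascript
// A minimal sketch of the produceTree idea: a function that builds the
// renderer component tree at runtime from whatever the component knows.
function produceTree(that) {
    return {
        // Each key matches a selector name from the component's selectors
        // map; valuebinding ties the rendered control to a model path.
        amount: { valuebinding: "amount" },
        currency: {
            selection: "currency",
            // Runtime-dependent part: the choices come from the model,
            // which is exactly why a static tree wouldn't work here.
            optionlist: Object.keys(that.model.rates),
            optionnames: Object.keys(that.model.rates)
        },
        result: { value: that.model.amount * that.model.rates[that.model.currency] }
    };
}

// Hypothetical component state standing in for a real Infusion component:
var converter = { model: { amount: 10, currency: "CAD", rates: { CAD: 1.02, EUR: 0.78 } } };
var tree = produceTree(converter);
```

Because the tree is computed each time, re-rendering after a model change just means calling produceTree again with the updated component.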
[15:23:54 CDT(-0500)] <anastasiac> do you have any specific questions, thealphanerd, or just looking for some suggestions?
[15:24:08 CDT(-0500)] <thealphanerd> I'm Colin's GSOC student
[15:24:19 CDT(-0500)] <thealphanerd> and I'm working on converting the work I have right now to infusion components
[15:24:38 CDT(-0500)] <thealphanerd> trying to work out the best way to structure / program it
[15:25:35 CDT(-0500)] <thealphanerd> I already have all of my html being rendered using the d3 library and jquery… so just trying to wrap my head around how to implement using a combination of view / model / renderer components
[15:27:01 CDT(-0500)] <anastasiac> well, thealphanerd, you're on the right track. the first step would be to learn a bit about renderer component trees and work out what you'll need to render your interface. Have you looked over http://wiki.fluidproject.org/display/docs/Renderer+Component+Trees and http://wiki.fluidproject.org/display/docs/Renderer+Component+Tree+Expanders yet?
[15:27:04 CDT(-0500)] <thealphanerd> as an aside… there is no Html example to go along with the rendering component example, so I don't actually have the source in front of me to help figure out how things are going on
[15:27:23 CDT(-0500)] <thealphanerd> reading the bit on render component trees right now
[15:27:27 CDT(-0500)] <anastasiac> thealphanerd, yes, sorry about that. the tutorial is a work in progress :-/
[15:27:32 CDT(-0500)] <thealphanerd> no problem
[15:27:35 CDT(-0500)] <thealphanerd> here to help
[15:28:52 CDT(-0500)] <anastasiac> thealphanerd, what is your interface about? what's being rendered? is it repeated data, or form-type stuff?
[15:29:06 CDT(-0500)] <thealphanerd> https://github.com/thealphanerd/piano
[15:29:13 CDT(-0500)] <anastasiac> ah...
[15:29:16 CDT(-0500)] <thealphanerd> in browser piano
[15:29:25 CDT(-0500)] <thealphanerd> so there are two different views that can be rendered (so far)
[15:29:28 CDT(-0500)] <thealphanerd> a piano and a grid
[15:29:54 CDT(-0500)] <thealphanerd> there is a component necessary for creating the oscillator
[15:30:17 CDT(-0500)] <anastasiac> thealphanerd, will the two views be mapped to the same internal model, or both onscreen together, mapping to different areas of a model?
[15:30:24 CDT(-0500)] <thealphanerd> identical models
[15:30:28 CDT(-0500)] <anastasiac> neat
[15:30:41 CDT(-0500)] <thealphanerd> unicron.biz/piano
[15:30:49 CDT(-0500)] <thealphanerd> so the interface will be drawn in "layers"
[15:31:00 CDT(-0500)] <thealphanerd> still figuring out the logic
[15:31:07 CDT(-0500)] <thealphanerd> but the idea is you can declare key types
[15:31:23 CDT(-0500)] <thealphanerd> each key type has height / width / etc
[15:31:45 CDT(-0500)] <thealphanerd> everything is an svg, and gets rendered in layers to allow stacking for the piano keyboard
[15:31:59 CDT(-0500)] <thealphanerd> or is all a single key type for grids (unless you want various colors)
[15:32:43 CDT(-0500)] <thealphanerd> everything is generated based on the model
[15:32:58 CDT(-0500)] <thealphanerd> so I'm thinking there will be a renderer, and one of the options will be picking the view
[15:32:59 CDT(-0500)] <anastasiac> so thealphanerd, the idea with the infusion renderer is that you create an html template that contains the markup to be used by the renderer. the 'component tree' is the instructions for how to use that markup and how to map it to the data model
[15:33:28 CDT(-0500)] <thealphanerd> anastasiac: what if the markup is being generated and changed on the fly?
[15:34:01 CDT(-0500)] <anastasiac> what triggers the changes, is it changes to the model?
[15:34:06 CDT(-0500)] <thealphanerd> yup
[15:34:14 CDT(-0500)] <thealphanerd> that's where redraw would come in right?
[15:34:50 CDT(-0500)] <anastasiac> iirc, the renderer can automatically bind the markup to the model such that it will (or can be made to) re-render when the model changes
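The binding anastasiac is recalling (model changes trigger a re-render) can be sketched in plain JS. This is only an analogue of the flow, not Infusion's actual API: in Infusion the ChangeApplier fires a modelChanged event and the component re-renders; here both sides are stubbed with hypothetical names so the mechanism is visible:

```javascript
// Plain-JS sketch of "re-render when the model changes": a tiny watcher
// that notifies listeners after every change, standing in for Infusion's
// ChangeApplier + modelChanged + refreshView cycle.
function makeModelWatcher(model) {
    var listeners = [];
    return {
        model: model,
        addListener: function (fn) { listeners.push(fn); },
        // Rough analogue of requesting a change through the applier:
        change: function (path, value) {
            model[path] = value;
            listeners.forEach(function (fn) { fn(model); });
        }
    };
}

var renders = 0;
var watcher = makeModelWatcher({ note: "C4" });
// Stand-in for the component's refreshView being wired as a listener:
watcher.addListener(function () { renders += 1; });
watcher.change("note", "G4");   // mutate via the watcher, not directly
```

The key discipline is the same as in Infusion: change the model only through the applier-like interface, so every change reliably reaches the render step.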
[15:35:20 CDT(-0500)] * anastasiac hasn't used the renderer recently, so my recall is rusty
[15:35:26 CDT(-0500)] <thealphanerd> this is where I am getting confused
[15:35:39 CDT(-0500)] <thealphanerd> because as it is right now… my markup is literally a single div
[15:35:52 CDT(-0500)] <thealphanerd> and I find that element in the dom and generate everything inside of it
[15:36:55 CDT(-0500)] <anastasiac> you mean you hardcode the html being generated in your code?
[15:37:20 CDT(-0500)] <thealphanerd> nope
[15:37:40 CDT(-0500)] <thealphanerd> well hardcoded yes
[15:37:50 CDT(-0500)] <thealphanerd> (I keep getting mixed up between soft / hard)
[15:37:57 CDT(-0500)] <thealphanerd> the html file has a single div
[15:38:01 CDT(-0500)] <thealphanerd> inside the body
[15:38:28 CDT(-0500)] <thealphanerd> then the js draws a canvas in that div… and all the svg elements
[15:38:34 CDT(-0500)] <thealphanerd> based on a model
[15:39:25 CDT(-0500)] <anastasiac> ah, interesting
[15:39:53 CDT(-0500)] <thealphanerd> that way you can make all sorts of keyboards
[15:39:54 CDT(-0500)] <thealphanerd> or instruments
[15:39:57 CDT(-0500)] <thealphanerd> of various tuning systems
[15:40:01 CDT(-0500)] <thealphanerd> or sizes
[15:40:12 CDT(-0500)] <anastasiac> so the piano is not actually html, it's a canvas element?
[15:40:22 CDT(-0500)] <thealphanerd> svg elements
[15:40:41 CDT(-0500)] <thealphanerd> you just need a single dom element to put it in to
[15:40:41 CDT(-0500)] <anastasiac> hm
[15:41:29 CDT(-0500)] <thealphanerd> truthfully I think I could pull off what I'm trying to do with simply using model / event components
[15:41:32 CDT(-0500)] <anastasiac> the Renderer has never been used with svg elements, you're charting new territory
[15:41:36 CDT(-0500)] <thealphanerd> but I'm trying to understand the entire framework
[15:41:49 CDT(-0500)] <anastasiac> my svg knowledge is pretty limited
[15:42:05 CDT(-0500)] <thealphanerd> I'm having mixed feelings about it LD
[15:42:19 CDT(-0500)] <thealphanerd> but it is really simple to make responsive interfaces
[15:42:28 CDT(-0500)] <anastasiac> I wonder if Bosmon might be able to advise on the Renderer - he knows it pretty well
[15:42:52 CDT(-0500)] <thealphanerd> I like the idea of the "redraw" functionality of the renderer
[15:43:09 CDT(-0500)] <thealphanerd> and see that as something I could easily have to program myself to get this all working
[15:43:19 CDT(-0500)] <thealphanerd> and would rather not reinvent the wheel
[15:43:45 CDT(-0500)] <thealphanerd> would you maybe be able to figure out what the html for the renderer example should be?
[15:43:48 CDT(-0500)] <thealphanerd> that might help me a lot
[15:44:13 CDT(-0500)] <anastasiac> I'll have a look, thealphanerd
[15:44:21 CDT(-0500)] <thealphanerd> thank you anastasiac you are awesome
[15:45:32 CDT(-0500)] <anastasiac> thealphanerd, have you looked at our renderer instructional demos? http://wiki.fluidproject.org/display/docs/Renderer+Instructional+Demos
[15:45:49 CDT(-0500)] <thealphanerd> this I had not seen
[15:45:59 CDT(-0500)] <thealphanerd> this is worth checking out
[15:46:00 CDT(-0500)] <thealphanerd>
[15:46:17 CDT(-0500)] <anastasiac> they're pretty simple examples, but they might help you understand the relationships between model, component tree and html template
[15:46:34 CDT(-0500)] <thealphanerd> it's pretty amazing… a month ago I looked at this tutorial and it was gibberish… and I finally understand it, not just how to implement, but why it's awesome
[15:47:06 CDT(-0500)] <thealphanerd> I see the value in infusion for sure
[15:47:22 CDT(-0500)] <anastasiac> well that's nice to hear
[15:47:30 CDT(-0500)] <thealphanerd> anastasiac: what's your role on the team?
[15:47:54 CDT(-0500)] <anastasiac> thealphanerd, I'm one of the developers, but I'm also in charge of the documentation
[15:48:57 CDT(-0500)] <thealphanerd> anastasiac: as someone who walked in to this very fresh… it might prove useful to explain a few things that might seem "obvious" to someone wanting to use the framework. Specifically a basic introduction to closures, and their role in functional programming
[15:49:23 CDT(-0500)] <thealphanerd> although I understand that something like that may be outside of the scope of the doc… but I found it really hard to understand what was going on without wrapping my head around that first
[15:49:44 CDT(-0500)] <anastasiac> thealphanerd, good suggestion. It's always helpful to hear feedback from people who are new to Infusion. Please keep the suggestions coming!
[15:50:06 CDT(-0500)] <thealphanerd> I think there is a lot of philosophy embedded in infusion
[15:50:30 CDT(-0500)] <thealphanerd> and you definitely touch upon it in the framework concepts
[15:51:01 CDT(-0500)] <thealphanerd> but that documentation read to me as if it was written for those who already have a fairly firm understanding of js / development
[15:52:07 CDT(-0500)] <thealphanerd> but I guess that raises a question as to who will be using infusion…
[15:54:41 CDT(-0500)] <anastasiac> we do kind of assume that you know js, but maybe some pointers to documentation on the coding principles we use would be a good idea
[15:55:27 CDT(-0500)] <thealphanerd> ahhh so you use the produceTree to connect the selectors to the model
[15:55:50 CDT(-0500)] <thealphanerd> interesting… I don't know if that would necessarily translate to what I'm doing
[15:56:15 CDT(-0500)] <thealphanerd> but I guess I then need to question how I am generating everything
[15:56:42 CDT(-0500)] <thealphanerd> this will be very useful for other things in the app though
[15:59:32 CDT(-0500)] <anastasiac> thealphanerd, "use the produceTree to connect the selectors to the model" is not quite correct. the component tree defines the relationship between your model and your components; the list of cutpoints directly maps your selectors to your component
[16:00:03 CDT(-0500)] <thealphanerd> hmmmm, ok maybe I need to dig a bit deeper
[16:00:07 CDT(-0500)] <thealphanerd> over simplifying
[16:00:54 CDT(-0500)] <anastasiac> the 'components' in the component tree are actually kind of virtual concepts, like "a selection between choices". That could be rendered as checkboxes, or radio buttons. but the component tree would be the same
[16:01:46 CDT(-0500)] <anastasiac> the list of selectors functions as the cutpoint list if you don't actually provide one
[16:01:55 CDT(-0500)] <anastasiac> which is the simplest way to do it
[16:02:46 CDT(-0500)] <thealphanerd> OH… so I could supply a custom cutpoint list, to connect the produceTree to various elements of different components?
[16:03:12 CDT(-0500)] <thealphanerd> does each component store its own model? or is it possible to have multiple components share models
[16:03:23 CDT(-0500)] <anastasiac> thealphanerd, yes, you can provide a custom cutpoint list. Using the selectors list is kind of a short-cut
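The shortcut anastasiac mentions (the selectors map standing in for the cutpoint list) can be sketched as a small transform. The selector names below are hypothetical, and the derivation function is only a rough picture of what the framework does for you, not its actual code:

```javascript
// Hypothetical selectors map for a piano component:
var selectors = {
    "piano-key": ".flc-piano-key",
    "key-label": ".flc-piano-key-label"
};

// Roughly what the framework derives when you don't supply cutpoints:
// each selector becomes a cutpoint { id, selector }, so renderer
// component IDs line up with named places in the markup template.
function selectorsToCutpoints(selectors) {
    return Object.keys(selectors).map(function (name) {
        return { id: name, selector: selectors[name] };
    });
}

var cutpoints = selectorsToCutpoints(selectors);
```

Supplying an explicit cutpoint list instead just means writing that array by hand, which lets the ids and selectors diverge from the component's own selectors map when needed.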
[16:03:27 CDT(-0500)] <thealphanerd> or is this something you can create through design
[16:04:10 CDT(-0500)] <anastasiac> thealphanerd, I'm unfortunately going to have to double-check your meaning of 'component' in that last question. Unfortunately, we've overloaded that word in Infusion.
[16:04:26 CDT(-0500)] <thealphanerd> fair enough
[16:04:40 CDT(-0500)] <anastasiac> very loosely (and probably oversimplifying), you're probably creating a piano 'component' that would have a model
[16:04:44 CDT(-0500)] <thealphanerd> I was under the impression that your app is made of various components
[16:04:50 CDT(-0500)] <anastasiac> right
[16:04:52 CDT(-0500)] <anastasiac> correct
[16:04:55 CDT(-0500)] <thealphanerd> so I'll have a piano component
[16:05:01 CDT(-0500)] <thealphanerd> an oscillator component
[16:05:04 CDT(-0500)] <thealphanerd> a grid component
[16:05:11 CDT(-0500)] <anastasiac> when we talk about renderer 'component trees', those components are different than your app components
[16:05:14 CDT(-0500)] <anastasiac> confusing, I know
[16:05:21 CDT(-0500)] <thealphanerd> oh ok
[16:05:28 CDT(-0500)] <thealphanerd> what components are those?
[16:05:44 CDT(-0500)] <thealphanerd> (this is better, the other way was confusing me )
[16:05:53 CDT(-0500)] <anastasiac> if you're rendering a form, for example, each control (text input, radio button, etc) would have a 'component' in the component tree
[16:06:08 CDT(-0500)] <anastasiac> very, very different than your app components!
[16:06:55 CDT(-0500)] <anastasiac> the 'components' in the component tree don't have a model, they are used to render one of the pieces of data in your Component's model (like your grid component)
[16:07:17 CDT(-0500)] <thealphanerd> ok
[16:07:22 CDT(-0500)] <anastasiac> so your grid Component would have a model, and the component tree it renders might have a 'component' for each square
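The "a 'component' for each square" idea can be sketched by mapping the grid Component's model onto repeated renderer components. Everything here is hypothetical (model shape, field names, the `ID: "square:"` repeated-component convention is only loosely echoed), and the output is plain data, as renderer component trees are:

```javascript
// Hypothetical model owned by the grid app Component:
var gridModel = {
    squares: [
        { note: "C4", active: false },
        { note: "D4", active: true },
        { note: "E4", active: false }
    ]
};

// Build the repeated part of a renderer component tree: one renderer
// 'component' per model item, none of which has a model of its own --
// each just renders one piece of the app Component's model.
function squareComponents(model) {
    return model.squares.map(function (square) {
        return {
            ID: "square:",   // trailing colon marks a repeated component
            value: square.note,
            decorators: square.active
                ? [{ type: "addClass", classes: "active" }]
                : []
        };
    });
}

var children = squareComponents(gridModel);
```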
[16:07:26 CDT(-0500)] <thealphanerd> so each key would in essence be a component
[16:07:36 CDT(-0500)] <anastasiac> a renderer component, yes
[16:07:41 CDT(-0500)] <anastasiac> but not an app Component
[16:07:45 CDT(-0500)] <thealphanerd> nope
[16:07:56 CDT(-0500)] <anastasiac> (we really need to find a different name for one or the other!)
[16:07:56 CDT(-0500)] <thealphanerd> the piano itself would be the app component
[16:08:12 CDT(-0500)] <thealphanerd> element for the render components?
[16:08:16 CDT(-0500)] <thealphanerd> render element
[16:08:45 CDT(-0500)] <anastasiac> element is also a bit overloaded, since it is used for HTML elements, and renderer 'foodles' are not html elements
[16:09:01 CDT(-0500)] <thealphanerd> sigh
[16:09:04 CDT(-0500)] <thealphanerd> damn linguistics
[16:09:08 CDT(-0500)] <anastasiac> indeed
[16:09:14 CDT(-0500)] <thealphanerd> you should just use an abstract symbol
[16:09:15 CDT(-0500)] <thealphanerd> lol
[16:09:37 CDT(-0500)] <anastasiac> "the thing formerly known as a component"
[16:09:47 CDT(-0500)] <thealphanerd> we shall call them purple rains
[16:09:52 CDT(-0500)] <thealphanerd> droplets?
[16:09:52 CDT(-0500)] <anastasiac>
[16:10:15 CDT(-0500)] <thealphanerd> actually you know what would be clever and border on a pun
[16:10:17 CDT(-0500)] <thealphanerd> leafs
[16:10:26 CDT(-0500)] <thealphanerd> but now we are getting side tracked
[16:10:39 CDT(-0500)] <thealphanerd> (although I love puns in programming frameworks, easier to remember)
[16:13:48 CDT(-0500)] <anastasiac> thealphanerd, I have to head out now, but I'll be online next week
[16:14:20 CDT(-0500)] <anastasiac> and lots of other people in the channel could also be helpful, if I'm not around
[16:15:37 CDT(-0500)] <anastasiac> have a great weekend
[16:16:18 CDT(-0500)] <thealphanerd> and I didn't even get to say thank you… next time I guess
[18:36:47 CDT(-0500)] <travis_84> hey bosmon, did you get the link to github?
[18:37:05 CDT(-0500)] <Bosmon> travis_84 - I didn't
[18:37:08 CDT(-0500)] <Bosmon> Could you send it over again?
[18:37:49 CDT(-0500)] <Bosmon> Where did you put it before
[18:38:21 CDT(-0500)] <travis_84> https://github.com/travis-love/Eleuthera
[18:38:32 CDT(-0500)] <Bosmon> Excellent
[18:38:49 CDT(-0500)] <travis_84> I posted here before
[18:39:25 CDT(-0500)] <travis_84> anyway I have an issue with WAMI
[18:39:35 CDT(-0500)] <Bosmon> Ignore it
[18:39:52 CDT(-0500)] <Bosmon> Let me have a look what requirements we have to meet before the midterms....
[18:41:42 CDT(-0500)] <travis_84> ok... I am going to apologize now then, I feel I dropped the ball
[18:41:44 CDT(-0500)] <Bosmon> I'm keen to see you get to the point where you can grapple with some of the more fundamental design and UX issues before we run out of time.... separately from having a highly portable and multi-platform app in the medium term
[18:41:52 CDT(-0500)] <Bosmon> Since these final weeks will fly by very quickly!
[18:42:45 CDT(-0500)] <Bosmon> Ok... midterms in 3 weeks... final end in 2 months
[18:43:10 CDT(-0500)] <Bosmon> I think we should get together and try to make some detailed timeline for how we hope the rest of the work will be mapped out
[18:43:20 CDT(-0500)] <Bosmon> No, don't worry, I don't think we have dropped the ball
[18:44:07 CDT(-0500)] <Bosmon> But I think we do need to focus on things in greater detail now... the "hard core" of the project is really about how any app of this kind can work AT ALL
[18:44:25 CDT(-0500)] <Bosmon> Assuming there is a workable user idiom behind it, the portability profile can always be tidied up whenever, even after the project is over
[18:44:38 CDT(-0500)] <Bosmon> There are always plenty of people to do work of that kind
[18:45:00 CDT(-0500)] <Bosmon> But there isn't always access to people of your level of insight and flexible thinking
[18:45:38 CDT(-0500)] <travis_84> well, thanks
[18:45:48 CDT(-0500)] <Bosmon> Demonstrating ONE usable workflow on ONE platform will make this project a success
[18:47:27 CDT(-0500)] <Bosmon> This isn't a grubby "sweat the implementation details and rough spots" project...... this is a blue sky "how is this even possible at all!" project
[18:48:05 CDT(-0500)] <travis_84> Blockly is a real step forward
[18:48:28 CDT(-0500)] <Bosmon> Well.... it is really a step sideways
[18:48:33 CDT(-0500)] <Bosmon> It is not really as impressive as it appears
[18:48:41 CDT(-0500)] <Bosmon> Since despite its finish, it introduces no new interaction models
[18:49:28 CDT(-0500)] <Bosmon> It's basically MIT's "Scratch" ported into JavaScript
[18:49:35 CDT(-0500)] <Bosmon> And we didn't get out of bed for that kind of thing
[18:50:02 CDT(-0500)] <Bosmon> The core question is.... HOW can people build data and code, without using a primarily visual idiom?
[18:50:09 CDT(-0500)] <Bosmon> And Blockly doesn't do anything to address that question
[18:50:50 CDT(-0500)] <travis_84> true
[18:51:26 CDT(-0500)] <Bosmon> When I was young, I was also prone to this "Oh my God, they've solved it ALL" panic : P
[18:51:37 CDT(-0500)] <Bosmon> But actually, real progress is rare and unexpected....
[18:51:57 CDT(-0500)] <Bosmon> Most things that are made are just already existing things, translated into a new box
[18:52:05 CDT(-0500)] <Bosmon> And Blockly exactly fits that pattern....
[18:53:05 CDT(-0500)] <Bosmon> As an "implementation detail", it might nice to see how a sucessful design could integrate with their UI and codebase..... but that's no more urgent than getting the Flash fallback working
[18:54:34 CDT(-0500)] <Bosmon> I don't think there's even very much to be learned by looking at their interaction model, since everything that is there, you could have learned from Scratch
[18:55:07 CDT(-0500)] <travis_84> I'd rather see Opera get the audio stream working than solve the flash issue
[18:55:38 CDT(-0500)] <Bosmon> And the reason that Scratch isn't setting the world alight isn't that it's not written in JavaScript, it's that it isn't a usable idiom that anyone would freely choose for building content, given the alternatives
[18:55:47 CDT(-0500)] <Bosmon> Is there one browser that works better than the others?
[18:55:51 CDT(-0500)] <Bosmon> How about Firefox
[18:57:04 CDT(-0500)] <travis_84> Firefox, and IE have no working release
[18:57:12 CDT(-0500)] <Bosmon> Ok
[18:57:14 CDT(-0500)] <Bosmon> Chrome then
[18:58:05 CDT(-0500)] <travis_84> Chrome is buggy and weird, but still only video cam access in their nightly build
[18:58:50 CDT(-0500)] <travis_84> Opera is super easy and has a working release but only vid
[18:59:18 CDT(-0500)] <travis_84> they will likely release a working audio stream first
[18:59:21 CDT(-0500)] <Bosmon> Ok
[18:59:28 CDT(-0500)] <Bosmon> What could you actually do with a working audio stream?
[18:59:55 CDT(-0500)] <Bosmon> It seems to me that none of these alternatives are workable... and it might be best to fall back to the PhoneGap/Cordova model
[19:00:07 CDT(-0500)] <Bosmon> We can't afford to wait on the browser vendors, on this kind of timescale
[19:00:24 CDT(-0500)] <Bosmon> The project may easily end before they come up with something
[19:00:38 CDT(-0500)] <Bosmon> Chrome has currently spent 8 months resolving an API bug I reported.....
[19:01:34 CDT(-0500)] <Bosmon> I don't see that even if you were able to successfully issue getUserMedia for audio capture, it would take you very far towards what you need for a working interface?
[19:01:36 CDT(-0500)] <Bosmon> What is the plan there
[19:04:20 CDT(-0500)] <travis_84> Well it allows direct access to the microphone stream, as far as I see that can be dumped straight to any audio analyzer or speech recognition
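The getUserMedia route travis describes can be sketched as below. The constraints object is plain data; the call itself only exists in browsers (and in 2012 was vendor-prefixed, with audio support still landing, which is exactly the snag under discussion), so it is guarded here. The callback-style signature matches that era's API; `captureAudio` is a hypothetical wrapper:

```javascript
// Hedged sketch of requesting raw microphone access via getUserMedia.
var constraints = { audio: true, video: false };

function captureAudio(onStream, onError) {
    if (typeof navigator !== "undefined" && navigator.getUserMedia) {
        // Older callback-style API: onStream receives the live media
        // stream, which could then feed an analyzer or be uploaded to a
        // recognition server.
        navigator.getUserMedia(constraints, onStream, onError);
        return true;    // capture attempted
    }
    return false;       // no support: fall back to Flash/WAMI
}

var attempted = captureAudio(function () {}, function () {});
```

Note that, as the rest of the discussion brings out, getting the stream is only step one; it says nothing about which analyzer or recognition service consumes it.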
[19:04:45 CDT(-0500)] <Bosmon> Well sure - but WHICH audio analyzer or speech recognition?
[19:04:48 CDT(-0500)] <Bosmon> And how could you dump it there?
[19:11:20 CDT(-0500)] <travis_84> I guess that is a good question. PhoneGap/Cordova is just a way to access a device's mic and send the "WAV" to a "location" to be analyzed; I couldn't see if it offered anything beyond that other than methods to send and retrieve what was returned
[19:12:19 CDT(-0500)] <travis_84> I was thinking I would have to translate working speech software like sphinx 4
[19:13:46 CDT(-0500)] <travis_84> to a JSON type solution since most working recognitions are all off-device; even Siri
[19:13:56 CDT(-0500)] <Bosmon> Well, this is what I mean by "taking the shortest route, to getting SOME acceptable app on SOME platform"
[19:14:00 CDT(-0500)] <Bosmon> It doesn't matter which one it is
[19:14:09 CDT(-0500)] <Bosmon> But it does need to happen within, say, the next 2 weeks
[19:14:42 CDT(-0500)] <Bosmon> Our original plan was to make a "punch-through" to a native platform app using a small amount of native code
[19:14:47 CDT(-0500)] <Bosmon> And to then integrate that with Cordova
[19:14:55 CDT(-0500)] <Bosmon> Android seems like a plausible choice for this
[19:16:15 CDT(-0500)] <Bosmon> The key thing is to be able to make ANY end-to-end demonstration of the app.... it doesn't matter how shaky or unportable it is
[19:16:25 CDT(-0500)] <Bosmon> And then to be able to put it in front of real people, and see if it is actually usable at all
[19:18:10 CDT(-0500)] <Bosmon> I would guess that neither getUserMedia, nor a Flash-based solution are a plausible route to this
[19:18:26 CDT(-0500)] <Bosmon> Since as far as I can see, neither of these actually resolve the issue of how to invoke a working speech API?
[19:20:06 CDT(-0500)] <travis_84> right, I was hoping I could get working mic access on a desktop faster than it would take me to figure out how to set up a proper Android development platform and try and "quickly" learn how to work in that environment
[19:20:10 CDT(-0500)] <Bosmon> if we think this isn't workable in the next 2-3 weeks, we should abandon the whole speech angle entirely, and perhaps work an alternative model that relies on keyboard input and speech output only
[19:20:47 CDT(-0500)] <Bosmon> As far as I can see, even on the desktop, you don't have any alternative to a solution involving some amount of native code
[19:20:55 CDT(-0500)] <Bosmon> Which points to the Cordova angle even there.....
[19:20:56 CDT(-0500)] <travis_84> WAMI has actual demonstrations of speech learning games
[19:21:33 CDT(-0500)] <travis_84> so it can do it and there are speech APIs for it
[19:22:27 CDT(-0500)] <travis_84> I just need to get past this small initial snag to start analyzing the audio through JS
[19:22:30 CDT(-0500)] <Bosmon> Ok, I see
[19:22:39 CDT(-0500)] <Bosmon> So this isn't really the "fallback" this is the "only workable strategy" : P
[19:23:29 CDT(-0500)] <Bosmon> Assuming it actually works
[19:26:44 CDT(-0500)] <travis_84> By fallback I mean that to CREATE the stream it relies on flash; when getUserMedia works it will create it that way. All either of these do is create an audio object to work with.
[19:27:19 CDT(-0500)] <Bosmon> But it seems there's no alternative to the presence of Flash code to actually process the stream using WAMI
[19:27:34 CDT(-0500)] <travis_84> let me find the link to a youtube vid of WAMI working all on voice
[19:27:56 CDT(-0500)] <Bosmon> So it seems that going with this approach, may as well bite the bullet and make Flash the "mainstream" strategy
[19:28:08 CDT(-0500)] <Bosmon> Are there any advantages to using getUserMedia to create the stream instead?
[19:29:13 CDT(-0500)] <Bosmon> And - what is the snag you are seeing?
[19:30:01 CDT(-0500)] <travis_84> yes, no plugins would be needed to access the mic. it would be all browser native
[19:30:13 CDT(-0500)] <Bosmon> Well - it couldn't be!
[19:30:17 CDT(-0500)] <Bosmon> How could it analyse what was said
[19:30:45 CDT(-0500)] <Bosmon> I can't see any way that Flash wouldn't be obligate, on the desktop configuration of WAMI
[19:31:23 CDT(-0500)] <Bosmon> Oh, gah
[19:31:27 CDT(-0500)] <Bosmon> It sends it to the server!
[19:31:49 CDT(-0500)] <Bosmon> Sorry, I am so behindhand with reading and understanding the docs here....
[19:32:02 CDT(-0500)] <travis_84> we are only using flash to turn the mic into a JS manipulatable object, nothing more
[19:34:10 CDT(-0500)] <travis_84> like I said, as far as I have read, ALL speech recognition is sent to servers. Even Android... as far as I know.
[19:36:33 CDT(-0500)] <Bosmon> Ok - glad we are getting these details sorted out
[19:36:58 CDT(-0500)] <travis_84> http://www.youtube.com/watch?v=Ceee1wBfqec&feature=player_embedded
[19:37:08 CDT(-0500)] <Bosmon> So, let's see what the shortest route is to any kind of working platform
[19:37:28 CDT(-0500)] <Bosmon> What's the main snag with your WAMI-Flash system?
[19:37:31 CDT(-0500)] <travis_84> this uses wami to make a game to learn language
[19:39:09 CDT(-0500)] <travis_84> I downloaded the files from https://code.google.com/p/wami-recorder/
[19:40:13 CDT(-0500)] <travis_84> but the Flash permissions panel will not show up as it does in their demonstration
[19:40:31 CDT(-0500)] <Bosmon> Ah, interesting
[19:40:36 CDT(-0500)] <Bosmon> How are you hosting the files?
[19:40:45 CDT(-0500)] <Bosmon> It sounds like there is an issue that this must be done with some kind of genuine server
[19:42:25 CDT(-0500)] <travis_84> side note, we do have the opportunity to switch to eye-tracking since I have working access to the cam already... and I know there is face tracking in JS already...
[19:42:38 CDT(-0500)] <Bosmon> Well, sure
[19:42:45 CDT(-0500)] <Bosmon> But let's try to work with 1 technology at a time : P
[19:44:28 CDT(-0500)] <travis_84> yeah, I was just thinking if this speech part is not attainable right now
[19:44:39 CDT(-0500)] <travis_84> the pivot would be easy
[19:45:13 CDT(-0500)] <Bosmon> My worry with eye-tracking is that it might end up just being a proxy for some kind of pointing device
[19:45:31 CDT(-0500)] <Bosmon> Speech input, or the keyboard, would more thoroughly work out the "non-visual" possibilities for the idiom
[19:46:02 CDT(-0500)] <travis_84> the Wami 2.0 does say it needs a server, but the google code site doesn't say that
[19:46:34 CDT(-0500)] <Bosmon> The google code site does have this section: "If you want to collect audio from the browser, there is no getting around the need to host your own server."
[19:46:57 CDT(-0500)] <Bosmon> How have you been hosting the files so far, that you got from the code site?
[19:49:01 CDT(-0500)] <travis_84> ahh... I didn't see that. I have been hosting locally. But the server is needed only for the audio not the flash permissions right?
[19:49:21 CDT(-0500)] <Bosmon> I have a feeling that the flash permissions may also be related to the hosting
[19:49:31 CDT(-0500)] <Bosmon> Flash is endlessly awkward that way
[19:49:44 CDT(-0500)] <Bosmon> The permissions, for example, end up being encoded as the permissions for SOME DOMAIN
[19:49:47 CDT(-0500)] <travis_84> grr
[19:49:52 CDT(-0500)] <Bosmon> And so if there is no domain, it may well not show any permissions
[19:50:10 CDT(-0500)] <Bosmon> Have you been using Apache on localhost, or what?
[19:51:31 CDT(-0500)] <travis_84> hmm I could try a WAMP test and see if I can spoof it
[19:51:44 CDT(-0500)] <Bosmon> Yes
[19:51:47 CDT(-0500)] <Bosmon> that's an easy option
[19:52:04 CDT(-0500)] <Bosmon> Keep it on localhost, and then just make an entry in etc/hosts
[19:52:43 CDT(-0500)] <Bosmon> Or windows equivalent thereof, I forget which your favorite platform is
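The etc/hosts trick Bosmon suggests could look like this (the hostname is invented; any non-localhost name should do):

```
# /etc/hosts  (C:\Windows\System32\drivers\etc\hosts on Windows)
# "wami.dev" is a made-up name for this sketch.
127.0.0.1   wami.dev
```

Serving the files from Apache and browsing to http://wami.dev/ instead of http://localhost/ gives Flash a real-looking domain to key its permissions against.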
[19:56:06 CDT(-0500)] <travis_84> I'll have to try that Sunday. I swear I saw a server-free flash option. I didn't want to be forced to install my Flash Pro CS5...
[19:56:40 CDT(-0500)] <Bosmon> I don't think you should need that
[19:57:00 CDT(-0500)] <Bosmon> Isn't just sticking it on some static page in an Apache sufficient?
[19:57:42 CDT(-0500)] <Bosmon> At least just to invoke it....
[19:58:08 CDT(-0500)] <Bosmon> OK well, it looks like you need an active server to receive the media stream.... pretty bizarre
[19:58:31 CDT(-0500)] <Bosmon> https://code.google.com/p/wami-recorder/source/browse/example/client/recorder.js
[19:58:36 CDT(-0500)] <Bosmon> How about this code?
[19:58:42 CDT(-0500)] <Bosmon> It appears to require nothing more than SWFObject
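A sketch of what that bootstrap might look like, assuming the `Wami` global and setup/startRecording calls shown in the example client; the SWF path, container id, and session parameter here are all invented. The `swfobject` check keeps the pure helper usable outside a browser.

```javascript
// Build the URL the SWF will POST audio to. "session" is a hypothetical
// query parameter, kept as a pure helper so it is easy to test.
function recordUrlFor(base, sessionId) {
  return base + "?session=" + encodeURIComponent(sessionId);
}

if (typeof swfobject !== "undefined") {
  // Embed the WAMI recorder SWF into a placeholder <div id="wami">,
  // then drive it from JS once the embed callback fires.
  swfobject.embedSWF(
    "Wami.swf", "wami",                 // SWF path and container div id
    "1", "1", "10.1.0",                 // tiny size; requires Flash >= 10.1
    null, {}, { allowScriptAccess: "always" }, {},
    function () {
      Wami.setup({ id: "wami" });       // Wami global is injected by the SWF
      Wami.startRecording(recordUrlFor("/record", "demo"));
    }
  );
}
```

Per the earlier permissions discussion, the page hosting this would still need to be served from a genuine-looking domain for the Flash settings panel to appear.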
[19:59:38 CDT(-0500)] <travis_84> http://www.jordansthings.com/blog/?p=5
[20:00:21 CDT(-0500)] <Bosmon> Seems to work for me
[20:00:52 CDT(-0500)] <Bosmon> But it seems that this recorder.js code is capable of relaying the Flash stream to WAMI without an extra trip to the server?
[20:00:53 CDT(-0500)] <travis_84> mic to MP3, but he says it can be changed to just stream it all locally
[20:01:05 CDT(-0500)] <Bosmon> Yes, it probably can
[20:01:15 CDT(-0500)] <travis_84> I just thought wami would be easier
[20:01:17 CDT(-0500)] <Bosmon> But I would guess that the permissions issue would still require "locally" to be via a page with a genuine domain
[20:02:32 CDT(-0500)] <Bosmon> Try out the recorder.js with the spoofing option on Sunday, and let me know what happens
[20:04:22 CDT(-0500)] <travis_84> ok will do... I need to eat at some point today lol
[20:04:39 CDT(-0500)] <travis_84> been a long day