Searching for information: If I were Neeva

Almost from its start, Google Search has been regarded as best-in-class, delivering more relevant results to both general and specific queries than its competitors. Maintaining this relevance and dominance has required large and ongoing investments in the search algorithm and web indexers, investments recouped through the use of user data for targeted advertising. In this, Google is solving two simultaneous search problems: users searching for information they don’t know, and advertisers searching for users who may purchase their products.

Many users are disconcerted by this (~anonymized) use of their search history to display ads, motivating various competitors to Google Search. DuckDuckGo, for instance, also sells advertising space in search listings, but they don’t use user information in ad display. Lacking Google’s profitability and scale, their results are less relevant, but sufficient for many users. Startpage is similarly motivated, paying Google to display Google search results while showing generic, not user-specific, Google ads. In this they’re like a pure Google Search, one that isn’t trying to maximize profit or pay for bad ideas.

In this vein, I was interested to see the launch of Neeva as a ‘subscription search’ service, paying for Google Search results without ads or (Google) tracking. While some people will be attracted to this, it seems like a minor differentiator, as the primary user experience is still delivered entirely by Google, one subject to cancellation at Google’s whim.

If I were to start a search company, I’d start by recognizing that many individual searches are not successful (1) because either the user doesn’t know what they’re actually searching for (2) or how to phrase the question in a sufficiently specific way (3). I’d focus on the value of questions: searching for information is valuable because it is an attempt to take what I know and learn something else.

That is to say that while I’m sometimes scatterbrained (*multitasking), I rarely conduct multiple, unrelated searches at the same time. Successive searches, observation (1), are an indication that the prior searches failed to produce a sufficient answer, yet Google and every other search engine I’ve seen neglects this temporal aspect, continuing to show results you’ve already rejected. Much of this is due to the too-simple interface of text box + contextualized hyperlinks, which prevents search from attempting to group results by their textual similarity, vintage, or domain. It’s nice if the answer can be in the first three results, but that is often not the case, and search should help us better understand how various results compare to each other. This naturally leads to a longer interaction with the search results — the thing Google pays to serve — as the user is encouraged to see how the results differ and, through a click or two, to delve into the differences before leaving the search engine. And here’s the critical point: if the search engine can show the user the point at which it cannot determine which of two groups of results is more responsive to the user’s query, that is precisely the limitation of the search engine and the kind of information it needs to learn to serve better results. Keeping users on the search page for that initial screening allows the collection of data to make search better. Not to sell ads, but to make search better!
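To make that concrete, here is a minimal sketch of what grouping results by textual similarity could look like. The snippets, the bag-of-words comparison, and the threshold are all my own illustration; a real engine would use far richer features.

```python
# A minimal sketch of grouping search results by textual similarity.
# The snippets and threshold are illustrative, not from any real engine.

def tokens(text):
    """Lowercased word set for a crude bag-of-words comparison."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def group_results(snippets, threshold=0.3):
    """Greedily assign each snippet to the first group it resembles."""
    groups = []  # each group is a list of snippets
    for snippet in snippets:
        for group in groups:
            if jaccard(tokens(snippet), tokens(group[0])) >= threshold:
                group.append(snippet)
                break
        else:
            groups.append([snippet])
    return groups

results = [
    "How to freeze a pane in Excel 2016",
    "Freeze panes in Excel to lock rows and columns",
    "Top 10 Excel tips you won't believe",
    "Excel VBA: iterate over worksheet rows",
]
for i, group in enumerate(group_results(results), 1):
    print(f"Group {i}:")
    for snippet in group:
        print("  ", snippet)
```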

Building off (1), (2) recognizes that many searches have as their target a vague notion, something that may be only half remembered or a conjecture the user is looking to verify or disambiguate. Here, the convention of one query = one set of results is limiting, as multiple queries might be more helpful in circling in on what the user is actually asking. Google attempts this with its ‘other users searched for’ suggestions, but these suggestions are almost always entirely different questions. When the user hasn’t made a sufficiently specific query, the thing to do is to ask them for more context, not suggest a different question. In recognizing when the results are insufficiently specific and facilitating this narrowing, the search engine is again able to gather more information to more directly answer the user’s question, again making search better.

In the other direction, (3), some searches are nearly impossible. Searching for anything to do with Microsoft Excel, for instance, invariably returns pages of low-value clutter. Google Search provides no ability to filter out SEO clickbait nor any facility to distinguish between the needs of beginning and advanced users. The problem isn’t in making the query exact but in specifying the kinds of resources to exclude in a way the search engine understands. Now, Google and other engines have long offered boolean keywords and the ability to negate (-) a search term, but because of the interface it can be challenging to verify that the query has been parsed correctly. Just as rephrasing a question before answering it communicates understanding of the question, search engines should attempt to communicate to the user that their pages of results are not just wasting the user’s time.
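As a toy illustration of that kind of echo: the -term and quoted-phrase syntax below mirrors the common conventions, but the parser itself is just my sketch of how an engine could show the user how its query was understood.

```python
import re

# Toy parser that echoes how a query with quotes and negations was understood.
# The -term / "phrase" syntax mirrors common conventions; the rest is illustrative.

def parse_query(query):
    required, excluded, phrases = [], [], []
    # Pull out quoted phrases first so their words aren't split apart.
    phrases = re.findall(r'"([^"]+)"', query)
    remainder = re.sub(r'"[^"]+"', " ", query)
    for term in remainder.split():
        if term.startswith("-") and len(term) > 1:
            excluded.append(term[1:])
        else:
            required.append(term)
    return required, excluded, phrases

def echo(query):
    required, excluded, phrases = parse_query(query)
    print(f'Searching for: {", ".join(required) or "(nothing)"}')
    if phrases:
        print(f'Exact phrases: {"; ".join(phrases)}')
    if excluded:
        print(f'Excluding pages mentioning: {", ".join(excluded)}')

echo('excel "freeze panes" -video -forum')
```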

So if I were Neeva (or DuckDuckGo, since they’ve written their own search engine), I’d attempt to make search a better experience by providing better results and context than Google does. I’d classify users according to their search history/sophistication and prefer results that other users in their class have found helpful, while at the same time making clear that the results are tailored based on their search history and offering a way to remove the class restriction. Search that uses my and other users’ search history to become better, in a transparent way: that’s what I’d subscribe to.

If I Were: Arm&Hammer

‘Fresh box for baking!’
100 tsp/box

…I’d sell baking soda in 1-teaspoon packages. They already recognize the need (‘Fresh box for baking!’) but choose to sell 100 tsp per container for $0.94. Other parts of the box tell me to change boxes every month, so, according to Arm&Hammer, 95% (5 tsp = 5 batches of cookies/mo) of the product they sell to customers is intended to be wasted. I’ll gladly pay $1/5tsp/mo for a less self-contradicting package!
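For the curious, that waste figure falls straight out of the numbers on the box; the 5 tsp/month of actual baking is my own estimate.

```python
# The box's own numbers: 100 tsp per box, with a recommended monthly replacement.
# The ~5 tsp actually used for baking each month is my estimate, not Arm&Hammer's.
box_tsp = 100
used_tsp_per_month = 5

wasted = (box_tsp - used_tsp_per_month) / box_tsp
print(f"Fraction of each 'fresh box for baking' that goes unused: {wasted:.0%}")
```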

If I Were: Smuckers

https://images-na.ssl-images-amazon.com/images/I/91ibjd6JSLL._SY606_.jpg
Smucker’s Grape Jelly

Seriously. Sell jelly in squeeze tubes. Just be done with it already.  OK, get fancy and sell PB&J in a dual-tube.  Tube packaging efficiency is so much better, and, if you do the nozzle right, you don’t need to refrigerate because no air will enter (see Franzia/bagged wine).  TetraPak and others have this figured out; why are we still discussing this?

If I Were: Macy’s/JCPenney

The first way to view the ongoing struggles of department stores like Macy’s, JCPenney, Younkers, and others is by likening them to their internet adversaries. From this vantage we see their hip, sprawling, prime-retail stores as grossly inefficient warehouses, bleeding margin on

  • customer acquisition: advertising to get people in the door
  • labor: presentation, refolding, cashiers, and cleaning
  • storefront: cost/sqft to be in the (strip)mall

These costs recur, so that for any item we can imagine (and if we had the data could calculate) the daily cost to sell that item, or similarly view how every additional day on the shelf further decreases the potential profit on that item.  Now, the whole idea of department stores is that you can use profits in one seasonal department to offset losses in others, so this is simplistic, but it communicates the basic challenge of retail.

One way to move more product and offset the cost of physical retail is to also sell online, using your brand loyalty to compete directly with the online-only retailers. In some cases this loyalty can sustain higher prices, but in many cases it seems that department stores must price-match online-only retailers product-for-product. And since the department stores’ cost of inventory is higher than the online-only retailers’, they must accept reduced profits. (Many were able to make up this loss through their close relationships with leading brands, potentially giving them access to wider product variety, better product targeting to regional stores, and likely better terms.)

But people still shop for clothes in person, suggesting that the stores provide some value that they’re not capturing today. So the second view on department store struggles is their historic value proposition of convenience, selection, and reasonable cost/frequent sales. With in-store items costing slightly more than online, we’re left with an immediacy that next-day shipping can’t quite match and a product selection that, while decreased in breadth from online, can be classified and filtered to a much greater extent by personal criteria.

Given this, I wonder how a showcasing model would change these underlying businesses. Instead of selling customers in-store items, the retailer should prefer, say, two-day shipping from the regional distribution center over the depletion of in-store inventory. I think the retailer has a choice between selling an in-store unit with 50% of the original margin remaining versus inventory from a more cost-efficient warehouse where, say, 95% of the original margin remains. If ship-to-home is the default, advertised-sale price, the retailer could still sell in-store items at a slight $5/5% markup, as a soft preference for selling items from the retailer’s most efficient units. Moreover, by shifting the retailer’s distribution strategy from inventory-on-shelves to more efficient warehouses, they might better compete with online retailers by aping their efficiencies: in no world does it make sense to expose five identical units of every single item to customers; this is just an artifact of the era when the store was the warehouse.
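A toy calculation makes the nudge visible. The 50%/95% retained-margin figures and the $5 markup come from the paragraph above; the item price and original margin are hypothetical.

```python
# Rough sketch of the in-store vs ship-from-warehouse choice described above.
# The 50%/95% retained-margin figures and $5 markup are from the text;
# the price and original margin are hypothetical.

price = 60.00            # advertised (ship-to-home) price of a hypothetical item
original_margin = 20.00  # hypothetical margin before fulfillment costs

in_store_margin = 0.50 * original_margin   # shelf time, labor, and floor space eat half
warehouse_margin = 0.95 * original_margin  # efficient regional distribution
in_store_markup = 5.00                     # charge a bit more to walk out with it today

print(f"Ship from warehouse at ${price:.2f}: keep ${warehouse_margin:.2f}")
print(f"Sell off the shelf at ${price + in_store_markup:.2f}: keep ${in_store_margin + in_store_markup:.2f}")
```

Even with the markup, the warehouse sale keeps more margin, which is exactly the soft preference the pricing is meant to express.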

So, if I were Macy’s/JCPenney, I’d seal the deal in person and deliver in two days.

debator//debater

A machine to help us communicate…

The last time I had free nights and weekends, 2011, our country was growing predictably chipper over the coming election, so I spent some of that free time considering debator//debater.

The idea was simple: build a machine to help us communicate. I wondered whether personal relationships could be leveraged to draw friends into better discussions on political things, whether that discussion might benefit from ready access to conversation aids, and whether the participants could be encouraged beyond the cable news/talk radio and red/blue strawmen. Fundamentally, I thought (and think) that people desire increased good for themselves, their family, friends, and broader culture, that they would find it disconcerting to disagree with people they already trust and interact with, and that these relationships would have the best chance of drawing people together, into some greater understanding of and respect for our mutual interests and individual concerns.

Now, this idea struck some friends as obviously bad; that once you start down the partisan debate path, forever will it taint the relationship. I think that’s both wrong and unfortunate.  It’s unfortunate because it is the willful maintenance of a veneer, a retreat from true, honest conversation and, actually, a diminution of the friendship.  That’s Facebook, that’s the echo chamber we have today.  Instead, the (benevolent!) platform could strategically choose topics for conversation and then support the users in the formulation and conduct of a discussion.

So, in the case that Uncle Jimbo had posted/tweeted articles and messages critical of the latest IPCC report on climate change, and nephew Jimmy the converse, our platform would begin with their relationship and detect that the article content and audience graph between these two relatives only minimally overlapped. From there it would prompt each person to review the other’s endorsements and encourage them to pose questions to each other. Since neither participant is an expert in climate change, the platform would use the same article graph to find and suggest bridge articles that appear to span the difference in perspective and might serve as a basis for shared knowledge and increased accord. By encouraging each person to question the other (filtering, at least crudely, against attacks) within the context of a shared set of information, I hoped that they would come to a better understanding of each other’s perspective, their own, their culture, and their world. And this would be valuable to them, to our society, and to the many interested parties.
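As a very rough sketch of the mechanics (all names, articles, and the audience graph below are invented), the platform could measure how little the two endorsement sets overlap and then look for articles whose readership spans both camps:

```python
# A minimal sketch of the "bridge article" idea: measure how little two people's
# endorsed sources overlap, then surface articles whose audiences span both camps.
# All names and data are hypothetical.

jimbo_articles = {"skeptic-blog-1", "industry-report", "op-ed-a"}
jimmy_articles = {"ipcc-summary", "science-news-b", "op-ed-c"}

# Hypothetical audience graph: article -> set of readers who shared/endorsed it.
audiences = {
    "skeptic-blog-1": {"jimbo", "al", "bea"},
    "industry-report": {"jimbo", "cal"},
    "op-ed-a": {"jimbo", "bea"},
    "ipcc-summary": {"jimmy", "dee", "eli"},
    "science-news-b": {"jimmy", "dee"},
    "op-ed-c": {"jimmy", "eli"},
    "explainer-x": {"al", "dee", "cal", "eli"},  # read in both camps' neighborhoods
    "podcast-y": {"bea", "eli"},
}

def overlap(a, b):
    """Jaccard overlap of two sets of endorsed articles."""
    return len(a & b) / len(a | b)

print(f"Endorsement overlap: {overlap(jimbo_articles, jimmy_articles):.2f}")

camp_a = set().union(*(audiences[a] for a in jimbo_articles))
camp_b = set().union(*(audiences[a] for a in jimmy_articles))

# Bridge articles: not endorsed by either person, but read in both camps.
bridges = [
    art for art, readers in audiences.items()
    if art not in jimbo_articles | jimmy_articles
    and readers & camp_a and readers & camp_b
]
print("Suggested bridge articles:", bridges)
```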

I believed this interaction was enabled by reasoning over social platforms and by simply encouraging people to seek explanations for their positions and, in citing them, to defend their sources and arguments against those mustered charitably by their friend. So, the platform would be a quest to discover truth, at first between two friends and likely reaching only a trivial depth, but it had the potential to stoke something other than partisan, tribal cynicism, of which we have far too much.

I believed this system was technologically possible five years ago (2011); that is even more the case today, and this is slowly being realized in a predictably breathless manner:

Those all referred to Facebook, and indeed their seat high atop Mt. Data enables them to rain benefits and harms on those below according to, essentially, their Ferengi whim.

While there are rational fears about the power of big data (with the right analysis, Facebook can create knowledge, and hence powers, that no other entity save the NSA can approach), it need not be exercised so insidiously. I very much believe technological advances are tools that we can choose to apply for our benefit, so the contrast between what Facebook is doing and what I wanted to do comes down to the user’s choice. Facebook wants to drive greater engagement to expose its users to more ads, to encourage consumerism. The problem is that Facebook is not helping people; it seeks to capitalize on their preferences, fears, and weaknesses, rather than aid their discovery, growth, and participation in the world. It and many other platforms today provide a useful service, but they could do so much more.

Solar Trains < Solar Railways

When I first saw the headline “India’s first solar-powered train makes its debut,” I envisioned something that actually grappled with their substantial energy requirements and somehow leveraged their uniquely distributed infrastructure.  But, instead, the described pilot project puts solar panels on passenger cabin roofs for climate control.  So, small potatoes.

But recalling 2014’s Solar (freakin’) Roadways (mostly a bad idea), the better idea is to create solar railways, like:

Simply take the solar panels from the moving train and install them between the tracks.

I don’t want to write a long post, so some bullets in favor of this idea…

  • generated power isn’t moving, so it’s easier to feed it efficiently into the railway/municipal grid (the power density of liquid fuels is largely unrivaled by any form of storage, so I’d bet it is better, on the whole, to optimize trains for the efficient use of energy than to dictate its source; let the electricity go wherever it can best be consumed)
  • trains aren’t made heavier by the panels, or substantially modified (though, given that most freight locomotives are diesel generators feeding electric traction, it would be cool if tracks had an electrified rail for mountain climbs and descents)
  • rights-of-way and site prep are minimal; the panels need only be designed to flex with, or be isolated from, the deformations caused by passing trains
  • panels are typically exposed to the sun along rural stretches
  • panels would be cleared of debris by the regular passing of trains and kept free of encroaching weeds/branches by the same
  • train-ground drag would be reduced by their smoother surface
  • a distributed source of power for many rural uses

and some limitations

  • panels are not angled to the sun (a Fresnel lens built into the glass protector could reduce losses with latitude)
  • total collection area is limited to very long, thin sections directly beneath or adjacent to the rail
  • these long, thin collection areas would require longer power transmission runs than a compact array of the same area (unless the rails themselves can be used, efficiently, as EM waveguides/antennae)
  • the installation is not secured, so theft/tampering is more of a concern than with other rail infrastructure

I’m sure I’ve missed some attributes, comment if interested. I think the transmission issue is the most limiting, though while briefly searching on rails-as-waveguides I saw this article about railway electrification…as it says, the idea is so obvious I’m a bit amazed it hasn’t already happened. So, maybe we can electrify rail corridors and install some generation in that developed but otherwise unused land; sure seems better than a few panels powering the AC.
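For scale, a back-of-envelope estimate suggests each kilometer of single track could host a non-trivial array. Every input below is an assumption on my part, not a design figure.

```python
# Back-of-envelope estimate of solar generation per kilometer of track.
# Every number here is an assumption for illustration, not a design figure.

panel_width_m = 1.3          # usable width between/adjacent to the rails
track_length_m = 1000        # one kilometer of single track
insolation_kwh_m2_day = 4.5  # flat-plate, mid-latitude average
panel_efficiency = 0.18
system_losses = 0.80         # soiling, inverters, wiring, non-ideal tilt

area_m2 = panel_width_m * track_length_m
daily_kwh = area_m2 * insolation_kwh_m2_day * panel_efficiency * system_losses

print(f"Collection area: {area_m2:.0f} m^2 per km of track")
print(f"Rough output: {daily_kwh:.0f} kWh/day per km (~{daily_kwh/24:.0f} kW average)")
```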

A Longer View On Academic Publishing Platforms And Innovation

TLDR: Patentese values superficial bets on the future of technology and society, to the detriment of technology and society.

I’ve been watching the relationship between researchers, publishing platforms, and IP evolve with some interest. To choose some examples:

What technology will the next Amazon/Google/Facebook/Uber deploy? Today’s unicorns consistently apply a novel assortment of known technologies to some large market beset by some inefficiency. In the absence of oracles, the next unicorns are guessable. If you can observe the pace of innovation in any particular field — say the number of publications per year — you quickly learn what technology sectors are interesting to academics and also receiving funding. If you see this rate of innovation increase, that signals that something new has occurred. And because researchers and funding agencies like to be fashionable, a quick semantic similarity analysis across the literature behind that increase will give some sense of what the excitement is about.
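The first half of that signal is almost trivially computable. A toy sketch with invented publication counts might look like the following, with the semantic-similarity pass then applied only to the flagged years:

```python
# A toy version of the signal described above: count publications per year in a field
# and flag years where year-over-year growth jumps. The counts are invented.

pubs_per_year = {
    2008: 120, 2009: 130, 2010: 138, 2011: 150,
    2012: 155, 2013: 210, 2014: 310, 2015: 470,
}

def growth_spikes(counts, factor=1.25):
    """Return years whose year-over-year growth exceeds `factor`."""
    years = sorted(counts)
    return [
        y for prev, y in zip(years, years[1:])
        if counts[y] / counts[prev] > factor
    ]

print("Years with unusual growth:", growth_spikes(pubs_per_year))
```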

Most researchers want to make ‘life’ better, and most people are happy to pay for a better life.  Since researchers are generally only excited by progress towards their discipline’s goals, and since those researchers inhabit the same reality as their eventual consumers, what excites researchers will probably, eventually, ideally, be valuable to society.  So, if you notice increased innovation in some sector and realize that, by the nature of that sector, growth will impact many, well that’s worth paying attention to.

Having some intuition, it’s time to place bets. The form of the bet would differ by institution: funding agencies could target some desired effect (say, Rep. Smith) and trolls could weight their acquisitions by maturity and potential scope (today’s trolls as amateurs). Such a learned-intuition technology is agnostic to the ends (as ever), but the actors differ greatly in their ability to amass the underlying database, and in the rewards for applying it.

Who is positioned to leverage these learned intuitions? Funding agencies are doubly disadvantaged: grant reporting is sporadic and does not approach the rigor of (generally privately-held) peer-reviewed journals, while the ends will always be subject to the shifting winds of bureaucratic debate. (Moves toward open access, data, and analyses would remedy both of these, though they may be stymied by lobbying.) Given the government’s manifest inability to develop and apply new technologies, pessimism is warranted. Certain other entities are advantaged, as this relatively cheap method can extract more value out of already-held repositories. I picture this as an hourglass, where fields of research may be analyzed (both semantically and in the author/referenced/viewer graph) to identify emerging trends. This is the broad, upper funnel. The observed trends may be entered into patents, where the patentese (the stilted language encountered in patents) can hide the absence of a reduced-to-practice, coherently-understood innovation. This is the hourglass’ neck, with one ideal being a single patent that draws inspiration from many observations and is thereby able to make broad claims across products (the hourglass’ expanding lower chamber). A form of this speculative patenting occurs in many university tech-transfer offices today, which grasp at any IP in a projected-to-be-sexy market, but it is greatly improved by the intuition wrung from a large database.

The problem, the social harm, is that the standard of proof differs between the literature, patents, and the market, and so do the rewards. Researchers, ‘the literature,’ broadly value interesting, well-posed, and thoroughly-explained experiments. New works are valued by their novelty, by the degree to which, and manner in which, they solve the associated problem…and not by their sweeping claims. Good work is not immediately and individually rewarded, but appreciated in aggregate through greater grant success and honorariums. The market has a related interest in things that work, that solve the consumer’s problem…and little patience for those that do not. It does not reward ideas but their execution, and there is a federal agency to restrict claims to reality (the FTC in its consumer protection role). Between the researcher and market lie patents; on one side patents draw inspiration from disparate developments and on the other they seek to claim parentage of broad swaths of future products. The reward (in the US) is typically a 20-year monopoly over all possible renditions of the claimed idea, far beyond that which was realized during the original application. Whereas both the literature and the market incent specificity, the patent system incents vagueness.

Combining the aggregation of the academic literature into large, machine-readable databases with big-data analyses can* yield patentable claims. While there are probably big-data ways of evaluating market potential for determining the risk of particular claims, I suspect researcher interest to be a good proxy. (At least in some domains, as great interest and excitement in the newest particle or complexity does not a market indicate.) I imagine the typical result to be patents like Myriad’s BRCA-1 test or the CRISPR patents: things that are close to the literature dressed up in patent language. The patent application is not going to be reviewed for actual utility (as the FDA does) nor is the patent examiner going to verify that the claims are possible, only that they are plausible given the state of knowledge. As the burdens and perverse incentives (see the paper) of the examiners are widely known, entities might craft patent applications whose background summary and prior art are not representative of the literature but tilted to their benefit. Again, it does not matter (to the applicant) whether the claimed innovation actually functions, but only that it appears plausible. The risk of discovering this (potential) impracticality is reduced by the patent thicket, where the number of granted patents is more important than their quality (courts are similarly burdened in testing the leveraged claims).

Without reforming the incentives, rewards, and norms of the (US) patent system, I fear that they will become an even larger vehicle for rent-seeking. Who’s the master of these databases, and really, the knowledge they contain? For, as I mentioned, the semantic analysis of the literature may be put to other, more socially-useful purposes. It is important to remember that the purpose of patents is “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;” the very advance of technology has rendered sub-optimal the current form of the patent system. I am not against patents, but I do ask that they be useful.

*an assertion I think true; whether this is possible today or tomorrow is debatable

LED Christmas Lights

For any given product, it is always interesting to see which aspects are improved and which languish.  Engadget’s recommendation of the best LED and incandescent Christmas lights highlighted this; the difference between the LED and incandescent strands is almost entirely restricted to the bulb (and associated electronic driver).

LED Christmas light (top) versus incandescent (bottom)

Looking carefully at the above picture, you can see that the incandescent bulb can be removed from its socket, while the continuous mold line (lying exactly between the socket’s two flat sides) suggests that the LED bulb cannot be. Assuming this, I’d wager that the LED bulb is first connected to the wires and the socket then molded around this connection. The socket length and design, then, serve no functional purpose beyond fulfilling the consumer’s expectation.

A simple machine to make LED Christmas lights. From left: a spool of multistrand, multiconductor wire passes between two vertical-alignment rollers, through a wire-puller (drive), and into the LED inserter. LEDs fall from the vertical hopper into a bit and are driven by the pneumatic cylinder into the wire, such that their leads pierce the wire and make electrical contact. Exiting the wire, the leads are bent against the wire in the same manner as a stapler curls the staple on the back of the paper. This prototype does not encase the LEDs, so after the press the light strand is wound around a spool for packaging.

LEDs can be inserted directly into the wire, then encapsulated in plastic/rubber for electrical isolation.

LED Christmas lights were just coming to the market during my senior year in high school.  My senior project focused on the attachment of the bulb to the wire, where I realized that the increase in bulb quality (incandescent to LED) and associated decrease in bulb failure lessened the need for consumer-replaceable bulbs.  So, I designed a light strand where the LEDs were directly inserted into the wire and also a machine to construct these strands.

Removing the socket results in a more compact light strand which should be cheaper to produce (less material and elimination of a dedicated electrical assembly) and less visually-intrusive because the ‘socket’ has been substantially reduced (Christmas light strands are green to blend in with the tree).  Electrical contact is maintained without soldering by the compliance of the wire, much as a nail driven into wood is retained by forces from the compressed fibers.

I’ve learned much in the ten years since this project, but this idea remains relevant and would be fun to revisit.  I built this project in the context of the Szmanda science scholarship, so my paper and presentation highlight the energy efficiency of LED lights against traditional incandescents:

Electron Beam FreeForm Fabrication — IN SPACE

In the last post I described the basics of Electron Beam FreeForm Fabrication (EBF3); here’s why I’m excited about it:

Let’s walk through this process:

Metallic Refuse

Life and research aboard the ISS requires a lot of supplies and results in a good amount of waste. This is the most expensive garbage in the world, and, due to the restricted lab and living space, includes completed experiments and spent supply ships along with the more obvious packaging, clothing, food, and other waste. Given the nature of space exploration, the components of this waste are known absolutely and are excellent candidates for in-orbit recycling. Used Progress and other supply ships, having arrived at the station, could likely be stripped of components and structures that are not required for their reentry garbage-truck function and recycled into new components and structures. Though accompanied by greater risk, the ISS (or some other manned or unmanned station) could also serve as a destination for end-of-life satellites, as the only place where their residual in-orbit material value may be captured.

Garbage, Meet Recycler

If you introduce a metal into an electric field sufficient to overcome its metallic bonds, those bonds will break, freeing electrically-charged ions from the donor. This plasmification is the basis for vacuum deposition, but what if the donor is not a pure metal but rather some alloy? What if the donor is something like the aluminized mylar found in space(-age) blankets?

The second part of this step is an external electromagnetic field, as commonly found in mass spectrometers.  If the plasma is accelerated by an electric field and then encounters a magnetic field, the ions will arc according to the strength of the field and their mass.  With the electric and magnetic fields coarsely tuned according to the known properties of the garbage, its component atoms can be sorted into atomically-pure stacks.
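The sorting physics is the standard mass-spectrometer relation r = mv/(qB). A small sketch, using illustrative field and potential values and assuming singly-ionized atoms, shows how different masses land in different stacks:

```python
# Sketch of how a magnetic field sorts ions by mass, as in a mass spectrometer.
# r = m*v/(q*B), with v set by an accelerating potential V: v = sqrt(2*q*V/m).
# The field strength and potential below are illustrative choices, not a design.

from math import sqrt

AMU = 1.66054e-27   # kg per atomic mass unit
Q = 1.60218e-19     # C, assuming singly-ionized atoms

def radius_m(mass_amu, V=2000.0, B=0.5):
    """Radius of the ion's arc for accelerating potential V (volts) and field B (tesla)."""
    m = mass_amu * AMU
    v = sqrt(2 * Q * V / m)
    return m * v / (Q * B)

for element, mass in [("Al-27", 27), ("Ti-48", 48), ("Fe-56", 56)]:
    print(f"{element}: arc radius ~{radius_m(mass) * 100:.1f} cm")
```

Heavier ions arc on larger radii, so coarsely tuned fields are enough to separate the major constituents of a known waste stream.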

Sorted Feedstock

These atomically-pure stacks are highly valuable, due to their purity and location in earth orbit, as long as there is a process by which they can be made into something new.

Feedstock, Meet Printer

The same combination of electric and magnetic fields used to recycle garbage can be used to 3D print new components and structures. By selectively introducing atomically-pure feedstock into the same electron beam used for plasmification and guiding the plasma via the same magnetic field, a part could be built layer upon layer. This is essentially EBF3, though instead of a translating build platform, the platform could be stationary and the beam scanned across the part by varying the magnetic fields. (Though for alloying, a translating stage or translating emitter might be required…)

3D Printed, Variable Alloy Components…In Space

3D printing metallic components in space would be a game changer; it would allow recycling of substantial fractions of today’s orbital garbage into new components that equal or rival their terrestrially-produced counterparts.  Further, the cycle described could also be applied to asteroidal and other in-space resources.  I don’t know what technology Deep Space Industries envisions…

…but I can’t see why EBF3 would not meet their needs.

Finally,

I’ve spent 500 words describing this concept, but it seems to be worth much more study. While the individual elements of the described cycle exist terrestrially (and mass spectrometry has been used on many robotic space missions), they have not been integrated into a single apparatus.

Many questions accompany this concept; I hope to explore some of these going forward (as posts, and perhaps more formally), and, more than that, answer why MadeInSpace is on the ISS rather than this…

Electron Beam FreeForm Fabrication (EBF3)

There are many cool things happening in 3D printing these days, but the technique I’m most excited about, electron beam freeform fabrication (EBF3), has received very little coverage.  So in this and following posts, I want to describe the basics of this technique and some of the cases where I think it is the ideal manufacturing technology.

Printing in plastic is easy. Heat some PLA or ABS to roughly 375-480 °F (190-250 °C) and squirt it out of a small nozzle while tracing the outlines of your part. Alternately, selectively shine a UV light source on some UV-cure epoxy and you have a stereolithography machine. These two techniques, finally free from patent protection, are responsible for virtually all of the media buzz in 3D printing.

While these technologies accomplish the basic aim of converting a CAD design into a dimensional prototype, few of these additively-produced prototypes can withstand loadings similar to those of a traditionally-machined part (even when machined from the same plastic, let alone metal versus printed plastic). Not every application needs this durability, but it is the greatest limitation of every 3D printer you’ve probably heard of.

Printing in metal is expensive; in contrast to the great variety of Kickstarted $300-3,000 consumer/prosumer printers, MatterFab made news this past summer with the announcement of a metal printer targeted at $100,000. This printer, and its million-dollar-plus competitors, uses a kilowatt-class laser to melt particles in a metal powder together, forming a solid part. Depending on the scan speed, laser intensity, and material addition rate, this method (referred to as laser-engineered net shaping – LENS – and metal laser sintering) can produce fully-dense parts with material properties similar to those of cast or annealed parts. Since melting the metallic powder depends on the relationship between the laser wavelength and intensity and the powder’s melting point and absorptivity, machine cost and material selection are closely related. Common configurations have difficulty producing aluminum, titanium-aluminide, tungsten, magnetic alloys, and others. These difficulties are easily explained by considering the reflectivity of some common metals versus common laser wavelengths:

Reflectivity of common metals and common laser wavelengths. HPLD stands for high-powered laser diode. Chart courtesy Kennedy, Byrne, and Collins, 2004.

Similar to LENS, Electron Beam Freeform Fabrication (EBF3) directly melts metallic materials to form a fully dense part, though using an electron beam rather than a laser. EBF3 commonly uses a stationary electron beam and a multi-degree-of-freedom positioning system to build parts layer by layer. As shown below, the electron beam is focused at a particular point, melting any co-located material. Introducing new material into this region – by a wire feeder – increases the volume of the melt pool. Indexing the positioning system causes the pool to move, leaving behind newly deposited material. Adding a second wire feeder enables in-pool alloying and the production of functional gradients (varying the alloy along the part). Most EBF3 systems operate inside a vacuum chamber, both to prevent the surrounding environment from attenuating the electron beam and to eliminate the prospect of part contamination.

Left: Schematic representation of electron beam freeform fabrication (EBF3), courtesy Taminger & Hafley, 2008. Right: An EBF3 machine at NASA Langley, courtesy of Bird & Hibberd, 2009.

Along with the prospect of metal-agnostic printing (or at least more nearly so than LENS), studies from an EBF3 research group at NASA Langley indicate that the resulting parts can be stronger than wrought and tempered alloys:

Comparison of EBF3-produced Al 2219 to Al sheet and plate from Taminger & Hafley, 2008. ‘Typical’ refers to conventionally-produced wrought and tempered sheet and plate properties. Of note, the ‘As-deposited’ specimen has greater strength than the wrought and a T62-tempered EBF3 deposit outperforms a conventional T62 alloy.

In addition to producing parts with commendable material strength, EBF3 is a fast process. Able to trade resolution for speed, EBF3 has been demonstrated at deposition rates of 178 to 594 cm³/hr (11-36 in³/hr) in Al 2219 and 434 cm³/hr (26.5 in³/hr) in Ti-6-4 [Taminger & Hafley, 2008]. As a point of comparison, a representative laser-based system deposits at 8 to 33 cm³/hr (0.5-2 in³/hr) [Taminger & Hafley, 2010]. The electron beam is also more efficient at delivering energy to the melt pool, at approximately 95%, than a laser process, which might see 10% efficiency due to losses in the laser, beam transmission losses, and the naturally high reflectivity of most metals [Taminger & Hafley, 2010].
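To put those rates in perspective, here is a quick calculation for a hypothetical one-liter aluminum part; the part volume is my example, while the rates are the upper ends quoted above.

```python
# Quick comparison of build times at the deposition rates quoted above.
# The part volume is a made-up example; the rates are from Taminger & Hafley.

part_volume_cm3 = 1000   # hypothetical 1-liter aluminum part

ebf3_rate = 594          # cm^3/hr, upper end quoted for Al 2219
lens_rate = 33           # cm^3/hr, upper end quoted for a laser-based system

print(f"EBF3: {part_volume_cm3 / ebf3_rate:.1f} hours")
print(f"LENS: {part_volume_cm3 / lens_rate:.1f} hours")
```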

According to Lori Garver (NASA Deputy Administrator through 2013), EBF3 is being used to fabricate titanium spars for the F-35 Joint Strike Fighter; some more mundane results are below:

Parts produced in Taminger & Hafley, 2008: a) a Ti-6-4 wind tunnel model, b) a square box of Al 2219, c) an Al 2219 airfoil, d) an Al 2219 mixer nozzle, e) an Al 2219 converging/diverging nozzle, f) a Ti-6-4 guy wire fitting, g) a Ti-6-4 inlet duct, and h) a Ti-6-4 truss node.

The significant disadvantage of EBF3 is poorer control of part surface quality than with plastic and LENS printers. EBF3 part resolution is essentially limited by the feed-wire diameter, but this diameter dependence has not been demonstrated in the literature. Given the commercial availability of LENS techniques, the majority of the community has focused on understanding EBF3 and its unique alloying ability. EBF3‘s selling point of printing with high-strength alloys places the focus on accurate alloy production; applications demanding these alloys are sufficiently advanced (and costly) to delay interest in higher resolution.

EBF3 also requires an evacuated build environment, on the order of 1×10⁻⁴ Torr, adding an appreciable degree of complexity to any terrestrial EBF3 system [Taminger & Hafley, 2008]. Davé’s original 1995 description mentions that the use of a high-energy electron beam (>500 keV) can eliminate the need for vacuum, though such a device would be accompanied by its own complexities in generating large potentials. The literature has apparently not yet considered this variation.

Producing spars for the F-35 is nice, but to me the killer application for EBF3 is not terrestrial, but in-space. In the next post I’ll lay out why I think EBF3 is the ideal in-space manufacturing technology.