Sunday, October 14, 2007

Are Good Security and Web 2.0 Incompatible?

There are far too many numbers in this TechNewsWorld story, which reports results of a survey on Web 2.0 threats conducted by Forrester Research for Secure Computing. The survey, released in conjunction with the introduction of the vendor’s Secure Web 2.0 Anti-Threat (SWAT) initiative, shows that IT folks are unaware, untrained and don’t have consistent policies for this dangerous and increasingly popular way to use the Internet. SWAT aims to raise awareness of the issues, offer tips and in other ways help companies protect themselves.

The story quickly devolves into a sea of percentages. The saving grace is that the big picture is aptly summed up by Ken Rutsky, Secure’s executive vice president of product marketing:


The report reveals a security blind spot. Some 90 percent of enterprise organizations are still deploying security measures designed for the last generation of attacks.

This Computerworld piece uses data — thankfully, more selectively — from what appears to be a different Forrester survey. The piece focuses on IT's initial reluctance, and now apparently grudging acceptance, of Web 2.0. As with wireless and other emerging technologies, IT ultimately must bend simply because the people it serves are already using the new approach.

The piece features several short and interesting vignettes on different companies' approaches and offers eight steps for Web 2.0 proponents to take to implement a secure and beneficial platform. They should create awareness; find supporters in the company; get IT on their side; and present a proposal to senior management. Web 2.0 fans also should work closely with business units; compile and distribute best practices; resist the urge to force adoption; and be patient. IT, for its part, is well advised to seek out and build alliances with Web 2.0 proponents who are wise enough to follow these steps.

In a related story, Computerworld reports on comments by Christian Christiansen, an IDC analyst, at a recent Kaspersky Lab conference on cybercrime. Christiansen identifies two overlapping threats to corporate security. The line between employees' online personal and business lives is increasingly porous. At the same time, employees don't follow their employers' security policies — probably because they don't know what they are. The bottom line is that all sorts of things people do at work and at home — including connecting untested devices and using possibly malevolent Web 2.0 sites — can compromise security.

Those seeking more specifics about the threats — the statement that "Web 2.0 is dangerous" is as nebulous as it is threatening — should consider video. vnunet.com reports that Chris Rouland, the CTO of IBM's Internet Security Systems (ISS), gave a presentation at the annual summit of the Georgia Tech Information Security Center in which he suggested that video may be the next big target.

More sophisticated Web 2.0 networks hide less fully developed applications and devices that are latent or active threats to security, according to this piece at eChannelLine. The writer, drawing on research from WatchGuard, maintains that placing servers running collaboration, VoIP and other advanced services in data centers heightens the risks. These applications are not as mature as older ones and are therefore more vulnerable to clever hackers. This, combined with the fact that the goal is to create more open and interactive networks, means there are more opportunities for hackers.

ZDNet Australia uses the New York State Attorney General's subpoena of Facebook, for failing to adequately protect young subscribers, as a jumping-off point for a look at consumer use of Web 2.0 applications. This is an important issue for IT security staffs because it is a given that employees will use consumer services for work purposes or, at least, on the same devices they use in their jobs.

These social sites often are free in exchange for permission to use tracking and data aggregation tools. The problem is a microcosm of Web 2.0 in general: What the site is trying to achieve involves actions or policies that are the exact opposite of good security practice.

Facebook released FBJS

Marcel Laverdet of Facebook blogged about the release of FBJS 1.0:

If you are already used to Javascript, you will find that most of the syntax and functionality that you have come to know and love (or hate) is available in FBJS. Additionally, we’ve created hooks into our higher-level AJAX and dialog implementations which allow you to easily create dynamic experiences while maintaining the look and feel of Facebook.

We hope that FBJS enables you to build deeply integrated Facebook Platform applications in new and interesting ways.

FBJS munges the JavaScript you provide, rewriting it as it tries to stop naughty things.
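To give a feel for what that munging looks like (an illustrative sketch only; the exact rewriting rules are Facebook's), FBJS namespaces your identifiers with an application-specific prefix so sandboxed code can't collide with, or reach into, the page's own JavaScript:

    // What an application developer writes:
    function greet() {
      var message = 'hello';
      return message;
    }

    // Roughly what the rewriter serves back; 12345 stands in for a
    // hypothetical application id used as the namespace prefix:
    function a12345_greet() {
      var a12345_message = 'hello';
      return a12345_message;
    }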

With respect to Ajax, they give you an object to work with that sits on top of XHR, and is proxied by Facebook servers:

FBJS supplies a very powerful AJAX object for developers. Facebook will proxy all AJAX requests and optionally run useful post-processing on the data returned, such as JSON or FBML parsing. To use it, just instantiate a new Ajax class. It supports the following properties:

ondone(data)
An event handler which fires when an AJAX call returns. Depending on .responseType, data will either be an object, a raw string, or an FBML string.
onerror
An event handler which fires when an error occurs during an AJAX call.
requireLogin
If you set this to true, the AJAX call will require the user to be logged into your application before the AJAX call will go through. The AJAX call will then be made with the regular fb_sig parameters containing the user's identity. If they refuse to log in, the AJAX call will fail.
responseType
This can be one of Ajax.RAW, Ajax.JSON, or Ajax.FBML.
Ajax.RAW
The response from your server will be returned to your callback in its original form.
Ajax.JSON
The response from your server will be parsed as a JSON object and returned to your callback in the form of an object. Properties of your JSON object which are prefixed with “fbml_” will be parsed as individual FBML strings and returned as FBML blocks. These blocks can be used on a DOM object with the setInnerFBML method.
Ajax.FBML
The response from your server will be parsed as FBML and returned as an FBML block. This block can be used on a DOM object with the setInnerFBML method.

And one method:


post(url, query)
Start an AJAX post. url must be a remote address, and query can be either a string or an object which will be automatically converted to a string.

Here's a minimal sketch showing most of the functionality of Ajax, pieced together from the properties and method documented above (the handler URL and element id below are hypothetical):
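    var ajax = new Ajax();
    ajax.responseType = Ajax.JSON;
    ajax.requireLogin = true; // call carries the fb_sig identity parameters

    ajax.ondone = function(data) {
      // With Ajax.JSON, data arrives as a parsed object; properties
      // prefixed with "fbml_" come back as FBML blocks.
      document.getElementById('result').setInnerFBML(data.fbml_message);
    };

    ajax.onerror = function() {
      // Fires on failure, including the user refusing to log in.
    };

    // post() takes a remote URL plus a string or object query.
    ajax.post('http://example.com/ajax_handler.php', {item_id: 42});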

It is interesting to see more and more platforms wanting to open up and give users more capabilities while keeping the balance with respect to security, privacy, and general abuse.

There are also rumors of new functionality to come, such as a data storage API.

Bill Gates on Web Apps

If you've read any tech news in the past 24 hours, you'll now be familiar with the meeting Bill Gates held with influential bloggers, ahead of next year's Mix conference. Aside from learning what's on Bill's Zune, we get to hear his views on the future of web apps, thanks to a question from Liz Gannes. She asked him which apps should live in the browser and which should not, one of the key questions in Ajax and one we have touched on in the past.

He replied that the distinction would come to be silly from a technical standpoint, but that the necessary movement toward web APIs does present challenges on the business side. “One of the things that’s actually held the industry back on this is, if you have an advertising business model, then you don’t want to expose your capabilities as a web service, because somebody would use that web service without plastering your ad up next to the thing.”

His solution wasn't very specific: "It's ideal if you get business models that don't force someone to say 'no, we won't give you that service unless you display something right there on that home page.'"

Then for the tease: “And, you know, [inside the browser and outside the browser are] moving towards each other, but there’s still a bit of a barrier there, and new technology, things we’re working on, really will change that.”

Making JavaScript Safe with Google Caja

Douglas Crockford continues to bang the drum for securing JavaScript in his latest post:

It is possible to make secure programming languages. Most language designers do not consider that possibility. JavaScript’s biggest weakness is that it is not secure. That puts JavaScript in very good company, but it puts web developers in an untenable position because they cannot build secure applications in an insecure language. JavaScript is currently going through a redesign that is again failing to consider the security of the language. The new language will be bigger and more complex, which will make it even harder to reason about its security. I hope that that redesign will be abandoned.

A more fruitful approach is to remove insecurity from the language. JavaScript is most easily improved by removing defective features. I am aware of two approaches that allow us to build secure applications by subsetting the insecure language.

The first approach is to use a verifier. That is how ADsafe works. A verifier statically analyzes a program, and certifies that the program does not use any of the unsafe features of the language. This does not guarantee that the program is safe, but it makes it possible to make programs that are safe. Any program can compromise its own security. The improvement here is that a program’s security is not compromised by the language it is written in.

The second approach is to use a transformer. A transformer verifies, but it also modifies the program, adding indirection and runtime checks. The advantage of transformers is that they allow the use of a larger subset of the language. For example, ADsafe does not allow the use of the this parameter. A transformer can allow this because it can inject code around it and its uses to ensure that it is never used unsafely. The benefit is that it is more likely that existing programs could run in a safe mode with little or no modification. I think that is a dubious benefit because programs that are not designed to be safe probably are not. The downside is that the final program will be bigger and slower, and debugging on the transformed program will be more difficult.

Both approaches work. But we still need to fix the browser.
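To make the transformer approach Crockford describes concrete, here is an illustrative sketch (an invented example, not ADsafe's or Caja's actual output) of the kind of indirection and runtime check a transformer might inject around a property read:

    // A hypothetical runtime helper the transformer injects:
    function safeRead(obj, name) {
      // Block property names that could be used to escape the sandbox.
      if (name === 'constructor' || name === '__proto__' ||
          name === 'caller' || name === 'prototype') {
        throw new Error('forbidden property: ' + name);
      }
      return obj[name];
    }

    // Original code:       var v = obj[key];
    // Transformed output:  var v = safeRead(obj, key);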

A new project, Google Caja, is trying to do source-to-source translation to secure things:


Using Caja, web apps can safely allow scripts in third party content.

The computer industry has only one significant success enabling documents to carry active content safely: scripts in web pages. Normal users regularly browse untrusted sites with Javascript turned on. Modulo browser bugs and phishing, they mostly remain safe. But even though web apps build on this success, they fail to provide its power. Web apps generally remove scripts from third party content, reducing content to passive data. Examples include webmail, groups, blogs, chat, docs and spreadsheets, wikis, and more.

Were scripts in an object-capability language, web apps could provide active content safely, simply, and flexibly. Surprisingly, this is possible within existing web standards. Caja represents our discovery that a subset of Javascript is an object-capability language.

FBJS is trying to do some of this too. Got some time on Friday to look around some code? Take a look at some Caja.

HP vs. Apple vs. RIM and Microsoft/Cisco Convergence: Battle for the Corporate Phone

I first started in tech with a PBX vendor, arguably the most advanced of its time. One of my positions there was as competitive analyst for phones, and I was able to use prototype devices that were vastly more advanced than anything anyone had ever seen.

Since then, I’ve been in major withdrawal, as these features and phones never made it to most desktops. Most folks are lucky if they can figure out how to do things like conference in a co-worker or transfer a call successfully.

This is a long way of saying the telephony industry doesn’t operate at “Internet speeds.” In fact, changes still seem to take decades — or at least did. That is about to change dramatically.

There are two types of convergence going on. The first is being largely driven by Microsoft, which is the only vendor possibly strong enough (and note I said possibly) to drive common standards across the telephony industry for user features. If it weren't for Cisco, I'd lay odds it would fail, but Cisco is also a game-changer, representing the greatest threat the legacy PBX vendors have faced since IBM entered, and then failed in, this industry. And, unlike IBM, Cisco isn't learning on the job and has what I believe to be the strongest enterprise VoIP solution in the segment.

The second is the convergence of cell and land-line phones. Northern Telecom tried and failed to do this more than a decade ago, but the technology wasn't ready, and Northern didn't have the breadth to make it work. With the surge in smartphones driven by RIM and Apple, coupled with the capability of HP, which just entered the segment with the strongest enterprise cell-phone line, the next step of converging PBX and cell services is closer than it has ever been.

Microsoft Background

This is the fourth time Microsoft has made a run at phones. In the early ’90s there was a joke phone created by Microsoft Europe that floated around for a while. A consumer phone followed, one that depended on Windows 95 for features and was probably the worst telephone I’d ever seen in my life. After NT came out, a number of small PBX vendors used that platform in an embedded-like form and created what were the most reliable Windows Servers of their time (something that surprised most of us).

This is only to show that Microsoft’s experience here tracks back over a decade into both devices and switches, and while the results have been mixed, the company has an established knowledge base.

It’s also interesting that Microsoft itself used the cheapest and most limited phone systems in the market for much of its existence, and that likely motivated the company to create a solution that it could use that wasn’t so incredibly out-of-date. Sometimes self-interest is the best motivator.

It should be noted that Microsoft is hedging its bets by bringing out its own converged product in the SMB market, where technology change could happen more quickly. This is because systems in that market, called key systems, are even more antiquated than the PBXes many enterprises use.

Microsoft and the First Linux Patent Suit: Conspiracy Theories Explored

The first patent suit against Linux is being creatively connected back to Microsoft through two employees that Acacia Technologies Group has hired away from Microsoft. What makes it unlikely that Microsoft is behind the action is that its targets are both Red Hat and Novell.

Microsoft has zero interest in taking Novell to court right now, and it is using the argument of indemnification to encourage others to create similar agreements. If the indemnification doesn't work – in other words, if you are going to get sued anyway – what's the point in signing an agreement with Microsoft?

The action is being brought by IP Innovation and Technology Licensing Corp., subsidiaries of Acacia Technologies Group, which specializes in enforcing intellectual property rights. Novell is evidently a client of Acacia, but there is no known connection to Microsoft other than the two recent hires. (Typically you don't take a client to court.)

Let’s first explore the conspiracy theory that the open source folks would like to believe; a second, more-likely scenario; and what I think is actually the case.

Building a Conspiracy Theory

Just because it doesn’t appear to make sense for someone to do something doesn’t mean they wouldn’t actually do it. Let’s now assume Microsoft was behind this. What would its goal be in going after both firms? A recent post by Mary Jo Foley at ZDNet got me thinking of this scenario.

If the indemnification holds, Microsoft would now, as an apparent third party, come to Novell's defense and, because Microsoft was on both sides, easily win on Novell's behalf. Meanwhile, Red Hat would run up massive legal fees and be pounded by a much more hostile and sustained action. Win or lose, the lesson would be clear: Licensing with Microsoft has solid benefits that can, after this is over, be more easily demonstrated.

It would be a brilliant strategy if Microsoft could execute it, but Microsoft leaks information like a sieve. It would undoubtedly get caught and the end result would be incredibly painful. In short, I view it as virtually impossible that such a strategy could get approval and the sequence of known events doesn’t map to this strategy at all. The first known action on this IP was against Apple, not any known open source company (IBM appears to have protection).

Hollywood Special Effects with Adobe Premiere Elements 3


Hollywood Special Effects with Adobe Premiere Elements 3 is a book that will help users get to the next level in video editing, a level that goes beyond simply splicing together clips and creating simple titles. In no time readers will be overlaying multiple tracks of video and adjusting transparency; creating picture-in-picture overlays; using keyframes and motion paths; setting and refining greenscreens and bluescreens; using color effects for emotional impact; and applying a whole range of other special effects to help them tell their story. What sets this book apart is the author's expertise in carefully showing readers how to execute each of these effects step by step in a clear and friendly writing style. With this book, budding filmmakers will be well on their way to becoming the next George Lucas! The accompanying DVD contains royalty-free music, sound effects, and video clips from Artbeats, Digital Juice, TwistedTracks, and the Footage Firm (among others).



Photoshop CS Killer Tips (Killer Tips)


Publisher: New Riders Press
Number Of Pages: 256
Publication Date: 2004-02-17
Sales Rank: 404319
ISBN / ASIN: 0735713561
EAN: 0752064713562
Binding: Paperback
Manufacturer: New Riders Press
Studio: New Riders Press


3D Computer Graphics

3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object (either inanimate or living), and a model is not technically a graphic until it is visually displayed. Thanks to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.


OVERVIEW

The process of creating 3D computer graphics can be sequentially divided into three basic phases: 3D modeling, which describes the process of forming the shape of an object; layout and animation, which describes the motion and placement of objects within a scene; and 3D rendering, which produces an image of an object.


MODELING

Modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those created on the computer by an artist or engineer using some kind of 3D modeling tool, and those scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation.
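As a toy illustration of procedural modeling (not tied to any particular package), a model's geometry can be generated by code rather than sculpted by hand:

    // Generate the vertices of a unit circle approximated by n segments,
    // a minimal example of a procedurally produced model.
    function makeCircle(n) {
      var vertices = [];
      for (var i = 0; i < n; i++) {
        var angle = (2 * Math.PI * i) / n;
        vertices.push({x: Math.cos(angle), y: Math.sin(angle), z: 0});
      }
      return vertices;
    }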


LAYOUT AND ANIMATION

Before an object is rendered, it must be placed (laid out) within a scene. The layout defines the spatial relationships between objects in a scene, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture, though many of these techniques are used in conjunction with each other. As with modeling, physical simulation is another way of specifying motion.
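As a minimal sketch of keyframing (the record layout here is hypothetical), positions between two keyframes are interpolated from the surrounding keys:

    // Linearly interpolate an object's position between two keyframes.
    // k0 and k1 are {time, x, y, z} records; t lies between their times.
    function interpolate(k0, k1, t) {
      var u = (t - k0.time) / (k1.time - k0.time);
      return {
        x: k0.x + u * (k1.x - k0.x),
        y: k0.y + u * (k1.y - k0.y),
        z: k0.z + u * (k1.z - k0.z)
      };
    }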


RENDERING

Rendering converts a model into an image either by simulating light transport to get photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. The process of altering the scene into a suitable form for rendering also involves 3D projection, which allows a three-dimensional image to be viewed in two dimensions.
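For instance, a simple perspective projection (a sketch; production renderers use full 4x4 matrix pipelines) maps a 3D point onto the image plane by dividing by depth:

    // Perspective-project a 3D point onto an image plane, assuming a
    // camera at the origin looking down -z with focal length f.
    function project(p, f) {
      return {
        x: (f * p.x) / -p.z,
        y: (f * p.y) / -p.z
      };
    }
    // project({x: 1, y: 2, z: -4}, 2) -> {x: 0.5, y: 1}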


DISTINCT FROM PHOTOREALISTIC 2D GRAPHICS

Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters.

The First Windows Vista Update Will Ship in 2008


Microsoft has revealed plans to release the first major update to its Windows Vista operating system early next year.

Many of Microsoft's large enterprise customers wait for the release of the first "service pack" (a bundle of fixes, updates, and enhancements) before rolling out a new Windows operating system.

Corporate customers are often reluctant to adopt new software until Microsoft has had time to resolve the problems experienced by everyday users, who typically buy new computers with the latest operating system already installed.

In a note posted on its website, Microsoft said it plans to begin testing Windows Vista SP1 with a small audience within a few weeks and expects to release the product to computer manufacturers in the first quarter of 2008.

Microsoft has said that this first service pack is not as significant as those of the past, because the company can now deliver patches and fixes through updates over the Internet.

Windows Vista SP1 should improve the operating system's security, stability, and performance, but it will not change its appearance or add any major features, the company added.

Microsoft also said it will push back the date for releasing Windows Server 2008 to hardware manufacturers.

In addition, it announced plans to release the third service pack for Windows XP, Vista's predecessor, in the coming weeks. It will be released to computer manufacturers in the first half of 2008.
