
Memory not released in SoftRefFilesCache
The implementation of the method putFile(final FileObject file) does not remove references from refReverseMap when adding a new file.

Current implementation:
{noformat}
synchronized (files) {
    files.put(file.getName(), ref);
    synchronized (refReverseMap) {
        refReverseMap.put(ref, key);
    }
}
{noformat}
Should become:
{noformat}
synchronized (files) {
    Reference old = files.put(file.getName(), ref);
    synchronized (refReverseMap) {
        refReverseMap.remove(old);
        refReverseMap.put(ref, key);
    }
}
{noformat}

HTTP only allows reading from one file at a time
VFS-164 modified HttpClientFactory to use a single connection per thread. The consequence of this is that only a single file can be accessed at a time. Several applications, such as Commons Configuration and XML includes, will read a second file while processing the first. In the case of Commons Configuration an IOException is thrown when the first file is closed, because it was already closed by ThreadLocalHttpConnectionManager.

Memory leaks in DIH
If delta-import is executed many times, heap utilization grows and finally an OutOfMemoryError occurs. When delta-import is executed with SqlEntityProcessor, instances of TemplateString are cached in VariableResolverImpl#TEMPLATE_STRING#cache. If the deltaQuery contains a variable like `last_index_time', the number of cached values that are never reused keeps increasing. Similarly, I guess that the cache grows when fetching each modified row by primary key. I think these queries should not be cached. I came up with two solutions: 1) do not cache the queries used to get modified rows, or 2) make VariableResolverImpl#TEMPLATE_STRING non-static, or clear the cache when delta-import finishes. I think that #1 is better for performance than #2, but #2 is the easier fix. I made a patch following approach #2, and then tested two Solr applications with the `-XX:+PrintClassHistogram' option. The result after importing several million rows from a MySQL database is as follows:
{noformat}
* original solr-1.3:
num  #instances  #bytes     class name
----------------------------------------------
...
6:   2983024     119320960  org.apache.solr.handler.dataimport.TemplateString
...

* patched solr-1.3:
num   #instances  #bytes  class name
----------------------------------------------
...
748:  3           120     org.apache.solr.handler.dataimport.TemplateString
...
{noformat}
Though it was version 1.3 that I tested, the current nightly version probably has the same problem.
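For the DIH report above, a minimal sketch of approach #2, an instance-level, clearable template cache. The class shape and names here are illustrative assumptions, not the actual Solr patch:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for VariableResolverImpl: the template cache is an
// instance field instead of a static one, so it can be cleared (or simply
// garbage-collected with the resolver) when a delta-import finishes.
public class VariableResolverSketch {
    private final Map<String, Object> templateCache = new HashMap<String, Object>();

    public Object resolveTemplate(String template) {
        Object parsed = templateCache.get(template);
        if (parsed == null) {
            parsed = parse(template);          // hypothetical parse step
            templateCache.put(template, parsed);
        }
        return parsed;
    }

    // Called when a delta-import finishes, so one-shot queries such as those
    // containing `last_index_time' cannot accumulate across imports.
    public void clearCache() {
        templateCache.clear();
    }

    private Object parse(String template) {
        return template; // placeholder for the real TemplateString parsing
    }
}
{code}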
Farm deployment of configurations using JNDI resource references does not work
Due to the transformation of the name of a configuration when it is distributed to a server farm (i.e. the _G_SLAVE suffix is appended), JNDI resource references cannot be resolved. Here is a user-provided stack trace which illustrates this problem:
{code}
java.lang.IllegalStateException: No configuration found for id: clustering/clustering/2.1/war
    at org.apache.geronimo.naming.reference.AbstractEntryFactory.getConfiguration(AbstractEntryFactory.java:110)
    at org.apache.geronimo.naming.reference.AbstractEntryFactory.resolveTargetName(AbstractEntryFactory.java:126)
    at org.apache.geronimo.naming.reference.AbstractEntryFactory.getGBean(AbstractEntryFactory.java:64)
    at org.apache.geronimo.naming.reference.ResourceReferenceFactory.buildEntry(ResourceReferenceFactory.java:44)
    at org.apache.geronimo.naming.reference.ResourceReferenceFactory.buildEntry(ResourceReferenceFactory.java:33)
    at org.apache.geronimo.naming.enc.EnterpriseNamingContext.createEnterpriseNamingContext(EnterpriseNamingContext.java:55)
    at org.apache.geronimo.tomcat.TomcatWebAppContext.<init>(TomcatWebAppContext.java:181)
{code}

Error in Feature Type creation
Error with the <if-regexp> field element: it calls this field "field-name" instead of "field".

PerformanceFilter info message incorrect
The excludes path warning message is incorrect and refers to the cachable-path:
{noformat}
String message = "cachable-path '" + path + "' ignored, "
    + "path must start or end with a wildcard character: *";
getConfigService().getLogService().warn(message);
{noformat}

Logging oversight in DB2Dictionary
There is a small oversight in DB2Dictionary - there is a check whether TRACE level is enabled, and then an exception is logged on ERROR level.

hints don't work for NamedNativeQuery
Hints defined for a named native query such as the one below don't get loaded; however, if it is changed to @NamedQuery, the hints get loaded.
{noformat}
@NamedNativeQuery(name="GetMemberInfo", query="CALL MEMBERSUB",
    hints= { @QueryHint(name="openjpa.hint.u2sub.numberofpara", value="2"),
             @QueryHint(name="openjpa.hint.u2sub.output.para", value="2") } )
{noformat}

DistributedTemplate is incorrectly setting some attributes on the statements
DistributedTemplate.java in openjpa-slice is not setting the setQueryTimeout() and setMaxRows() properties on its List of statements, but is instead setting the value via setMaxFieldSize().
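For the DistributedTemplate report above, a hedged sketch of the intended behaviour, assuming java.sql.Statement; the method shape is inferred only from the description, not the actual openjpa-slice code:
{code:java}
// Each JDBC property should be fanned out to every underlying statement
// through the matching setter, rather than through setMaxFieldSize().
public void setQueryTimeout(int seconds) throws java.sql.SQLException {
    for (java.sql.Statement s : statements) {
        s.setQueryTimeout(seconds);
    }
}

public void setMaxRows(int max) throws java.sql.SQLException {
    for (java.sql.Statement s : statements) {
        s.setMaxRows(max);
    }
}
{code}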
AbstractIoSession#getId() can cause collisions which lead to sessionClosed calls without sessionCreated
While investigating the root cause of the problem described at http://www.nabble.com/Counting-connections-in-MINA-tt22200162.html I found that it's most likely caused by collisions of the session ids. See org.apache.mina.core.service.IoServiceListenerSupport#fireSessionCreated:
{noformat}
// If already registered, ignore.
if (managedSessions.putIfAbsent(Long.valueOf(session.getId()), session) != null) {
    return;
}
{noformat}
If the newly created session has the same id as one already managed by the service, this method returns and sessionCreated/sessionOpened are not invoked in the IoHandler. It's not surprising, as the JavaDoc for getId() says: TODO this method implementation is totally wrong. It has to be rewritten. This problem is pretty serious, as under heavy load you will get several collisions per hour. Basically, you will get sessionClosed with an empty attributes map which does not correspond to any sessionOpened/sessionCreated. This leads to an inability to track anything via session attributes (consider a simple counter of connected IP addresses which gives you the number of users connected per IP). There is probably also some race condition, as in some cases a duplicate id doesn't cause such a problem. It must be investigated further. I've rewritten the getId() method to use an AtomicLong incremented in the constructor of the class, and it has fixed the problem for me (see the sketch below). I'm attaching the test case which reproduces the problem on my machine. You may need to run it several times to get the expected result, or adjust the number of created sessions and the packet size.

Sample output with the current getId() implementation:
{noformat}
[2009-03-01 01:06:43,070] START
[2009-03-01 01:06:44,503] DUPLICATE SESSION CLOSED WITHOUT CREATED/OPEN: (0x01028859: nio socket, server, null => null) / DUPLICATE ID: true
[2009-03-01 01:06:47,172] DUPLICATE SESSION CLOSED WITHOUT CREATED/OPEN: (0x012BDC2C: nio socket, server, null => null) / DUPLICATE ID: true
[2009-03-01 01:06:51,187] DUPLICATE SESSION CLOSED WITHOUT CREATED/OPEN: (0x012C0881: nio socket, server, null => null) / DUPLICATE ID: true
[2009-03-01 01:06:55,398] FINISH
{noformat}
Sample output with a getId() implementation using AtomicLong:
{noformat}
[2009-03-01 01:08:00,728] START
[2009-03-01 01:08:12,653] FINISH
{noformat}
I have no time for additional investigation for now; hope it helps. The proposed solution is to either rewrite the id generation or make the behavior consistent in case a duplicate id is generated. Duplicate ids should not stop the application from normal operation.
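A minimal sketch of the AtomicLong-based id scheme the reporter describes; the class and field names are illustrative, not the actual MINA patch:
{code:java}
import java.util.concurrent.atomic.AtomicLong;

public abstract class SessionIdSketch {
    // One shared counter: every constructed session receives a unique,
    // monotonically increasing id, so two live sessions can never collide
    // and managedSessions.putIfAbsent(...) can no longer drop a session.
    private static final AtomicLong ID_GENERATOR = new AtomicLong();

    private final long id = ID_GENERATOR.incrementAndGet();

    public final long getId() {
        return id;
    }
}
{code}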
Mina configuration is shared between endpoints
Establishing a MINA endpoint with a custom codec, and then establishing another without a custom codec, is a problem. The second endpoint inherits the first endpoint's codec. My recommendation is to not share configuration data between endpoint creations. I recommend that the MINA component instantiate a new configuration for each new endpoint instead of copying the previous configuration. As a workaround the user can specify "codec" as a URI parameter with no value.

Don't attempt to stop fragments in filemonitor
Now that we have updated to a newer Felix that partially supports bundle fragments, we have to honor the OSGi spec. Fragments can't be started/stopped.

Unknown buffered image type for PNG image
The image type of a PNG image is TYPE_CUSTOM. With this type it is not possible to create a new buffered image.

Avoid classloading issue if an already manipulated class is used by the API
The iPOJO API automatically manipulates non-manipulated classes and loads them in the internal iPOJO classloader. Already manipulated classes are also loaded with this classloader. This can be avoided, if the class is already manipulated, by using the "bundle/regular" classloader.

unavailable robots.txt kills fetch
I think there is another robots.txt-related problem which is not addressed by NUTCH-344, but also results in an aborted fetch. I am sure that in my last fetch all 17 fetcher threads died while they were waiting for a robots.txt file to be delivered by a not properly responding web server. I looked at the squid access log, which is used by all fetch threads. It ends with many HTTP 504 errors ("gateway timeout") caused by a certain robots.txt URL:
{noformat}
<....>
1166652253.332 899427 127.0.0.1 TCP_MISS/504 1450 GET http://gso.gbv.de/robots.txt - DIRECT/193.174.240.8 text/html
1166652343.350 899664 127.0.0.1 TCP_MISS/504 1450 GET http://gso.gbv.de/robots.txt - DIRECT/193.174.240.8 text/html
1166652353.560 899871 127.0.0.1 TCP_MISS/504 1450 GET http://gso.gbv.de/robots.txt - DIRECT/193.174.240.8 text/html
{noformat}
These entries mean that it takes 15 minutes before the request ends with a timeout. This can be calculated from the squid log: the first column is the request time (in UTC seconds), the second column is the duration of the request (in ms): 900000/1000/60 = 15 minutes. As far as I understand it, every time a fetch thread tries to get this robots.txt file, the thread busy-waits for the duration of the request (15 minutes). If this is right, then all 17 fetcher threads were caught in this trap at the time when fetching was aborted, as there are 17 requests in the squid log which did not time out before the message "aborting with 17 threads" was written to the nutch log file. Setting fetcher.max.crawl.delay cannot help here. I see 296 access attempts in total concerning this robots.txt URL in the squid log of this crawl, but fetcher.max.crawl.delay is set to 30.

Logging Pojo in Servicemix-bean should add capability to switch off checking for maximum message display size
The LoggingPojo in servicemix-bean logs all exchanges, whether or not the contents of an incoming exchange exceed the predefined maximum message display size "maxMsgDisplaySize". If they do, it trims the contents of the incoming exchange down to the predefined display size. But you may want to print the whole contents of every incoming exchange and avoid this message display size checking. With the change in the attached patch, setting "maxMsgDisplaySize" to any negative value such as "-1" effectively switches off the maximum display size check and allows the full contents to be printed.
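A minimal sketch of the negative-value switch described above; the field and method names are illustrative assumptions, not the actual servicemix-bean patch:
{code:java}
// Trim only when a non-negative limit is configured; any negative value
// (e.g. -1) disables the check and logs the full exchange contents.
private int maxMsgDisplaySize = 1500; // hypothetical default

private String formatForLog(String contents) {
    if (maxMsgDisplaySize >= 0 && contents.length() > maxMsgDisplaySize) {
        return contents.substring(0, maxMsgDisplaySize) + "...";
    }
    return contents;
}
{code}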
Neko 1.9.11 goes into a loop
Neko 1.9.11 goes into a loop on some documents, e.g.:
http://mediacet.com/Archive/FourYorkshiremen/bb/post.htm
http://cizel.co.kr/main.php
Reverting to 0.9.4 seems to fix the problem. The approach mentioned in https://issues.apache.org/jira/browse/NUTCH-696 could be a way to alleviate similar issues. PS: haven't had time to report this to the Neko people yet, will do at some stage.

JMS consumer to provider route fails without DEBUG logging enabled
When running a JMS consumer to JMS provider route and DEBUG logging has not been enabled, you run into:
{noformat}
16:07:17,057 | INFO | pool-flow.seda.servicemix-jms-thread-1 | PhaseInterceptorChain | oap.core.PhaseInterceptorChain 89 | Interceptor has thrown exception, unwinding now
org.apache.servicemix.soap.api.Fault: javax.xml.stream.XMLStreamException: Can not create StAX reader for the Source passed - neither reader, input stream nor system id was accessible; can not use other types of sources (like embedded SAX streams)
    at org.apache.servicemix.soap.util.stax.StaxUtil.createReader(StaxUtil.java:75)
    at org.apache.servicemix.soap.interceptors.xml.BodyOutInterceptor.handleMessage(BodyOutInterceptor.java:37)
    at org.apache.servicemix.soap.core.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:85)
    at org.apache.servicemix.soap.interceptors.xml.StaxOutInterceptor.handleMessage(StaxOutInterceptor.java:51)
    at org.apache.servicemix.soap.core.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:85)
    at org.apache.servicemix.jms.endpoints.DefaultProviderMarshaler.createMessage(DefaultProviderMarshaler.java:75)
    at org.apache.servicemix.jms.endpoints.JmsProviderEndpoint.processInOnlyInSession(JmsProviderEndpoint.java:546)
    at org.apache.servicemix.jms.endpoints.JmsProviderEndpoint$1.doInJms(JmsProviderEndpoint.java:516)
    at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:437)
    at org.apache.servicemix.jms.endpoints.JmsProviderEndpoint.processInOnly(JmsProviderEndpoint.java:527)
    at org.apache.servicemix.jms.endpoints.JmsProviderEndpoint.process(JmsProviderEndpoint.java:484)
    at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
    at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
    at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
    at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
    at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
    at org.apache.servicemix.jbi.nmr.flow.seda.SedaFlow.doRouting(SedaFlow.java:167)
    at org.apache.servicemix.jbi.nmr.flow.seda.SedaQueue$1.run(SedaQueue.java:134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
    at java.lang.Thread.run(Thread.java:595)
Caused by: javax.xml.stream.XMLStreamException: Can not create StAX reader for the Source passed - neither reader, input stream nor system id was accessible; can not use other types of sources (like embedded SAX streams)
    at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:768)
    at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:341)
    at org.apache.servicemix.soap.util.stax.StaxUtil.createReader(StaxUtil.java:73)
    ... 20 more
{noformat}

SMX4NMR-110: When a component is stopped and restarted, service assemblies are stopped, but not restarted

Wrong resolution of hostname and port
I noticed the following for one of the hosts in a cluster:
1. The machines.jsp page resolves the http address as just "http://hostname" (which doesn't work). It doesn't put the port number for the host. Even if I add the port number manually in the URI, the task tracker page does not come up.
2. All the tasks (both maps and reduces) which ran on the machine ran successfully. But task logs cannot be viewed, because the port number is not resolved (same problem as in (1)).
3. The reducers waiting for maps that ran on that machine fail with connection failed errors saying the hostname is 'null'.

Can't find bundle for base name org.apache.servicemix.kernel.gshell.wrapper.InstallCommand, locale fr_BE
When the help command is launched for the wrapper, the following error is returned:
{noformat}
smx@root:wrapper> help
NAME
install
DESCRIPTION
ERROR CommandLineExecutionFailed: org.apache.geronimo.gshell.command.CommandException: java.util.MissingResourceException: Can't find bundle for base name org.apache.servicemix.kernel.gshell.wrapper.InstallCommand, locale fr_BE
{noformat}

405 error when creating a plugin via admin console
1. Login to the admin console and go to the "application" -> "plugin" -> "create plugin" section.
2. Choose an item and click "export".
3. Click next to "save plugin data"; instead, an error page appears:
{noformat}
HTTP Status 405 - HTTP method POST is not supported by this URL
type Status report
message HTTP method POST is not supported by this URL
description The specified HTTP method is not allowed for the requested resource (HTTP method POST is not supported by this URL).
{noformat}

Enabling heartbeats causes segfault on failover
If a client with heartbeats enabled fails over, the heartbeat task on the old ConnectionImpl object can fire and attempt to invoke on the now-dangling pointer.
Unable to delete project and project is not able to update itself
We have a problem with the following project: http://vmbuild.apache.org/continuum/buildResult.action?buildId=139576&projectId=754
It doesn't get updated from svn correctly (it worked a while ago) - the pom.xml is missing although it is in svn. And deleting the project results in a database error. Either deleting this project or fixing the update problem would help us. Thanks.

Exception thrown in/from RecoveryManager.recover() should be caught and handled
{{RecoveryManager.recover()}} can throw an exception while recovering a job. Since the {{JobTracker}} calls {{RecoveryManager.recover()}} from {{offerService()}}, any failure in recovery will cause the {{JobTracker}} to crash. Ideally the {{RecoveryManager}} should log the failure encountered while recovering the job and continue.
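A minimal sketch of the suggested handling; the call-site shape and logger name are assumptions, not the actual Hadoop code:
{code:java}
// Inside the hypothetical offerService() call site: catch everything the
// recovery step throws, log it, and keep the JobTracker alive.
try {
    recoveryManager.recover();
} catch (Throwable t) {
    LOG.warn("Job recovery failed; continuing without recovered jobs", t);
}
{code}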
FileInstall can't handle autostart bundles that are part of a watched directory
Let's say you configure File Install to watch the directory /tmp/modules and there is a bundle called /modules/foo.jar which is specified as an autostart bundle. In Felix's config.properties, autostart bundles are specified using URLs (file:///modules/foo.jar in this case), which is internally used as the location of the bundle as well. On the other hand, File Install uses the "absolute path" (/modules/foo.jar in this case) as the location. As a result, when File Install tries to install the same bundle, it gets the following error:
{noformat}
INFO: failed to install/start bundle: : org.osgi.framework.BundleException: Could not create bundle object. This is caused by org.osgi.framework.BundleException: Bundle symbolic name and version are not unique.
{noformat}

ImageRewriter can return 0-byte responses
The Image Rewriter can sometimes generate invalid conversions. These have a 0-byte size. We should recognize that a 0-byte image is not valid and use the original image in this case.

Incorrect Logger name creation in ComponentContextImpl
This error was found when creating a java.util.logging.Logger from the JBI ComponentContext. The resulting logger does not have the expected name. For example:
Component Name: "myComponent"
Logger Suffix: "Bootstrap"
Expected Logger Name: "myComponentBootstrap"
Actual Logger Name: "BootstrapmyComponent"
See attached patch.

RenderingContentRewriterTest: NPE in Gadget.sanitizeOutput
There are various exceptions in RenderingContentRewriterTest, e.g.:
{noformat}
Test set: org.apache.shindig.gadgets.render.RenderingContentRewriterTest
-------------------------------------------------------------------------------
Tests run: 25, Failures: 0, Errors: 23, Skipped: 0, Time elapsed: 0.184 sec <<< FAILURE!
defaultOutput(org.apache.shindig.gadgets.render.RenderingContentRewriterTest) Time elapsed: 0.01 sec <<< ERROR!
java.lang.NullPointerException
    at org.apache.shindig.gadgets.Gadget.sanitizeOutput(Gadget.java:141)
    at org.apache.shindig.gadgets.render.RenderingContentRewriter.rewrite(RenderingContentRewriter.java:124)
    at org.apache.shindig.gadgets.render.RenderingContentRewriterTest.rewrite(RenderingContentRewriterTest.java:122)
    at org.apache.shindig.gadgets.render.RenderingContentRewriterTest.defaultOutput(RenderingContentRewriterTest.java:130)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
    at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
    at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
    at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
    at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:338)
    at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:997)
{noformat}

FieldSet isDisabled and isReadonly methods broken
The FieldSet is supposed to force its child components to be disabled/readonly when it is set to disabled/readonly. I did not observe this when I attempted to create a FieldSet with a child component. I believe the code to support this is not working as anticipated. The Field class has modified methods for isDisabled/isReadonly that specifically check whether the parent component (i.e. container) is an instanceof FieldSet (or Form, which is working AFAIK). The problem is that the design of FieldSet relies on an instance of its private inner class FieldSet.InnerContainerField for managing those child elements. When I step through the code in debug mode, the class instance is of this inner class type (InnerContainerField), not FieldSet. Since InnerContainerField is not a type of FieldSet, the subsequent logic is ignored.
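A self-contained illustration of the failure mode described above, using hypothetical classes that mirror the reported design rather than the actual framework source:
{code:java}
public class InstanceofDemo {
    static class FieldSet {
        // Private-style inner container that actually parents the child fields.
        class InnerContainerField { }
    }

    public static void main(String[] args) {
        FieldSet fieldSet = new FieldSet();
        Object container = fieldSet.new InnerContainerField();
        // Prints "false": the container is the inner class, not a FieldSet,
        // so an instanceof-FieldSet check in the child skips the
        // disabled/readonly propagation logic.
        System.out.println(container instanceof FieldSet);
    }
}
{code}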
Fail to install a plugin from a remote repository
Steps:
1. Login to the admin console.
2. Click Applications -> Plugins.
3. Add http://9.123.237.40/localrepo/ce_repo/ as a repository.
4. Choose "Geronimo Plugins, Monitoring :: Agent JMX CAR" from the list to install.
5. No success or failure message is returned and the page stays at: Current file being operated on...

Portal Site Manager portlet does not save "unhidden" state for PSML
Portal Site Manager / PSML / Information: if the "hidden" checkbox is checked, the unchecked state is not saved after clicking the "save" button.

call to undefined subroutine strlen()
Undefined subroutine &Thrift::Socket::strlen called at /opt/perl-5.8.7/lib/site_perl/Thrift/Socket.pm line 244.

BinStorage skips tuples when ^A is present in data
Pradeep found a problem with the BinStorage.getNext function that causes data loss. He is working on the fix.

outputSchema method in TOKENIZE is broken
The outputSchema method in TOKENIZE is broken. It should return a bag with a tuple that contains a string, and not just a string (see the sketch below).
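A hedged sketch of the corrected schema shape for a Pig UDF's outputSchema. The API names are recalled from Pig 0.x; treat the exact signatures as approximate and the aliases as arbitrary, since this is not the actual TOKENIZE patch:
{code:java}
import org.apache.pig.data.DataType;
import org.apache.pig.impl.logicalLayer.schema.Schema;

public class TokenizeSchemaSketch {
    // Declares bag{tuple(chararray)} rather than a bare chararray.
    public Schema outputSchema(Schema input) {
        try {
            Schema tupleSchema = new Schema(
                    new Schema.FieldSchema("token", DataType.CHARARRAY));
            Schema.FieldSchema tupleFs =
                    new Schema.FieldSchema("tuple_of_tokens", tupleSchema, DataType.TUPLE);
            return new Schema(
                    new Schema.FieldSchema("bag_of_tokenTuples", new Schema(tupleFs), DataType.BAG));
        } catch (Exception e) {
            return null; // fall back to an unknown schema
        }
    }
}
{code}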
Unannotated Spring contexts are not interpreted correctly w.r.t. references vs properties
From this thread (http://www.mail-archive.com/dev@tuscany.apache.org/msg05518.html): I'm looking at a Spring-based payment component in the travel sample in the sandbox [1] and am having problems in the case where the Spring context and the implementations it references are unannotated. I have a bean...
{noformat}
public class PaymentImpl implements Payment {
    private CreditCardPayment creditCardPayment;
    private EmailGateway emailGateway;
    public void setCreditCardPayment(CreditCardPayment creditCardPayment) {
        this.creditCardPayment = creditCardPayment;
    }
    public void setEmailGateway(EmailGateway emailGateway) {
        this.emailGateway = emailGateway;
    }
}
{noformat}
a context...
{noformat}
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:sca="http://www.springframework.org/schema/sca"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="Payment" class="payment.PaymentImpl">
        <property name="creditCardPayment" ref="creditCardPaymentReference"/>
        <property name="emailGateway" ref="EmailGateway"/>
    </bean>
    <bean id="EmailGateway" class="scatours.emailgateway.EmailGatewayImpl">
    </bean>
</beans>
{noformat}
and a composite component...
{noformat}
<component name="PaymentComponent">
    <implementation.spring location="Payment-context.xml"/>
    <service name="Payment">
        <interface.wsdl interface="http://www.tuscanyscatours.com/Payment/#wsdl.interface(Payment)" />
        <binding.ws uri="http://localhost:8080/Payment" wsdlElement="http://www.tuscanyscatours.com/Payment/#wsdl.service(PaymentService)"/>
    </service>
    <reference name="creditCardPaymentReference">
        <binding.ws uri="http://localhost:8081/CreditCardPayment"/>
    </reference>
</component>
{noformat}
Now the introspection that goes on inside SpringXMLComponentTypeLoader [2] struggles, as it doesn't have access to the information from the SCDL. It introspects the bean to try and decide which fields should be references and which should be properties, and it can't tell the difference. What I think should be happening is that it should be looking back at the component in the SCDL to see what has actually been defined as a reference and what has been defined as a service, and using this information to build the component type. However, there is a lot of code here, so I'm looking for someone familiar with the implementation.spring code to tell me what I've missed.

Almost all archetypes generate extra http:// for xmlns:xsi namespace

The org.apache.servicemix.kernel.management bundle should not use DynamicImport-Package=*

jms archetypes generate extra http:// for xmlns:xsi namespace
JMS archetypes servicemix-jms-consumer-service-unit and servicemix-jms-provider-service-unit generate an extra http:// for the xmlns:xsi entry in xbean.xml. How to reproduce: use the Maven archetype from an Eclipse Maven project (with m2eclipse), or use the normal command line to generate the project.

Site Manager throws error when expanding the tree view
Site Manager throws an error when expanding the tree view. This seems to cause a problem on WebSphere, not Tomcat. Click the expand button for any folder in the Site View: it fails to expand and puts up a browser error.

Deployment issue for portlets due to referential integrity violation in database
When deploying some portlets, the following error occurs as the portlet tries to register with the portal. This happens with the demo portlet that comes with Jetspeed 2.1.3 (demo-2.1.3.war).
{noformat}
[1/14/08 10:12:01:192 EST] 00000030 SystemOut O ERROR: Failed to register portlet application, demo
org.springframework.dao.DataIntegrityViolationException: OJB operation; SQL []; ORA-01400: cannot insert NULL into ("TOMCAT"."PARAMETER"."PARAMETER_VALUE") ; nested exception is java.sql.SQLException: ORA-01400: cannot insert NULL into ("TOMCAT"."PARAMETER"."PARAMETER_VALUE")
Caused by: java.sql.SQLException: ORA-01400: cannot insert NULL into ("TOMCAT"."PARAMETER"."PARAMETER_VALUE")
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1169)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3368)
    at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteUpdate(WSJdbcPreparedStatement.java:948)
    at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeUpdate(WSJdbcPreparedStatement.java:615)
    at org.apache.ojb.broker.accesslayer.JdbcAccessImpl.executeInsert(JdbcAccessImpl.java:216)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1754)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:813)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:726)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeAndLinkOneToMany(PersistenceBrokerImpl.java:1057)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeCollections(PersistenceBrokerImpl.java:928)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1776)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:813)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:726)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeAndLinkOneToMany(PersistenceBrokerImpl.java:1057)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeCollections(PersistenceBrokerImpl.java:928)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1776)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:813)
    at org.apache.ojb.broker.core.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:726)
    at org.apache.ojb.broker.core.DelegatingPersistenceBroker.store(DelegatingPersistenceBroker.java:175)
    at org.apache.ojb.broker.core.DelegatingPersistenceBroker.store(DelegatingPersistenceBroker.java:175)
    at org.springframework.orm.ojb.PersistenceBrokerTemplate$9.doInPersistenceBroker(PersistenceBrokerTemplate.java:243)
    at org.springframework.orm.ojb.PersistenceBrokerTemplate.execute(PersistenceBrokerTemplate.java:138)
    at org.springframework.orm.ojb.PersistenceBrokerTemplate.store(PersistenceBrokerTemplate.java:241)
    at org.apache.jetspeed.components.portletregistry.PersistenceBrokerPortletRegistry.registerPortletApplication(PersistenceBrokerPortletRegistry.java:229)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:615)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:139)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:107)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy0.registerPortletApplication(Unknown Source)
    at org.apache.jetspeed.tools.pamanager.PortletApplicationManager.registerPortletApplication(PortletApplicationManager.java:370)
{noformat}

JMSMessage vanishes attachments
When using this kind of route:
{noformat}
<from uri="activemq:queue:test"/>
<camel:process ref="mailProcessor" />
<to uri="smtp://localhost:25?to=user@localhost" />
{noformat}
and trying to enrich the message in the mailProcessor with
{noformat}
exchange.getIn().addAttachment("attachement.txt", new DataHandler("Hello world", "text/plain"));
{noformat}
the received mail doesn't contain any attachment. If the input "from" is a "direct" endpoint instead of activemq, it works fine. Inspecting the source code, MessageSupport.copyFrom(Message that) does getAttachments().putAll(that.getAttachments()); but the child class JmsMessage doesn't (see the sketch below).
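A hedged sketch of the kind of fix the report implies for JmsMessage.copyFrom; the shape is inferred only from the description above, not taken from an actual Camel patch:
{code:java}
@Override
public void copyFrom(org.apache.camel.Message that) {
    super.copyFrom(that); // the real superclass may already copy headers/body
    // The missing step reported above: carry the attachments over as well,
    // exactly as MessageSupport.copyFrom does.
    getAttachments().putAll(that.getAttachments());
}
{code}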
Typo in the POM
There is a typo in the main POM:
{noformat}
localhost% mvn test
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[ERROR] FATAL ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error building POM (may not be this project's POM).
Project ID: unknown
POM Location: /Users/fermigier/svn/abdera-clean/pom.xml
Reason: Parse error reading POM. Reason: Unrecognised tag: 'reposiories' (position: START_TAG seen ...</properties>\n \n <reposiories>... @446:16) for project unknown at /Users/fermigier/svn/abdera-clean/pom.xml
{noformat}

[validation] RichText validator carries transitive dependency on sun tools.jar via htmlparser

[basics] Magma exceptions do not always print the message of the exception in stack traces

FormComponentPanel should not add a name attribute
FormComponent adds a name attribute in onComponentTag. This behaviour is inherited by FormComponentPanel, but is not valid for the latter. Often a FormComponentPanel is a div or span, for which the name attribute is not allowed.

org.apache.servicemix.specs.locator-1.1.1.jar useless?
The DOSGI multi-bundle distribution contains the org.apache.servicemix.specs.locator-1.1.1.jar that is supposed to be a bundle. But this file does not contain any OSGi metadata (so no activator, no imported/exported packages) and no additional files like XMLs to be discovered by some extender. So, is it useful?

CamelContext - Add ClassResolver to be used when you need to load a class, instead of ObjectHelper.loadClass, to work in OSGi environments
Add the skeleton and let Willem add the stuff in camel-osgi.

Error occurred when repeatedly creating DB or records
Steps:
1. Login to the console.
2. Click Embedded DB -> DB Manager.
3. Fill in the Create DB field with TestDB, click Create.
4. Create a table using:
{noformat}
CREATE TABLE CUSTOMER (
    ID INTEGER NOT NULL PRIMARY KEY,
    NAME VARCHAR(45),
    BIRTHDATE DATE,
    SSS_NO VARCHAR(25),
    ADDRESS VARCHAR(60),
    ANNUAL_SALARY DOUBLE,
    LOAN_AMOUNT DOUBLE
);
{noformat}
5. Insert a record:
{noformat}
INSERT INTO CUSTOMER VALUES (001, 'hi', '02/19/2009', '111', 'somewhere', 100, 150);
{noformat}
Now, if you create the same table again, or insert the same record, errors will occur. Also, the table can't be accessed any more.

ActiveMQ connectors default to 0.0.0.0 when ServerHostname is set to localhost or an actual IP
Ron Staerker reported that if you change ServerHostname in config-substitutions.properties from 0.0.0.0 to 127.0.0.1, the default ActiveMQ connectors on 61613 and 61616 will still bind to 0.0.0.0 instead of the new ServerHostname value. This seems to be caused by several pom.xml problems, where:
{noformat}
<config-property-setting name="ServerUrl">tcp://${PlanServerHostname}:${PlanActiveMQPort}</config-property-setting>
{noformat}
where PlanServerHostname is 0.0.0.0 and not in config-substitutions.properties, and
{noformat}
<attribute name="host">#{ServerHostname}</attribute>
{noformat}
is being substituted in at build time as 0.0.0.0 instead of putting ${ServerHostname} in the plans.

Elements <jndi-name>, <jndi-local-name> and <jndi> ignored in openejb-jar.xml

java 0-10 client doesn't report execution exceptions correctly
The client ignores the value of the execution exception when figuring out whether to become detached or closed, and incorrectly becomes detached rather than closed. This causes the session to hang waiting for failover when it is used, rather than to report the exception.

Using # notation to reference CXF serviceClass is not working
See the issue reported on the mailing list: http://www.nabble.com/camel-cxf-endpoint---error-%3A-Failed-to-convert-property-value-of-type--...-tp22312601p22312601.html

Dependencies don't inherit exclusions from dependencyManagement
In Maven 2, exclusions for a dependency can be given in the dependencyManagement element of an ancestor pom. These exclusions aren't correctly reproduced by Ivy. I have several projects here managed by Maven 2, and inheriting from a common ancestor.
The pom of this ancestor includes the following fragment:
{noformat}
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>log4j</groupId><artifactId>log4j</artifactId><version>1.2.15</version>
      <exclusions>
        <exclusion><groupId>javax.mail</groupId><artifactId>mail</artifactId></exclusion>
        <exclusion><groupId>javax.jms</groupId><artifactId>jms</artifactId></exclusion>
        <exclusion><groupId>com.sun.jdmk</groupId><artifactId>jmxtools</artifactId></exclusion>
        <exclusion><groupId>com.sun.jmx</groupId><artifactId>jmxri</artifactId></exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</dependencyManagement>
{noformat}
So in any inheriting project I can simply depend on log4j and get the correct version without the listed dependencies. This is important, as some of these dependencies cannot be resolved from the main Maven repository. They are in fact optional, but not listed as such in the log4j pom. It is also Maven practice to list such exclusions in the common ancestor instead of repeating them in every module depending on log4j. Ivy 2 doesn't reproduce this kind of inherited exclusion. When I have an Ivy project depending on one of my projects, the modules are not excluded, resulting in download errors for obscure packages. Looking at the Ivy descriptors in the cache, I find the exclusions missing. To fix this, PomDependencyMgtElement in PomReader.java would have to learn to look out for exclusions. This information could then be used by PomModuleDescriptorBuilder. More precisely, addDependencyMgt would have to store it with the descriptor, and addDependency could then incorporate it in its inheritance calculations. The whole setup with extra information, with keys calculated using getDependencyMgtExtraInfoKeyFor* and values restricted to strings, seems ill-suited to express the structure of the dependency management information. I would prefer the ivy.xml to contain an m2:dependencyManagement element, and use Maven POM syntax within that element. I guess this approach would require larger modifications, though, so I doubt that's a good idea for 2.0 at least.

The LDIF parser does not correctly parse changes
Trying to parse such an LDIF:
{noformat}
# principal: 0.9.2342.19200300.100.1.1=admin,2.5.4.11=system
# timestamp: 1235905921781
# revision: 1235905920724
dn: 2.5.4.3=the person,2.5.4.11=system
changeType: modify
add: objectclass
objectclass: organizationalPerson
objectclass: inetOrgPerson
-
{noformat}
you get an error message, as the last '-' is considered a wrong token.

Comparison of schemas of bincond operands is flawed
The comparison of schemas of bincond is flawed. Instead of comparing the field schemas, the type checker is comparing the schemas.
NullPointerException when fetching children of a node
When I browse to a specific node in my directory I get the following exception:
{noformat}
Error while reading entry java.lang.NullPointerException
java.lang.NullPointerException
{noformat}
LDIF of the branch in question (some info removed):
{noformat}
dn: ou=principals,o=directory
objectClass: organizationalUnit
objectClass: top
ou: principals

dn: krb5PrincipalName=krbtgt/KERBEROSDOMAIN@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: krbtgt/KERBEROSDOMAIN

dn: krb5PrincipalName=kadmin/changepw@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: kadmin/changepw

dn: krb5PrincipalName=kadmin/admin@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: kadmin/admin

dn: krb5PrincipalName=changepw/kerberos@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: changepw/kerberos

dn: krb5PrincipalName=kadmin/hprop@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: kadmin/hprop

dn: krb5PrincipalName=default@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: default

dn: krb5PrincipalName=richard@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
objectClass: shadowAccount
uid: richard
userPassword:: aG50bWhhdm5iIQ==

dn: krb5PrincipalName=ldap/pinkfloyd.KERBEROSDOMAIN@KERBEROSDOMAIN,ou=principals,o=directory
objectClass: top
objectClass: account
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: ldap/pinkfloyd.KERBEROSDOMAIN
{noformat}

Unable to restart after disk is full
1. Inserted data into the database until the disk was full (5 clients inserting into 5 different tables in parallel).
2. Shut down the server.
3. Started the database again without freeing any disk space.
When I try to start the database again, I get a NullPointerException, regardless of how I connect (I have tried embedded, client/server, ij, and JDBC applications). I have not tried to free some space on the disk before starting. The call stack is not available right now (the computer I used had to be shut down due to problems with our cooling system), but the exception comes from the following line in RawStore.java:
{noformat}
properties.put(Attribute.LOG_DEVICE, logFactory.getCanonicalLogPath());
{noformat}
getCanonicalLogPath() returns null, which results in an NPE in the hash table (see the demo below). A quick debug before the computer was stopped showed that the logFactory was an instance of org.apache.derby.impl.store.raw.log.ReadOnly, which always returns null from its getCanonicalLogPath(). I suspect this may be related to the fact that I ran with the log in a non-default location.
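A tiny self-contained demo of the failure mode in the Derby report above: java.util.Properties extends Hashtable, whose put rejects null values. The key string here is illustrative only.
{code:java}
import java.util.Properties;

public class NullPutDemo {
    public static void main(String[] args) {
        Properties p = new Properties();
        // Mirrors properties.put(Attribute.LOG_DEVICE, null):
        // Hashtable.put throws NullPointerException for a null value.
        p.put("logDevice", null);
    }
}
{code}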
bookkeeper benchmark (testclient.java) has compile errors

Getting live sessions from RequestLogger results in NPE
Getting the live sessions from the RequestLogger results in a NullPointerException:
{noformat}
Caused by: java.lang.NullPointerException
    at org.apache.wicket.protocol.http.RequestLogger$SessionData.compareTo(RequestLogger.java:596)
    at java.util.Arrays.mergeSort(Arrays.java:1144)
    at java.util.Arrays.mergeSort(Arrays.java:1156)
    at java.util.Arrays.mergeSort(Arrays.java:1156)
    at java.util.Arrays.mergeSort(Arrays.java:1156)
    at java.util.Arrays.mergeSort(Arrays.java:1156)
    at java.util.Arrays.mergeSort(Arrays.java:1156)
    at java.util.Arrays.sort(Arrays.java:1079)
    at org.apache.wicket.protocol.http.RequestLogger.getLiveSessions(RequestLogger.java:163)
{noformat}

Regex for Cmd parsing contains an error

Mutex class allows for double free in APR pools under certain circumstances
The Mutex class in the Decaf library can allow two copies to be created which both share pointers to APR resources, and when the second instance is destroyed a segfault occurs. The code needs to be corrected to properly copy itself, or to prevent copying altogether, so that this does not happen.

outer join query loses name information
The following query:
{noformat}
A = LOAD 'student_data' AS (name: chararray, age: int, gpa: float);
B = LOAD 'voter_data' AS (name: chararray, age: int, registration: chararray, contributions: float);
C = COGROUP A BY name, B BY name;
D = FOREACH C GENERATE group, flatten((IsEmpty(A) ? null : A)), flatten((IsEmpty(B) ? null : B));
describe D;
E = FOREACH D GENERATE A::gpa, B::contributions;
{noformat}
gives the following error, even though describe shows the correct information:
D: {group: chararray,A::name: chararray,A::age: int,A::gpa: float,B::name: chararray,B::age: int,B::registration: chararray,B::contributions: float}
{noformat}
java.io.IOException: Invalid alias: A::gpa in {group: chararray,bytearray,bytearray}
    at org.apache.pig.PigServer.parseQuery(PigServer.java:298)
    at org.apache.pig.PigServer.registerQuery(PigServer.java:263)
    at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:439)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:249)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:84)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:64)
    at org.apache.pig.Main.main(Main.java:306)
Caused by: org.apache.pig.impl.logicalLayer.parser.ParseException: Invalid alias: A::gpa in {group: chararray,bytearray,bytearray}
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.AliasFieldOrSpec(QueryParser.java:5930)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.ColOrSpec(QueryParser.java:5788)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.BaseEvalSpec(QueryParser.java:3974)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.UnaryExpr(QueryParser.java:3871)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.CastExpr(QueryParser.java:3825)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.MultiplicativeExpr(QueryParser.java:3734)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.AdditiveExpr(QueryParser.java:3660)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.InfixExpr(QueryParser.java:3626)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.FlattenedGenerateItem(QueryParser.java:3552)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.FlattenedGenerateItemList(QueryParser.java:3462)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.GenerateStatement(QueryParser.java:3419)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.NestedBlock(QueryParser.java:2894)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.ForEachClause(QueryParser.java:2309)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.BaseExpr(QueryParser.java:966)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.Expr(QueryParser.java:742)
    at org.apache.pig.impl.logicalLayer.parser.QueryParser.Parse(QueryParser.java:537)
    at org.apache.pig.impl.logicalLayer.LogicalPlanBuilder.parse(LogicalPlanBuilder.java:60)
    at org.apache.pig.PigServer.parseQuery(PigServer.java:295)
    ... 6 more
{noformat}

Letting Go Of Retired Processes
In the event a process is marked for retirement, a best effort must be made to release all of its resources from memory. Currently, a retired process stays in memory forever. As a result, the memory footprint of a process rises monotonically with each successive deployment, even if previous versions are retired. The object models that dominate a process's footprint are those of BPEL, which is managed by ODE, and WSDL, which is managed by Axis2. To expedite garbage collection of the BPEL graph, we must try to recursively clear the entire tree that is rooted underneath the OProcess object. To facilitate garbage collection of the WSDL forest, we must release the schema list and definition for every AxisService that the process provides.

UNION doesn't work in the latest code
{noformat}
grunt> a = load 'tmp/f1' using BinStorage();
grunt> b = load 'tmp/f2' using BinStorage();
grunt> describe a;
a: {int,chararray,int,{(int,chararray,chararray)}}
grunt> describe b;
b: {int,chararray,int,{(int,chararray,chararray)}}
grunt> c = union a,b;
grunt> describe c;
2009-02-27 11:51:46,012 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1052: Cannot cast bag with schema bag({(int,chararray,chararray)}) to tuple with schema tuple
Details at logfile: /homes/amiry/pig_1235735380348.log
{noformat}
dump a and dump b work fine. Sample data was provided to the dev team in an e-mail.

Portability problem
The Socket class API uses the type socklen_t, which is not widely portable. Moreover, it's used in a call where it's only ever supplied as 0, so the whole issue could be avoided by taking it out of the API.

gridmix2 is not getting compiled to generate gridmix.jar
Not able to compile gridmix2 to generate gridmix.jar; compilation fails with a build-failed message. It seems that the problem is with the mapper and reducer classes specified in CombinerJobCreator.java. After changing the mapper class from "MapClass.class" to "Mapper.class" and the reduce class from "Reduce.class" to "Reducer.class", it started working and gridmix.jar was generated.
ant binary should not compile docs
ant binary now compiles docs. The compilation of the binary itself takes around 6 minutes. Since the tar ball does not include docs, they need not be compiled. The size of the binary is 17MB on my system. I could see duplicate library copies in the tar contents. For example:
{noformat}
-rw-rw-r-- / 26202   2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/commons-logging-api-1.0.4.jar
-rw-rw-r-- / 2532573 2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/hadoop-0.21.0-dev-core.jar
-rw-rw-r-- / 69850   2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/hadoop-0.21.0-dev-tools.jar
-rw-rw-r-- / 516429  2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/jetty-6.1.14.jar
-rw-rw-r-- / 121070  2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/junit-3.8.1.jar
-rw-rw-r-- / 391834  2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/log4j-1.2.15.jar
-rw-rw-r-- / 15345   2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/slf4j-api-1.4.3.jar
-rw-rw-r-- / 15010   2009-01-16 11:53:48 hadoop-0.21.0-dev/contrib/hdfsproxy/lib/xmlenc-0.52.jar
----------------------------------------------
-rw-rw-r-- / 2532573 2009-01-16 11:53:51 hadoop-0.21.0-dev/hadoop-0.21.0-dev-core.jar
-rw-rw-r-- / 69850   2009-01-16 11:53:51 hadoop-0.21.0-dev/hadoop-0.21.0-dev-tools.jar
-rw-rw-r-- / 516429  2009-01-16 11:53:34 hadoop-0.21.0-dev/lib/jetty-6.1.14.jar
-rw-rw-r-- / 26202   2009-01-16 11:53:34 hadoop-0.21.0-dev/lib/commons-logging-api-1.0.4.jar
-rw-rw-r-- / 121070  2009-01-16 11:53:34 hadoop-0.21.0-dev/lib/junit-3.8.1.jar
-rw-rw-r-- / 391834  2009-01-16 11:53:34 hadoop-0.21.0-dev/lib/log4j-1.2.15.jar
-rw-rw-r-- / 15345   2009-01-16 11:53:34 hadoop-0.21.0-dev/lib/slf4j-api-1.4.3.jar
-rw-rw-r-- / 15010   2009-01-16 11:53:34 hadoop-0.21.0-dev/lib/xmlenc-0.52.jar
{noformat}

We need to cache the attachment earlier when using WS-Addressing and MTOM, otherwise the attachment is not completely saved

StreamCache causes too many failing type converter attempts
Since StreamCache is turned on by default, it causes too many converter attempts that fail in MessageSupport:
{noformat}
No type converter available to convert from type: java.lang.Integer to the required type: org.apache.camel.StreamCache with value 1
{noformat}
It hurts performance too much. See nabble: http://www.nabble.com/Performance-and-MessageSupport.getBody-%281.6.0%29-td22291841s22882.html

Federation bridging sessions get command id sequence out of sync
The general symptom is something like:
{noformat}
invalid-argument: confirmed < (65535+0) but only sent < (65533+0) (qpid/SessionState.cpp:150)
{noformat}
seen when running a federation link with acknowledgements turned on for a long period (i.e. lots of messages; the exact number depends on the ack frequency selected).

NullPointerException thrown by equals method in SpanOrQuery
Part of our code utilizes the equals method in SpanOrQuery and, in certain cases (details to follow, if necessary), a NullPointerException gets thrown as a result of the String "field" being null. After applying the following patch, the problem disappeared:
{noformat}
Index: src/java/org/apache/lucene/search/spans/SpanOrQuery.java
===================================================================
--- src/java/org/apache/lucene/search/spans/SpanOrQuery.java (revision 465065)
+++ src/java/org/apache/lucene/search/spans/SpanOrQuery.java (working copy)
@@ -121,7 +121,8 @@
     final SpanOrQuery that = (SpanOrQuery) o;
     if (!clauses.equals(that.clauses)) return false;
-    if (!field.equals(that.field)) return false;
+    if (field != null && !field.equals(that.field)) return false;
+    if (field == null && that.field != null) return false;
     return getBoost() == that.getBoost();
 }
{noformat}
Roundrobin failover policy does not reset the cursor position after a successful failover
The default roundrobin failover policy does not reset the cursor position after a successful failover. This means that if you fail over from A to B, you will never fail over from B to A anymore (assuming that our list of brokers only contains the two brokers A and B).

Indexer failing after upgrade to Hadoop 0.19.1
After the upgrade to Hadoop 0.19.1, the Reducer is initialized in a different order than before (see http://svn.apache.org/viewvc?view=rev&revision=736239). IndexingFilters populate the current JobConf with field options that are required for IndexerOutputFormat to function properly. However, the filters are instantiated in Reducer.configure(), which is now called after the OutputFormat is initialized, and not before as previously. The workaround for now is to instantiate IndexingFilters once again inside IndexerOutputFormat. This issue should be revisited before 1.1 in order to find a better solution. See this thread for more information: http://www.lucidimagination.com/search/document/7c62c625c7ea17fe/problem_with_crawling_using_the_latest_1_0_trunk

requestConnection and responseConnection in JMSBinding model should be QNames, not Strings
In the JMSBinding model the requestConnection and responseConnection attributes are treated as Strings. According to the schema they are supposed to be QNames (where the namespace is used to locate the definition document and the local part is the binding name).

synapse does not add addressing headers to addressing client response
Create a proxy service with the following Synapse configuration:
{noformat}
<definitions xmlns="http://ws.apache.org/ns/synapse">
  <proxy name="StockQuoteProxy">
    <target>
      <inSequence>
        <header name="wsrm:SequenceAcknowledgement" action="remove" xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm"/>
        <header name="wsrm:Sequence" action="remove" xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm"/>
        <send>
          <endpoint>
            <address uri="http://localhost:9001/services/SimpleStockQuoteService"/>
          </endpoint>
        </send>
      </inSequence>
      <outSequence>
        <send/>
      </outSequence>
    </target>
  </proxy>
</definitions>
{noformat}
Note that addressing is not engaged for the proxy service endpoint. Then invoke this proxy service with the following client request:
{noformat}
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:To>http://localhost:8281/services/StockQuoteProxy</wsa:To>
    <wsa:MessageID>urn:uuid:7A24BA37E7FBA1158F1230053211470</wsa:MessageID>
    <wsa:Action>urn:getQuote</wsa:Action>
  </soapenv:Header>
  <soapenv:Body>
    <m0:getQuote xmlns:m0="http://services.samples">
      <m0:request>
        <m0:symbol>IBM</m0:symbol>
      </m0:request>
    </m0:getQuote>
  </soapenv:Body>
</soapenv:Envelope>
{noformat}
The response message then does not have the addressing headers.

minor: avoid building error string in verifyReplication()
On the NameNode, {{verifyReplication()}} is called for every new file created to check if it falls within the configured limits. Currently its implementation always builds the error string, though it is almost never needed. This jira fixes it.
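A hedged sketch of the lazy-construction idea behind the verifyReplication() report above; the surrounding method shape and variable names are assumptions, not the actual HDFS code. The point is simply that the message is built only on the rare failure path:
{code:java}
// Before: the "file " + src + "..." string was concatenated on every call.
// After: do the cheap range check first; build the string only when failing.
if (replication > maxReplication || replication < minReplication) {
    throw new IOException("file " + src + ": requested replication "
            + replication + " is out of range [" + minReplication
            + ", " + maxReplication + "]");
}
{code}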
Built sample plugin using new tomcat6-deployer cannot be installed on server
Recently, when I was building some 2.1-branch sample plugins, I found that the built sample plugins are all not installable. A message on the admin console like this:
{noformat}
A problem has occured: org.apache.geronimo.kernel.config.LifecycleException: start of org.apache.geronimo.samples/calculator-tomcat/2.1.3-SNAPSHOT/car failed
{noformat}
An exception in stdout/stderr like this:
{noformat}
Caused by: org.apache.geronimo.kernel.config.InvalidConfigException: Unable to resolve reference "Container" in gbean org.apache.geronimo.samples/calculator-tomcat/2.1.3-SNAPSHOT/car?J2EEApplication=null,j2eeType=WebModule,name=org.apache.geronimo.samples/calculator-tomcat/2.1.3-SNAPSHOT to a gbean matching the pattern [?name=WebContainer#]
    at org.apache.geronimo.kernel.config.ConfigurationUtil.preprocessGBeanData(ConfigurationUtil.java:380)
    at org.apache.geronimo.kernel.config.ConfigurationUtil.startConfigurationGBeans(ConfigurationUtil.java:438)
    at org.apache.geronimo.kernel.config.KernelConfigurationManager.start(KernelConfigurationManager.java:188)
    at org.apache.geronimo.kernel.config.SimpleConfigurationManager.startConfiguration(SimpleConfigurationManager.java:562)
{noformat}

Common compressed image files should be configured to not re-compress when sent to client: GIF, PNG
Tapestry's gzip compression logic should know that GIF and PNG files are already compressed and should not be re-compressed.

Chukwa agent crashes on startup with missing check point file
When the check point file doesn't exist, the agent startup script crashes.

Cxf Endpoint String bean properties are not merged
CxfEndpointBeanDefinitionParser maintains a property map that can be overridden by a user-provided property map. They should be merged. Please see the email thread: http://www.nabble.com/camel-cxf---dataformat-tp22332652p22332652.html

Service generation needs to make service names capitalized, since Ruby modules need to be constants
The service generator needs to capitalize service names, since services are Ruby modules and Ruby modules are constants.

Implicit Enum Values should still be valid
Currently only explicitly set enum values are added to the ValidValues set. This makes Thrift interfaces like scribe fail. Example:
{code}
# scribe thrift interface
enum ResultCode {
    OK,
    TRY_LATER
}

# generated ruby code
module ResultCode
    OK = 0
    TRY_LATER = 1
    VALID_VALUES = Set.new([]).freeze
end
{code}
My patch removes the check to see if the value was explicitly set, so all enumerated types get added to the ValidValues set.

secureToken in gadgets.js missing an expected value
BasicSecurityTokenDecoder now expects tokens to have 7 values. Because the gadgets.js default token has only 6 values, the examples in gadgets/files/container/sample*.html fail with a SecurityTokenException.

DataNodeCluster should not create blocks with generationStamp == 1
In DataNodeCluster.main(..), injected blocks are created with generationStamp == 1, which is a reserved value but not a valid generation stamp. As a consequence, the NameNode may die when those blocks are reported.
However, it doesn't work when using the car-maven-plugin, since the config.xml overrides and config-substitutions are not applied. We need to either make the config modifications work for the car-maven-plugin or make sure none of the deployers the car-maven-plugin might use depend on these customizations to work. For instance, we could change all the names to WebContainer.

Converters cannot inherit properties from other converters when @JSFConverter is used with the myfaces builder plugin
Trinidad has a small hierarchy of converters (some on the api and some on the impl). In the myfaces builder plugin, the intention was to allow converters to inherit properties, but in Tomahawk there is no example, so this was never tested until now.

client using decoupled ws-addressing with async handler hangs from time to time
If we use decoupled ws-addressing and an async invocation handler, the client side will hang from time to time. At the same time we can see errors like "Connection reset by peer". The fix is in HTTPConduit: we should cache the InputStream of the inMessage before invoking clientImpl.onMessage, since in async mode onMessage runs in another thread, executed by the executor, and we can't guarantee the connection is still alive when onMessage is actually invoked. So we need to do something like
{code}
InputStream in = inMessage.getContent(InputStream.class);
CachedOutputStream cos = new CachedOutputStream();
IOUtils.copy(in, cos);
inMessage.setContent(InputStream.class, cos.getInputStream());
incomingObserver.onMessage(inMessage);
{code}
to cache the InputStream to ensure it's still there when we use it.

Buildnumber task does not work for chained resolvers
If the resolver attribute of the buildnumber task points to a chained resolver, the buildnumber task will not find any existing revision. I will attach a unit test to show the problem. I've debugged to this point: the Buildnumber task uses SearchEngine.listModules, which in turn uses Resolver.listTokenValues. The doc says listTokenValues must not return values for child resolvers, so listTokenValues cannot return values for child resolvers either. I don't know how to fix this.

Using regular expressions with the @Validate annotation causes odd parse errors if the regexp includes common characters (including commas)
Try adding this field to your form:
@Validate("regexp=^([a-zA-Z0-9]{2,4})+$") private String somefield;
The page will fail to render with an exception saying:
Render queue error in BeginRender[mypage.somefield]: Failure reading parameter 'validate' of component mypage.somefield: Coercion of ^([a-zA-Z0-9]{2 to type java.util.regex.Pattern (via String --> java.util.regex.Pattern) failed: Unclosed counted closure near index 15 ^([a-zA-Z0-9]{2 ^

ConcurrentMergeScheduler.addMyself() has wrong index
This method has the wrong value for the 'size' variable; I think it should be allInstances.size() (a corrected sketch follows after the next report).
{code:java}
private void addMyself() {
  synchronized(allInstances) {
    final int size=0;
    int upto = 0;
    for(int i=0;i<size;i++) {
      final ConcurrentMergeScheduler other = (ConcurrentMergeScheduler) allInstances.get(i);
      if (!(other.closed && 0 == other.mergeThreadCount()))
        // Keep this one for now: it still has threads or
        // may spawn new threads
        allInstances.set(upto++, other);
    }
    allInstances.subList(upto, allInstances.size()).clear();
    allInstances.add(this);
  }
}
{code}

Deploying camel routes involving JBI endpoints in ServiceMix 4 does not work anymore
See the {{features/trunk/examples/simple/quartz.xml}} example. This is caused by the modification of the component to create a camel context for each SU.
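For reference, a minimal corrected sketch of the ConcurrentMergeScheduler.addMyself() loop from the report above, assuming the reporter's suggested fix of deriving 'size' from allInstances instead of hardcoding 0 (which made the cleanup loop a no-op):
{code:java}
private void addMyself() {
  synchronized(allInstances) {
    // Fix: use the actual list size; the reported code used "final int size=0;",
    // so the loop below never ran and closed schedulers were never pruned.
    final int size = allInstances.size();
    int upto = 0;
    for(int i=0;i<size;i++) {
      final ConcurrentMergeScheduler other = (ConcurrentMergeScheduler) allInstances.get(i);
      if (!(other.closed && 0 == other.mergeThreadCount()))
        // Keep this one for now: it still has threads or may spawn new threads.
        allInstances.set(upto++, other);
    }
    // Drop the closed, thread-less schedulers that were not kept above.
    allInstances.subList(upto, allInstances.size()).clear();
    allInstances.add(this);
  }
}
{code}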
Ampersands in attributes not handled properly by Neko HTML parser code
Content like:
<span title="&amp;lt;">content</span>
gets serialized out to:
<span title="&lt;">content</span>
... so instead of showing "&lt;" as a tooltip, you'd just get "<". I don't see any security implications on modern browsers, so priority is very low. The fix is to change NekoSimplifiedHtmlParser and NekoSerializer to escape & in attributes to &amp;. The existing code only escapes " to &quot;.

System search manager uses a SessionItemStateManager
As noted in JCR-2000, the system search manager (responsible for indexing the /jcr:system subtree) uses the SessionItemStateManager instance of the system session instead of the SharedItemStateManager of the underlying default workspace. This can cause a deadlock (see the thread dumps in JCR-2000) when one thread is accessing the LockManager (that also uses the system session) while another thread is persisting versioning changes. See the search-on-sism.patch attachment in JCR-2000 for a fix to this issue.

Binary installer does not allow installation with a simple user
The binary installer asks you a lot of questions: where you want the server to be installed, where the instances folder should be placed, the name of the default instance... But it does not ask you where you want to put the 'apacheds' launcher. It's installed by default at '/var/run/apacheds'. This is an issue when trying to install ApacheDS as a simple user, since this user will not have the right to write to this path. The installer should offer to override the default path for this 'apacheds' launcher.

getLocalizedPattern displays currency sign in Linux
This is concerning the implementation of getLocalizedPattern provided with the fix for: https://issues.apache.org/jira/browse/TRINIDAD-1374 The currency sign string, "¤", isn't portable on Linux, so in order to detect and replace the symbol, the implementation was changed to instead use the symbol's Unicode value.

If a message sent to a proxy service is GET and the outputType is SOAP, Synapse sends a GET instead of a POST
<syn:proxy name="StockQuoteProxy" transports="https http" startOnLoad="true" statistics="disable" trace="disable">
  <syn:target>
    <syn:inSequence>
      <syn:log level="full"/>
      <syn:send>
        <syn:endpoint>
          <syn:address uri="http://localhost:9001/services/SimpleStockQuoteService" format="soap11"/>
        </syn:endpoint>
      </syn:send>
    </syn:inSequence>
    <syn:outSequence>
      <syn:send/>
    </syn:outSequence>
  </syn:target>
  <syn:publishWSDL uri="http://localhost:9000/services/SimpleStockQuoteService?wsdl"/>
</syn:proxy>
If a message is sent to the above proxy service, Synapse sends a GET instead of a SOAP message over POST. This works if the following Axis2 property is set, though
<syn:proxy name="StockQuoteProxy" transports="https http" startOnLoad="true" statistics="disable" trace="disable">
  <syn:target>
    <syn:inSequence>
      <syn:log level="full"/>
      <syn:property name="HTTP_METHOD" value="POST" scope="axis2"/>
      <syn:send>
        <syn:endpoint>
          <syn:address uri="http://localhost:9001/services/SimpleStockQuoteService" format="soap11"/>
        </syn:endpoint>
      </syn:send>
    </syn:inSequence>
    <syn:outSequence>
      <syn:send/>
    </syn:outSequence>
  </syn:target>
  <syn:publishWSDL uri="http://localhost:9000/services/SimpleStockQuoteService?wsdl"/>
</syn:proxy>

wrong request/sec reported in the gui
I am seeing a lower number of requests in the master's gui than I saw in 0.18.0 while scanning.
I think part of it is that we moved to reporting requests per second instead of per 3 seconds, so the requests should be 1/3 of the old numbers I was getting. hbase.client.scanner.caching is not the reason the requests are under-reported. I set hbase.client.scanner.caching = 1 and still get about 2K requests a sec in the gui, but when the job is done I take records / job time and get 36,324 records/sec. So there must be some caching outside of hbase.client.scanner.caching making the requests per sec lower than it should be. I know it is running faster than reported; I just thought it might give some new users the wrong impression that requests/sec = reads/writes per sec.

ListenerManager Shutdown Hook Attempts to Unregister Itself
The ListenerManagerShutdownThread calls listenerManager.stop() and the stop() method in turn attempts to unregister the shutdown hook, causing an IllegalStateException to be thrown (a sketch of the usual guard follows below).
Exception in thread "Thread-2" java.lang.IllegalStateException: Shutdown in progress
at java.lang.Shutdown.remove(Shutdown.java:104)
at java.lang.Runtime.removeShutdownHook(Runtime.java:218)
at org.apache.axis2.engine.ListenerManager.stop(ListenerManager.java:155)
at org.apache.axis2.engine.ListenerManager$ListenerManagerShutdownThread.run(ListenerManager.java:258)

Sources attach but Javadocs don't
It might be a regression of IVYDE-55 in 2.0.0.beta1. Here are the steps.
1. ivy.xml
<?xml version="1.0"?>
<ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
  <info organisation="xxx" module="mmm" status="integration" />
  <configurations>
    <conf name="compile" />
    <conf name="war" /> <!-- Artifacts to be included in a WAR -->
    <conf name="ide" extends="compile" description="+ Javadocs and sources" />
  </configurations>
  <dependencies defaultconf="compile,war->default;ide->ide(default)">
    <dependency org="apache" name="commons-logging" rev="1.1"/>
  </dependencies>
</ivy-module>
2. Repository files:
{repository}\apache\commons-logging\commons-logging-1.1\commons-logging-1.1-doc.zip
{repository}\apache\commons-logging\commons-logging-1.1\commons-logging-1.1-src.zip
{repository}\apache\commons-logging\commons-logging-1.1\commons-logging-1.1.jar
3. commons-logging-1.1-ivy.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="http://ivyrep.jayasoft.org/ivy-doc.xsl"?>
<ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
  <info organisation="apache" module="commons-logging" revision="1.1" status="release"/>
  <configurations>
    <conf name="default"/>
    <conf name="ide" extends="default" description="+ Javadocs and sources"/>
  </configurations>
  <publications>
    <artifact name="commons-logging-1.1" type="jar" conf="default"/>
    <artifact name="commons-logging-1.1-src" type="source" ext="zip" conf="ide"/>
    <artifact name="commons-logging-1.1-doc" type="javadoc" ext="zip" conf="ide"/>
  </publications>
</ivy-module>
4. Add an IvyDE library with the "ide" configuration only. I get commons-logging-1.1.jar with an attached source, but not Javadocs.

ConcurrentModificationException in SessionNavigationalState
This issue can only be reproduced if a user very rapidly interacts with the same portal page without waiting for the previous interaction to be completed.
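Regarding the ListenerManager shutdown hook report above: a minimal sketch of the usual guard, assuming a hypothetical shutdownHook field. Runtime.removeShutdownHook() throws IllegalStateException once JVM shutdown has begun, so a stop() method that can be reached from the hook itself must tolerate that state. This is an illustration, not the actual Axis2 fix.
{code:java}
public class GuardedListenerManager {
    private Thread shutdownHook; // hypothetical field holding the registered hook

    public synchronized void stop() {
        if (shutdownHook != null) {
            try {
                Runtime.getRuntime().removeShutdownHook(shutdownHook);
            } catch (IllegalStateException e) {
                // The JVM is already shutting down, so the hook is running and
                // can no longer be removed; there is nothing left to unregister.
            }
            shutdownHook = null;
        }
        // ... stop the transport listeners here ...
    }
}
{code}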
Branch in repository pattern and defaultBranch
Let's suppose that you have a branch token in the repository pattern, as in "[organisation]/[module]/[branch]/[revision]/[type]s/[artifact].[ext]", and you have also set defaultBranch in ivysettings like this: <settings defaultBranch="trunk"/> If you have a dependency without a branch tag and a static revision, then IvyDE seems to insert an empty string as the branch in the repository pattern while resolving, which causes an error. For instance, if you have a dependency such as this: <dependency org="myorg" name="mymod" rev="10.2"/> Then IvyDE tries to find it in "myorg/mymod//10.2". I think it should try to find it in "myorg/mymod/trunk/10.2" instead, which is the way the normal Ivy resolve from an Ant script works.

deploy error after using login to save the name and password
1. start the geronimo server
2. remove the file $HOME/.geronimo-deployer
3. run gsh to enter gshell mode
4. run deploy/login -u username -w password to save the username and password
5. run deploy/connect
6. run deploy/list-modules
output: ERROR IllegalStateException:Disconnected
If deploy/list-plugins is run, the output will be ERROR NullPointerException:null

Tapestry.ScriptManager.contains throws error if <script> tag in <head> has no href
Tapestry.ScriptManager.contains' first lines are:
return $A(collection).any(function (element) {
  var existing = element[prop];
  if (existing.blank()) return false;
If the element doesn't have the prop, then the existing var is null and existing.blank() throws an exception, because a method is being called on a null variable. This happens, for example (this is how I detected it), when I want to update a zone with a block that also loads stylesheets and scripts and, in the original page, there are script and/or style tags embedded (<style> without href or <script> without src). In the loop looking for an existing script or style tag that had already loaded the new script or tag in the ajax response, the existing variable in the code is null and the error triggers. The solution is very simple, I think: if (existing == null || existing.blank()) return false; Regards.

XML namespace mismatch in Synapse samples 2, 4 and 7
Hi, while running the Synapse samples I found that there is an XML namespace mismatch in samples 2, 4 and 7. The following is the body of the SOAP message that I was able to trace while sending the getQuote request using the Axis2 client that is bundled with Synapse.
<soapenv:Body>
  <m0:getQuote xmlns:m0="http://services.samples">
    <m0:request>
      <m0:symbol>IBM</m0:symbol>
    </m0:request>
  </m0:getQuote>
</soapenv:Body>
But samples 2, 4 and 7 are configured to accept a message with a different namespace, so they do not work properly (even though sample 7 seems to run perfectly, the incoming 'Invalid custom quote request' axis fault is generated due to the xmlns mismatch; you can check this by changing stocksymbol --> symbol, where you get the same message). thank you, Charith

Synapse.war does not contain synapse .mar file (where should I put a patch? is this the Synapse NIGHTLY version?)
The Synapse.war file does not contain a synapse-SNAPSHOT .mar file in the "synapse.home/WEB-INF/repository/modules/" directory. Because of that, Synapse can't be started in a web container.

MTOM not working
Dain, you've done a fix regarding that topic: http://www.nabble.com/WS-and-MTOM-td19372229.html I checked again, and it seems like the bug has been fixed only for WebModules. Can you fix it for EjbModules?
Balancer sometimes runs out of memory after days or weeks running
The culprit is a HashMap called movedBlocks. By design this map does not get cleaned up between iterations. This is because the deletion of source replicas is done by the NN. When the next iteration starts, source replicas may not have been deleted yet, and the Balancer does not want to schedule them to move again. To prevent running out of memory, the Balancer should expire/clean movedBlocks entries from some iterations back (a sketch follows below).

Error in Serializing the fault mediator fault code with a QName having default prefix
When I set the FaultCode value as given below,
faultMediator.setFaultCodeValue(new QName("http://www.w3.org/2003/05/soap-envelope","Receiver"));
it is serialized as
<syn:makefault> <syn:code xmlns:axis2ns2="http://www.w3.org/2003/05/soap-envelope" value=":Receiver" /> </syn:makefault>
The problem occurs when the QName has the default prefix, which is "". When I set the fault code with a QName with some other prefix
faultMediator.setFaultCodeValue(new QName("http://www.w3.org/2003/05/soap-envelope","Receiver","myPrefix"));
it is serialized correctly.

Deadlock on concurrent commits
As reported in the followup to JCR-1979, there's a case where two transactions may be concurrently inside a commit. This is bad as it breaks the main assumption in http://jackrabbit.apache.org/concurrency-control.html about all transactions first acquiring the versioning write lock. Looking deeper into this I find that the versioning write lock is only acquired if the transaction being committed contains versioning operations. This is incorrect, as all transactions in any case need to access the version store when checking for references.

Addressing should be engaged on the response if the request comes with addressing
At the moment, if you send a request to Synapse with addressing (when the reply is *not* redirected), Synapse does not send the addressing headers in the response message back to the client. This should be fixed to send addressing headers to the client if the request contains addressing headers.

vclreload account has invalid curriculumid
The curriculumid for the vclreload user is set to 3. There is no entry in the curriculum table with this id.

Chukwa Tests should not write to /tmp
From http://wiki.apache.org/hadoop/HowToContribute New unit tests should be provided [...] By default, do not let tests write any temporary files to /tmp. Instead, the tests should write to the location specified by the test.build.data system property.

chukwaAgent.agent.control.port is going away
Conf option chukwaAgent.agent.control.port has been renamed; it ought to vanish.

TSocket.peek fails on FreeBSD
POSIX says that recv(2) should return 0 if the peer has performed a shutdown. This behavior is relied upon in TBufferedTransport
{code}
bool peek() {
  if (rBase_ == rBound_) {
    setReadBuffer(rBuf_.get(), transport_->read(rBuf_.get(), rBufSize_));
  }
  return (rBound_ > rBase_);
}
{code}
This works fine on Linux, but fails on FreeBSD. On FreeBSD, recv returns -1 and errno==ECONNRESET.
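Regarding the Balancer report above: a minimal sketch of the kind of expiry it proposes for movedBlocks, assuming a hypothetical tracker keyed by block id with a fixed time window (the class, field and window names are illustrative, not HDFS code):
{code:java}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

class MovedBlocksTracker {
    // Illustrative window: entries older than this are assumed deleted by the NN.
    private static final long WINDOW_MS = 2 * 60 * 60 * 1000L;
    private final Map<String, Long> movedBlocks = new HashMap<String, Long>();

    synchronized void markMoved(String blockId) {
        movedBlocks.put(blockId, System.currentTimeMillis());
    }

    synchronized boolean wasRecentlyMoved(String blockId) {
        return movedBlocks.containsKey(blockId);
    }

    // Called at the start of each iteration so the map cannot grow without bound.
    synchronized void expireOldEntries() {
        long cutoff = System.currentTimeMillis() - WINDOW_MS;
        for (Iterator<Map.Entry<String, Long>> it = movedBlocks.entrySet().iterator(); it.hasNext();) {
            if (it.next().getValue() < cutoff) {
                it.remove();
            }
        }
    }
}
{code}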
Replication exchange does not verify existence of queue before attempting to enqueue to (or dequeue from) it

Sign before encrypt throws a ClassCastException
The UX ws-sec 11 test fails on the server side with the following exception
INFO: Interceptor has thrown exception, unwinding now
org.apache.cxf.interceptor.Fault: java.util.ArrayList
at org.apache.cxf.ws.security.wss4j.policyhandlers.SymmetricBindingHandler.doSignBeforeEncrypt(SymmetricBindingHandler.java:380)
at org.apache.cxf.ws.security.wss4j.policyhandlers.SymmetricBindingHandler.handleBinding(SymmetricBindingHandler.java:113)
at org.apache.cxf.ws.security.wss4j.PolicyBasedWSS4JOutInterceptor$PolicyBasedWSS4JOutInterceptorInternal.handleMessage(PolicyBasedWSS4JOutInterceptor.java:131)
at org.apache.cxf.ws.security.wss4j.PolicyBasedWSS4JOutInterceptor$PolicyBasedWSS4JOutInterceptorInternal.handleMessage(PolicyBasedWSS4JOutInterceptor.java:1)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
at org.apache.cxf.interceptor.OutgoingChainInterceptor.handleMessage(OutgoingChainInterceptor.java:74)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:89)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.serviceRequest(JettyHTTPDestination.java:302)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.doService(JettyHTTPDestination.java:265)
at org.apache.cxf.transport.http_jetty.JettyHTTPHandler.handle(JettyHTTPHandler.java:70)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:206)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:324)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:729)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:205)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
Caused by: java.lang.ClassCastException: java.util.ArrayList
at org.apache.cxf.ws.security.wss4j.policyhandlers.AbstractBindingBuilder.assertSupportingTokens(AbstractBindingBuilder.java:370)
at org.apache.cxf.ws.security.wss4j.policyhandlers.AbstractBindingBuilder.assertSupportingTokens(AbstractBindingBuilder.java:1343)
at org.apache.cxf.ws.security.wss4j.policyhandlers.SymmetricBindingHandler.doSignBeforeEncrypt(SymmetricBindingHandler.java:331)
... 21 more

SubscriptionManager::get() closes dispatch queue unexpectedly
If SubscriptionManager::get() is invoked with no subscriptions in place, the dispatch queue is closed and a subsequent SubscriptionManager::run() returns without dispatching.

autoreconf fails for /zookeeper-3.0.1/src/c/
> autoreconf -i -f -v
autoreconf-2.63: Entering directory `.'
autoreconf-2.63: configure.ac: not using Gettext
autoreconf-2.63: running: aclocal --force
configure.ac:21: error: AC_SUBST: `DX_FLAG_[]DX_CURRENT_FEATURE' is not a valid shell variable name
acinclude.m4:77: DX_REQUIRE_PROG is expanded from...
acinclude.m4:117: DX_ARG_ABLE is expanded from...
acinclude.m4:178: DX_INIT_DOXYGEN is expanded from...
configure.ac:21: the top level
autom4te-2.63: /usr/bin/m4 failed with exit status: 1
aclocal-1.10: autom4te failed with exit status: 1
autoreconf-2.63: aclocal failed with exit status: 1

Ivy doesn't handle the classifier attribute of artifacts inside dependency elements
It appears that the translation from a Maven POM to an Ivy file is placing the m:classifier attribute on the artifact element rather than the dependency element, where ivy handles it. To reproduce:
{code}java -jar <path-to>/ivy-2.0.0.jar -settings mule-core-ivy-settings.xml -dependency org.mule mule-core 2.2.0{code}
The mule-core-ivy-settings.xml file is:
{code}<?xml version="1.0" encoding="utf-8"?>
<ivysettings>
  <settings defaultResolver="downloadGrapes" />
  <resolvers>
    <chain name="downloadGrapes">
      <ibiblio name="codehaus" root="http://repository.codehaus.org/" m2compatible="true" />
      <ibiblio name="ibiblio" m2compatible="true" />
      <ibiblio name="java.net2" root="http://download.java.net/maven/2/" m2compatible="true" />
      <ibiblio name="mule-osgi-deps" root="http://dist.codehaus.org/mule/dependencies/maven2" m2compatible="true"/>
    </chain>
  </resolvers>
</ivysettings>{code}
The relevant error output is:
{code}:: problems summary ::
:::: WARNINGS
module not found: org.safehaus.jug#jug;2.0.0-osgi
==== codehaus: tried
http://repository.codehaus.org/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.pom
-- artifact org.safehaus.jug#jug;2.0.0-osgi!jug.jar:
http://repository.codehaus.org/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.jar
==== ibiblio: tried
http://repo1.maven.org/maven2/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.pom
-- artifact org.safehaus.jug#jug;2.0.0-osgi!jug.jar:
http://repo1.maven.org/maven2/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.jar
==== java.net2: tried
http://download.java.net/maven/2/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.pom
-- artifact org.safehaus.jug#jug;2.0.0-osgi!jug.jar:
http://download.java.net/maven/2/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.jar
==== mule-osgi-deps: tried
http://dist.codehaus.org/mule/dependencies/maven2/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.pom
-- artifact org.safehaus.jug#jug;2.0.0-osgi!jug.jar:
http://dist.codehaus.org/mule/dependencies/maven2/org/safehaus/jug/jug/2.0.0-osgi/jug-2.0.0-osgi.jar
::::::::::::::::::::::::::::::::::::::::::::::
:: UNRESOLVED DEPENDENCIES ::
::::::::::::::::::::::::::::::::::::::::::::::
:: org.safehaus.jug#jug;2.0.0-osgi: not found
::::::::::::::::::::::::::::::::::::::::::::::{code}
In the generated org.mule/mule-core/ivy-2.2.0.xml the following line is generated as
{code}<dependency org="org.safehaus.jug" name="jug" rev="2.0.0-osgi" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)">
  <artifact name="jug" type="jar" ext="jar" conf="" m:classifier="asl"/>
</dependency>{code}
but it works if {{m:classifier}} is moved to the {{dependency}} element:
{code}<dependency org="org.safehaus.jug" name="jug" rev="2.0.0-osgi" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)" m:classifier="asl">
  <artifact name="jug" type="jar" ext="jar" conf=""/>
</dependency>{code}

Wrong serialisation order of elements when using Aegis databinding.
In my current case I have
* an abstract Java class "AbstractDatabaseObject" defining a property "id".
* an abstract Java class "Credential" which extends "AbstractDatabaseObject" and defines a property "name"
* a concrete Java class "UsernamePasswordCredential" which extends "Credential" and defines properties "username" and "password"
All schemas are created to my greatest satisfaction (great job!). Unfortunately, the serialisation order when using Aegis databinding (I don't know if it's the same using others) is the wrong way around. Instead of getting:
<cred> <id>306ce816-01b7-11de-8d92-8d4df6b73eb1</id> <name>dsfgsdfgs</name> <password>dsfsfdg</password> <username>sdsdfgsdfg</username> </cred>
I get:
<cred> <password>dsfsfdg</password> <username>sdsdfgsdfg</username> <name>dsfgsdfgs</name> <id>306ce816-01b7-11de-8d92-8d4df6b73eb1</id> </cred>
Which my Flex client complains about. I'll try to whip up a patch fixing this issue today.

remove locking in zk_hashtable.c or add locking in collect_keys()
From a review of zk_hashtable.c it appears to me that all functions which manipulate the hashtables are called from the IO thread, and therefore any need for locking is obviated. If I'm wrong about that, then I think at a minimum collect_keys() should acquire a lock in the same manner as collect_session_watchers(). Both iterate over hashtable contents (in the latter case using copy_table()). However, from what I can see, the only function (besides the init/destroy functions used when creating a zhandle_t) called from the completion thread is deliverWatchers(), which simply iterates over a "delivery" list created from the hashtables by collectWatchers(). The activateWatcher() function contains comments which describe it being called by the completion thread, but in fact it is called by the IO thread in zookeeper_process(). I believe all calls to collectWatchers(), activateWatcher(), and collect_keys() are made by the IO thread in zookeeper_interest(), zookeeper_process(), check_events(), send_set_watches(), and handle_error(). Note that queue_session_event() is aliased as PROCESS_SESSION_EVENT, but appears only in handle_error() and check_events(). Also note that handle_error() is called only in zookeeper_process() and handle_socket_error_msg(), which is used only by the IO thread, so far as I can see.

chukwa alert configuration should be loaded from CHUKWA_CONF_DIR
chukwa-daemon.sh is expecting alert.conf at CHUKWA_HOME/conf/alert.conf, but this should be changed to CHUKWA_CONF_DIR. This change will match the recent changes to the Chukwa config rpm.

Empty action not applied
workaround for AXIS2-4264

hadoop-daemon isn't compatible after HADOOP-4868
The CLI changed for hadoop-daemon.sh in an incompatible way. It now requires the sub-system name in the CLI.

input buffer reading in the REST interface does not correctly clear the character buffer each iteration
When reading the input buffer in the REST interface, the character buffer is not cleared on each iteration of the loop. This can cause malformed data to be read from the input stream in cases where the input is greater than 640 characters. See lines numbered 366-376 in org.apache.hadoop.hbase.rest.Dispatcher.java. I have prepared a patch for this.

hfile meta block handling bugs
HFile doesn't handle 'get meta block' when there are no meta blocks. It throws an unhelpful exception "meta index not loaded", which is not the case. No meta blocks = no meta index. It should return null instead.
Additionally, hfile doesn't even get all meta names properly, due to the incorrect use of the file's comparator instead of just a bytes comparator in the index. This is manifested by NPEs in some tests.

Reduce tasks are stuck waiting for map outputs when none are in progress
When the JT is restarted several times, a situation is encountered where the reduce tasks are stuck forever waiting for map outputs. However, the map is 100% complete and none of the map tasks are in progress. The reduce tasks wait infinitely.

Camel Spring configuration doesn't support scanning the SpringRouteBuilder
Here is the mail thread which talks about it. http://www.nabble.com/Error%3A-This-SpringBuilder-is-not-being-used-with-a-SpringCamelContext-and-there-is-no-applicationContext-property-configured-to22326547s22882.html

commandButton, panelAccordion and processChoiceBar have limited functionality in non-JavaScript mobile browsers.
Currently, we don't support the following in non-JavaScript mobile browsers: 1) the icon attribute of <tr:commandButton>, 2) the discloseNone attribute of <tr:panelAccordion>, 3) the drop-down of <tr:processChoiceBar>

Can not load the QueueBrowserStrategy in OSGI environment
Here is the mail thread which discusses it. http://www.nabble.com/Classloading-and-OSGI-to22303475.html#a22303475

Schema Browser view is not brought to front when using the 'Open Schema Browser' menu item while the view is already opened but not the frontmost view
Here's how to reproduce the bug:
- Connect to an LDAP connection
- Open the Schema Browser
- Open an entry in the Entry Editor.
- Leave the Entry Editor as the frontmost window
- Right-click on the connection and select "Open Schema Browser". The Schema Browser view is not brought to front.

Verification/Decryption failure with a DN String from a different provider
The fix for WSS-86: https://issues.apache.org/jira/browse/WSS-86 introduced another problem. If BouncyCastle is used to load a cert, the X509Certificate object then has a DN with the components reversed, compared to Sun's X509Certificate implementation, and this is causing problems on the processing side.

The resolve in workspace is being evicted by transitive dependencies
The resolve process of some Eclipse project A depending on some other project B via a "latest" dependency in the ivy.xml works correctly. But as soon as one of the dependencies of A has a dependency on a released version of B (in a classical repository), that released version is considered higher than the version in the Eclipse workspace, and then the Eclipse dependency gets evicted. We should then have a latest strategy that considers "latest" or "working@" versions greater than non-"latest" versions.

Problem with IndexWriter.mergeFinish
I'm getting a (very) infrequent assert in IndexWriter.mergeFinish from TestIndexWriter.testAddIndexOnDiskFull. The problem occurs during the rollback when the merge hasn't been registered. I'm not 100% sure this is the correct fix, because it's such an infrequent event.
{code:java}
final synchronized void mergeFinish(MergePolicy.OneMerge merge) throws IOException {
  // Optimize, addIndexes or finishMerges may be waiting
  // on merges to finish.
  notifyAll();
  if (merge.increfDone)
    decrefMergeSegments(merge);
  assert merge.registerDone;
  final SegmentInfos sourceSegments = merge.segments;
  final int end = sourceSegments.size();
  for(int i=0;i<end;i++)
    mergingSegments.remove(sourceSegments.info(i));
  mergingSegments.remove(merge.info);
  merge.registerDone = false;
}
{code}
Should be something like:
{code:java}
final synchronized void mergeFinish(MergePolicy.OneMerge merge) throws IOException {
  // Optimize, addIndexes or finishMerges may be waiting
  // on merges to finish.
  notifyAll();
  if (merge.increfDone)
    decrefMergeSegments(merge);
  if (merge.registerDone) {
    final SegmentInfos sourceSegments = merge.segments;
    final int end = sourceSegments.size();
    for(int i=0;i<end;i++)
      mergingSegments.remove(sourceSegments.info(i));
    mergingSegments.remove(merge.info);
    merge.registerDone = false;
  }
}
{code}

JobInProgress.obtainTaskCleanupTask() throws an ArrayIndexOutOfBoundsException

MMC contains hardcoded paths; host, port and context where the MMC expects Ode should be derived from the deployment
Host, port and context are currently hard-coded. This means that the MMC stops working when ODE is deployed under a context name other than "ode" or Tomcat is set up on a different port. In addition, hard-coded paths cause trouble when the console is accessed from other machines, due to XSS protections in browsers. I think the best way to fix this is to use relative URLs, as this should solve all the problems stated above.

Improve logging use when running tests
Currently, the build displays a lot of useless logging messages; see [1]. It would be great to configure JUL correctly for the tests, so developers don't think that exceptions are being thrown (a sketch follows below). [1] http://hudson.zones.apache.org/hudson/view/Shindig/job/Shindig/231/consoleText

Management object IDs out of sync on cluster
Due to the update session not using an id from the exclusive range (which in turn was a regression from a lost update as a result of the dump->update renaming in r737971).

ESB hangs on exit when destroyApplicationContextOnShutdown is set to true and the broker's persistent storage fails
When you configure ActiveMQ to use a JDBC persistent store such as MySQL and bounce or shut down the database, ServiceMix will hang before exiting if you have systemExitOnShutdown set to false and destroyApplicationContextOnShutdown set to true. While you can set systemExitOnShutdown to true to get around this problem, this workaround cannot be used when ServiceMix is embedded in a container such as Tomcat or JBoss, as the call to System.exit() would bring down the whole application server. This issue appears to have been introduced post 3.2.3. I'll attach the configuration used to reproduce this issue, along with a patch that fixes this test case. To reproduce the issue just use my configuration, start up ServiceMix and then bounce the database. Without the patch in place ServiceMix won't exit properly.

New Axis2 rs spec import
Axis2 dependencies have changed, adding
<dependency>
  <groupId>javax.ws.rs</groupId>
  <artifactId>jsr311-api</artifactId>
  <type>jar</type>
</dependency>
I'm hoping Jarek can figure out if we want/need to include this.
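Regarding the Shindig "Improve logging use when running tests" report above: a minimal sketch of one way to configure JUL for tests, raising the root logger threshold so expected exceptions are not dumped to the build console (the class name is illustrative, not the actual Shindig change):
{code:java}
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public abstract class QuietLoggingTestBase {
    static {
        // Silence everything below WARNING for the whole test JVM.
        Logger root = Logger.getLogger("");
        root.setLevel(Level.WARNING);
        for (Handler h : root.getHandlers()) {
            h.setLevel(Level.WARNING);
        }
    }
}
{code}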
Several UI Problems with IE
There are several UI problems with IE:
- Buttons in lists (bundles, components) do not work
- Buttons are not displayed next to each other, but beneath each other
- Save of configurations does not work properly

Informal parameters have started to overwrite previously rendered attributes
Since Tapestry 3 days I have had the habit of having my action links contain an href="#" attribute. That worked until recently. With one of the 5.1 SNAPSHOTS it stopped working. The links are now simply dead.

shutdown with incorrect permission on log files shows java.lang.NullPointerException at org.apache.derby.impl.store.raw.log.LogToFile.flush(LogToFile.java:3964). Should give a better message.
I recently saw a case where a user was seeing the following error in the derby.log when trying to shut down their database.
New exception raised during cleanup null java.lang.NullPointerException
at org.apache.derby.impl.store.raw.log.LogToFile.flush(LogToFile.java:3964)
at org.apache.derby.impl.store.raw.log.LogToFile.flush(LogToFile.java:1781)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.flush(BaseDataFileFa
at org.apache.derby.impl.store.raw.data.CachedPage.writePage(CachedPage.java:761
at org.apache.derby.impl.store.raw.data.CachedPage.clean(CachedPage.java:610)
at org.apache.derby.impl.services.cache.ConcurrentCache.cleanAndUnkeepEntry(Conc
at org.apache.derby.impl.services.cache.ConcurrentCache.cleanCache(ConcurrentCac
at org.apache.derby.impl.services.cache.ConcurrentCache.cleanAll(ConcurrentCache
at org.apache.derby.impl.services.cache.ConcurrentCache.shutdown(ConcurrentCache
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(BaseDataFileFac
at org.apache.derby.impl.services.monitor.TopService.stop(TopService.java:405)
at org.apache.derby.impl.services.monitor.TopService.shutdown(TopService.java:34
at org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(BaseMonitor.java:
at org.apache.derby.impl.db.DatabaseContextImpl.cleanupOnError(DatabaseContextIm
at org.apache.derby.iapi.services.context.ContextManager.cleanupOnError(ContextM
at org.apache.derby.impl.jdbc.TransactionResourceImpl.cleanupOnError(Transaction
at org.apache.derby.impl.jdbc.EmbedConnection.<init>(EmbedConnection.java:584)
at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Driver40.java:68)
at org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:238)
at org.apache.derby.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:119)
at java.sql.DriverManager.getConnection(DriverManager.java:316)
at java.sql.DriverManager.getConnection(DriverManager.java:273)
It ended up that some of the log files did not have proper write permissions because some operation on the database had been performed by root. They had subsequently deleted their db.lck file, so the database did not boot READ ONLY as it would have if the root-owned db.lck file still existed, and the symptom was that they got this error on shutdown. Clearly this was user error, but it would have been good if we gave a better error message. To reproduce on Linux: as a user with umask 0022, run the program java MakeDB; this will make the database wombat and create a table. su to root with umask 0022 and run the program to insert data, then remove the db.lck file: java InsertALot; rm wombat/db.lck. Go back to the original user and run the program java ConnectAndShutdown. The application gets the normal shutdown exception, but if you look in derby.log you will see the exception.
java.lang.NullPointerException
at org.apache.derby.impl.store.raw.log.LogToFile.flush(LogToFile.java:3964)
...
I will attach the files.

ws-sec 10 server interop tests failing
The ws-sec 10 tests haven't been set up correctly. They need more configuration to function correctly.

metrics aggregation is incorrect in database
A few problems with the aggregation SQL statements: HDFS throughput should be calculated by doing two-level aggregation: first, calculate the rate for Hadoop datanode metrics with accumulated values; second, sum up all datanode rates to provide a single number representing the current cluster performance. Disable HOD job utilization measurement - the data provides a rough view of the cluster performance but is mostly inaccurate. Disable user utilization measurement generated from HOD jobs - the data is generated from HOD job metrics, and it is mostly inaccurate.

PolicyContext handler data objects are never released
PolicyContext.setHandlerData() sets a given object on the thread. In Geronimo Jetty and Tomcat code this is called to set the HttpServletRequest object as the policy handler data. The problem is that there is no call to unset the handler data object from the thread. That causes the HttpServletRequest objects (and their references) to stay in memory longer than necessary.

Fix definition of javax.persistence.query.timeout property
This was originally reported by Pinaki in OPENJPA-849. It is being moved to this new JIRA. Here's Pinaki's original comment:
queryTimeout.setLoadKey("javax.persistence.query.timeout");
queryTimeout.setDefault("-1");
queryTimeout.set(-1);
queryTimeout.setDynamic(true);
does not seem kosher for the following reasons:
1. loadKey is the key with which a property is loaded from configuration artifacts. At this point of execution, no property has been *actually* loaded, they are merely being declared to exist. Hence we should not be setting the load key.
2. The configuration declares a Value, but does not assign its value. So setting its value to -1 does not look alright. Setting the default value is OK.
These issues gain significance in light of the fact that the configuration's hashcode is the key to a factory in JNDI, and computation of the hashcode depends on the actual value of the Values. As an extreme example, assume two Configurations C1 and C2, nearly identical but differing *only* in their query.timeout value. The requirement is that the hash codes for C1 and C2 must not be equal. And that is what Configuration.hashCode() ensures. But, because we are setting query timeout to -1 (that is not what the user's p.xml sets) and it is marked as dynamic, in both cases the Configuration hashcode will treat the query.timeout value as -1 and will end up computing the same hashcode for C1 and C2.

Maven files in jempbox do not work in Eclipse.
When I tried to use the Maven files in JempBox, I got errors in the test code that it could not find the junit packages. Changing the pom file to better specify the test and source directories fixed it.
--- jempbox/trunk/pom.xml (revision 723007)
+++ jempbox/trunk/pom.xml (working copy)
@@ -36,7 +36,8 @@
 <description>JempBox is an open source Java library that implements Adobe's XMP(TM) specification.</description>
 <build>
-  <sourceDirectory>src</sourceDirectory>
+  <sourceDirectory>src/org</sourceDirectory>
+  <testSourceDirectory>src/test</testSourceDirectory>
 </build>
 <dependencies>

No error message displays in console when failing to create a jms resource
1. Create a console.jms/org.ibm.samples/1.0/rar resource from the admin console.
2. Recreate a console.jms/org.ibm.samples/1.0/rar resource; there is no error in the console, but there is an error message in geronimo.out: Deployer operation failed: Module console.jms/org.ibm.samples/1.0/rar already exists in the server. Try to undeploy it first or use the redeploy command.
3. So no error message is shown when creating a jms resource fails.

Persistence Exception is not visible/lost for client.
I am trying an insert on a table. The entity class is wrongly annotated; one column was renamed in the table. Then the following situation occurs. The call to persist(entity) succeeds; no exception is thrown. On leaving the EJB container and returning to Tomcat a commit is performed (it's a managed datasource, so the container performs the commit). This leads to the insert on the database. The insert fails and a rollback is performed. On return to the JSF bean no exception can be seen by the bean. In the same class I have a query method. If I replace the call to persist with the call to the query method, everything works OK: the exception is thrown and is visible at the client side. This is the Geronimo console output (the underlying German PostgreSQL error below says that column letzte_benutzer_gruppe of relation benutzer does not exist). The last line comes from the JSF bean, which reports a successful insert.
11:58:04,390 WARN [Transaction] Unexpected exception from beforeCompletion; transaction will roll back
<openjpa-1.0.1-r420667:592145 fatal general error> org.apache.openjpa.persistence.PersistenceException: The transaction has been rolled back. See the nested exceptions for details on the errors that occurred.
at org.apache.openjpa.kernel.BrokerImpl.newFlushException(BrokerImpl.java:2107)
at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:1954)
at org.apache.openjpa.kernel.BrokerImpl.flushSafe(BrokerImpl.java:1852)
at org.apache.openjpa.kernel.BrokerImpl.beforeCompletion(BrokerImpl.java:1770)
at org.apache.geronimo.transaction.manager.TransactionImpl.beforeCompletion(TransactionImpl.java:514)
at org.apache.geronimo.transaction.manager.TransactionImpl.beforeCompletion(TransactionImpl.java:499)
at org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:400)
at org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:257)
at org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:245)
at org.apache.openejb.core.transaction.TransactionPolicy.commitTransaction(TransactionPolicy.java:141)
at org.apache.openejb.core.transaction.TxRequired.afterInvoke(TxRequired.java:75)
at org.apache.openejb.core.stateless.StatelessContainer._invoke(StatelessContainer.java:233)
at org.apache.openejb.core.stateless.StatelessContainer._invoke(StatelessContainer.java:188)
at org.apache.openejb.core.stateless.StatelessContainer.invoke(StatelessContainer.java:165)
at org.apache.openejb.core.ivm.EjbObjectProxyHandler.businessMethod(EjbObjectProxyHandler.java:217)
at org.apache.openejb.core.ivm.EjbObjectProxyHandler._invoke(EjbObjectProxyHandler.java:77)
at org.apache.openejb.core.ivm.BaseEjbProxyHandler.invoke(BaseEjbProxyHandler.java:321)
at org.apache.openejb.util.proxy.Jdk13InvocationHandler.invoke(Jdk13InvocationHandler.java:49)
at $Proxy75.anlegenBenutzer(Unknown Source)
at de.nrw.hagen.ggrz.benutzer.controler.BenutzerControler.anlegenBenutzer(BenutzerControler.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.el.parser.AstValue.invoke(AstValue.java:131)
at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276)
at org.apache.jasper.el.JspMethodExpression.invoke(JspMethodExpression.java:68)
at javax.faces.component._MethodExpressionToMethodBinding.invoke(_MethodExpressionToMethodBinding.java:75)
at org.apache.myfaces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:54)
at javax.faces.component.UICommand.broadcast(UICommand.java:121)
at javax.faces.component.UIViewRoot._broadcastForPhase(UIViewRoot.java:292)
at javax.faces.component.UIViewRoot.process(UIViewRoot.java:209)
at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:117)
at org.apache.myfaces.lifecycle.InvokeApplicationExecutor.execute(InvokeApplicationExecutor.java:32)
at org.apache.myfaces.lifecycle.LifecycleImpl.executePhase(LifecycleImpl.java:103)
at org.apache.myfaces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:76)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:148)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.geronimo.tomcat.valve.DefaultSubjectValve.invoke(DefaultSubjectValve.java:56)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525)
at org.apache.geronimo.tomcat.GeronimoStandardContext$SystemMethodValve.invoke(GeronimoStandardContext.java:396)
at org.apache.geronimo.tomcat.valve.GeronimoBeforeAfterValve.invoke(GeronimoBeforeAfterValve.java:47)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:563)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:595)
Caused by: <openjpa-1.0.1-r420667:592145 nonfatal general error> org.apache.openjpa.persistence.PersistenceException: FEHLER: Spalte letzte_benutzer_gruppe von Relation benutzer existiert nicht {prepstmnt 17230170 INSERT INTO vesuv.benutzer (id, anzahl_anmeldeversuche, anzahl_anmeldungen, benutzer_kennung, datum_letzte_passwort_aenderung, email_anlage, historie_fk, ist_gesperrt, ist_gesperrt_seit, kostenbefreiung_online_auskunft, letzte_benutzer_gruppe, letzter_anmeldeversuch, passwort_fehlversuche_zaehler, passwort_historie, passwort_sha256hash, passwort_wechsel_erst_anmeldung, person_info, sperrgrund, verknuepft_mit, zuletzt_angemeldet_am) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [params=(long) 31, (long) 0, (long) 0, (String) a, (Timestamp) 1970-01-01 01:00:00.0, (boolean) false, (long) 0, (boolean) false, (Timestamp) 1970-01-01 01:00:00.0, (boolean) false, (long) 0, (Timestamp) 3908-03-21 10:22:00.0, (long) 0, (String) nixx, (String) b, (boolean) false, (long) 0, (String) keiner, (long) 1, (Timestamp) 3908-03-21 10:22:00.0]} [code=0, state=42703] FailedObject: de.nrw.hagen.ggrz.bv.benutzer.db.BenutzerPAO@8b394
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:3938)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:97)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:67)
at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushInternal(PreparedStatementManagerImpl.java:108)
at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flush(PreparedStatementManagerImpl.java:73)
at org.apache.openjpa.jdbc.kernel.OperationOrderUpdateManager.flushPrimaryRow(OperationOrderUpdateManager.java:203)
at org.apache.openjpa.jdbc.kernel.OperationOrderUpdateManager.flush(OperationOrderUpdateManager.java:89)
at org.apache.openjpa.jdbc.kernel.AbstractUpdateManager.flush(AbstractUpdateManager.java:89)
at org.apache.openjpa.jdbc.kernel.AbstractUpdateManager.flush(AbstractUpdateManager.java:72)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.flush(JDBCStoreManager.java:514)
at org.apache.openjpa.kernel.DelegatingStoreManager.flush(DelegatingStoreManager.java:130)
... 53 more
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: FEHLER: Spalte letzte_benutzer_gruppe von Relation benutzer existiert nicht {prepstmnt 17230170 INSERT INTO vesuv.benutzer (id, anzahl_anmeldeversuche, anzahl_anmeldungen, benutzer_kennung, datum_letzte_passwort_aenderung, email_anlage, historie_fk, ist_gesperrt, ist_gesperrt_seit, kostenbefreiung_online_auskunft, letzte_benutzer_gruppe, letzter_anmeldeversuch, passwort_fehlversuche_zaehler, passwort_historie, passwort_sha256hash, passwort_wechsel_erst_anmeldung, person_info, sperrgrund, verknuepft_mit, zuletzt_angemeldet_am) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [params=(long) 31, (long) 0, (long) 0, (String) a, (Timestamp) 1970-01-01 01:00:00.0, (boolean) false, (long) 0, (boolean) false, (Timestamp) 1970-01-01 01:00:00.0, (boolean) false, (long) 0, (Timestamp) 3908-03-21 10:22:00.0, (long) 0, (String) nixx, (String) b, (boolean) false, (long) 0, (String) keiner, (long) 1, (Timestamp) 3908-03-21 10:22:00.0]} [code=0, state=42703]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:192)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$800(LoggingConnectionDecorator.java:57)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeUpdate(LoggingConnectionDecorator.java:858)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:269)
at org.apache.openjpa.lib.jdbc.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:269)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement.executeUpdate(JDBCStoreManager.java:1363)
at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushInternal(PreparedStatementManagerImpl.java:97)
... 60 more
[de.nrw.hagen.ggrz.benutzer.controler.BenutzerControler] >> $$Success from neuer Benutzer = true

Missing prefix declarations in literal xml causes NAMESPACE_ERR: An attempt is made to create...
Hey guys, I commented on https://issues.apache.org/jira/browse/ODE-536 which made no sense at all. I created a copy activity below:
<bpws:copy>
  <bpws:from>
    <bpws:literal>
      <supercalifragilisticexpialidocious>
        <complexStuff xmlns="http://ode/bpel/unit-test-diff">
          <mytext>Initialised-supercalifragilisticexpialidocious</mytext>
        </complexStuff>
      </supercalifragilisticexpialidocious>
    </bpws:literal>
  </bpws:from>
  <bpws:to variable="supercalifragilisticexpialidocious"/>
</bpws:copy>
Note that the xml in the literal has no xml prefix declarations but is still valid. This seems to cause the following exception:
ERROR - GeronimoLog.error(108) | Error while executing transaction org.apache.ode.bpel.iapi.Scheduler$JobProcessorException: java.lang.RuntimeException: org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or change an object in a way which is incorrect with regard to namespaces.
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:409)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:391)
at org.apache.ode.scheduler.simple.SimpleScheduler$4$1.call(SimpleScheduler.java:386)
at org.apache.ode.scheduler.simple.SimpleScheduler$4$1.call(SimpleScheduler.java:380)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:208)
at org.apache.ode.scheduler.simple.SimpleScheduler$4.call(SimpleScheduler.java:379)
at org.apache.ode.scheduler.simple.SimpleScheduler$4.call(SimpleScheduler.java:376)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or change an object in a way which is incorrect with regard to namespaces.
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:464)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:868)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:206)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:221)
at org.apache.ode.bpel.engine.BpelProcess.handleWorkEvent(BpelProcess.java:393)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:399)
... 11 more
Caused by: org.w3c.dom.DOMException: NAMESPACE_ERR: An attempt is made to create or change an object in a way which is incorrect with regard to namespaces.
at org.apache.xerces.dom.CoreDocumentImpl.checkDOMNSErr(Unknown Source)
at org.apache.xerces.dom.ElementNSImpl.setName(Unknown Source)
at org.apache.xerces.dom.ElementNSImpl.<init>(Unknown Source)
at org.apache.xerces.dom.CoreDocumentImpl.createElementNS(Unknown Source)
at org.apache.ode.utils.DOMUtils.cloneNode(DOMUtils.java:1155)
at org.apache.ode.bpel.runtime.ASSIGN.replaceElement(ASSIGN.java:490)
at org.apache.ode.bpel.runtime.ASSIGN.copy(ASSIGN.java:416)
at org.apache.ode.bpel.runtime.ASSIGN.run(ASSIGN.java:81)
at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
... 17 more

WidgetWorker outputting empty image tags
WidgetWorker.makeHyperlinkString is outputting empty image tags <img src = ""/>

Utf8StorageConverter.java does not always produce NULLs when data is malformed
It does so for scalar types but not for complex types and not for the fields inside of complex types. This is because it uses different code to parse scalar types by themselves and scalar types inside a complex type. It should really use the same (its own) code for both. The code it currently uses is inside TextDataParser.jjt and is also used to parse constants, so we need to be careful if we want to make changes to it.
MethodQL parameter passing broken
OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
OpenJPAQuery query = oem.createQuery("openjpa.MethodQL", "de.logentis.openjpa.LogentisMethodQL.blabla");
query.setResultClass(DP_PLZ_DA.class);
query.setParameter(1, "Fred").setParameter(2, "Lucas");
This results in an empty parameter Map in the LogentisMethodQL.blabla() method. Even worse, when doing parameter passing as stated in the docs Chapter 9 / 5:
query.setParameter("first", "Fred").setParameter("last", "Lucas");
an exception is thrown. In fact, MethodQL is completely broken when it comes to parameters at this point.

ruby client timeout does not affect connect()
The Ruby library does not wrap client connect() calls in a timeout select, so a Ruby Thrift client could be blocked in connect() indefinitely.

REST Service not being called if no parts
If a REST service which uses a URI such as the following is deployed:
http://test.com/service/mxml?action=getStatus
the service will not be called at execution time. A workaround is to introduce a mapping:
http://test.com/service/mxml?action={tempFix}
and then map tempFix to getStatus with the mapper. However, it is valid to have a REST service taking no attributes, so this should work without the workaround.

Casting a field removes its alias.
Given a script like:
{code}
a = load 'myfile' as (x, y);
b = foreach a generate (int)x, (double)y;
c = group b by x;
{code}
you will get an error that x is an unknown alias. The cast operator is not carrying through the alias. It should.

When a shutdown is requested, stop scanning META regions immediately
During shutdown of the cluster, halfway through quiescing the servers there is a META scan in the master. The regions from servers whose leases are already cancelled show up as invalid. (72.34.249.208 is hosting META)
{code}
2009-01-31 10:25:42,571 INFO org.apache.hadoop.hbase.master.HMaster: Cluster shutdown requested.
Starting to quiesce servers 2009-01-31 10:25:45,868 INFO org.apache.hadoop.hbase.master.ServerManager: Cancelling lease for 72.34.249.211:60020 2009-01-31 10:25:45,868 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.211:60020: MSG_REPORT_EXITING -- lease cancelled 2009-01-31 10:25:47,480 INFO org.apache.hadoop.hbase.master.ServerManager: Cancelling lease for 72.34.249.216:60020 2009-01-31 10:25:47,480 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.216:60020: MSG_REPORT_EXITING -- lease cancelled 2009-01-31 10:25:47,840 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.210:60020 quiesced 2009-01-31 10:25:47,944 INFO org.apache.hadoop.hbase.master.ServerManager: Cancelling lease for 72.34.249.215:60020 2009-01-31 10:25:47,944 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.215:60020: MSG_REPORT_EXITING -- lease cancelled 2009-01-31 10:25:48,403 INFO org.apache.hadoop.hbase.master.ServerManager: Cancelling lease for 72.34.249.213:60020 2009-01-31 10:25:48,403 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.213:60020: MSG_REPORT_EXITING -- lease cancelled 2009-01-31 10:25:49,378 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.218:60020 quiesced 2009-01-31 10:25:50,465 INFO org.apache.hadoop.hbase.master.ServerManager: Cancelling lease for 72.34.249.214:60020 2009-01-31 10:25:50,465 INFO org.apache.hadoop.hbase.master.ServerManager: Region server 72.34.249.214:60020: MSG_REPORT_EXITING -- lease cancelled 2009-01-31 10:25:59,531 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 72.34.249.218:60020} 2009-01-31 10:25:59,544 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of activitydupehash,,1229364212541 is not valid; Server '72.34.249.214:60020' unknown. 2009-01-31 10:25:59,545 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of api,,1229364235220 is not valid; Server '72.34.249.216:60020' unknown. 2009-01-31 10:25:59,552 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of apps,,1229364222879 is not valid; Server '72.34.249.215:60020' unknown. 2009-01-31 10:25:59,552 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of assigners,,1229364037757 is not valid; Server '72.34.249.214:60020' unknown. 2009-01-31 10:25:59,554 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of canoncache,,1229364041955 is not valid; Server '72.34.249.215:60020' unknown. 2009-01-31 10:25:59,555 DEBUG org.apache.hadoop.hbase.master.BaseScanner: Current assignment of chunks,,1229390225893 is not valid; Server '72.34.249.211:60020' unknown. {code} Shutdown then continues as the last servers are quiesced, but at the same time the Master expires the lease on the regionserver that was hosting META and that it just scanned. It then starts to replay the logs for that regionserver in the middle of the shutdown. {code} 2009-01-31 10:25:59,799 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scan of 512 row(s) of meta region {regionname: .META.,,1, startKey: <>, server: 72.34.249.218:60020} complete 2009-01-31 10:25:59,799 INFO org.apache.hadoop.hbase.master.BaseScanner: All 1 .META. 
region(s) scanned 2009-01-31 10:26:59,530 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 72.34.249.218:60020} 2009-01-31 10:26:59,720 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scan of 512 row(s) of meta region {regionname: .META.,,1, startKey: <>, server: 72.34.249.218:60020} complete 2009-01-31 10:26:59,720 INFO org.apache.hadoop.hbase.master.BaseScanner: All 1 .META. region(s) scanned 2009-01-31 10:27:40,374 INFO org.apache.hadoop.hbase.master.ServerManager: 72.34.249.218:60020 lease expired 2009-01-31 10:27:40,375 DEBUG org.apache.hadoop.hbase.master.HMaster: Processing todo: ProcessServerShutdown of 72.34.249.218:60020 2009-01-31 10:27:40,375 INFO org.apache.hadoop.hbase.master.RegionServerOperation: process shutdown of server 72.34.249.218:60020: logSplit: false, rootRescanned: false, numberOfMetaRegions: 1, onlin eMetaRegions.size(): 1 2009-01-31 10:27:40,387 INFO org.apache.hadoop.hbase.regionserver.HLog: Splitting 44 log(s) in hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020 2009-01-31 10:27:40,387 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Splitting 1 of 44: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1232996040603 2009-01-31 10:27:40,443 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Creating new log file writer for path hdfs://mb0:9000/hbase/.META./1028785192/oldlogfile.log and region .META.,,1 2009-01-31 10:27:40,575 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Creating new log file writer for path hdfs://mb0:9000/hbase/sources/671225115/oldlogfile.log and region sources,,1229364117966 2009-01-31 10:27:41,171 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Applied 100003 total edits from hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1232996040603 2009-01-31 10:27:41,173 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Splitting 2 of 44: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233093726382 2009-01-31 10:27:41,429 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Creating new log file writer for path hdfs://mb0:9000/hbase/dupehash/1607532582/oldlogfile.log and region dupehash,O<L;h,12 31779694744 2009-01-31 10:27:41,462 INFO org.apache.hadoop.hbase.master.ServerManager: 72.34.249.217:60020 lease expired 2009-01-31 10:27:41,499 INFO org.apache.hadoop.hbase.master.ServerManager: All user tables quiesced. 
Proceeding with shutdown 2009-01-31 10:27:41,499 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling root scanner to stop 2009-01-31 10:27:41,499 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling meta scanner to stop 2009-01-31 10:27:41,499 DEBUG org.apache.hadoop.hbase.master.RegionManager: meta and root scanners notified 2009-01-31 10:27:41,499 INFO org.apache.hadoop.hbase.master.RootScanner: RegionManager.rootScanner exiting 2009-01-31 10:27:41,499 INFO org.apache.hadoop.hbase.master.MetaScanner: RegionManager.metaScanner exiting 2009-01-31 10:27:41,780 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Applied 100001 total edits from hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233093726382 2009-01-31 10:27:41,781 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Splitting 3 of 44: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233101015153 2009-01-31 10:27:41,838 INFO org.apache.hadoop.hbase.master.ServerManager: 72.34.249.210:60020 lease expired 2009-01-31 10:27:41,866 INFO org.apache.hadoop.hbase.master.ServerManager: All user tables quiesced. Proceeding with shutdown 2009-01-31 10:27:41,866 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling root scanner to stop 2009-01-31 10:27:41,866 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling meta scanner to stop 2009-01-31 10:27:41,866 DEBUG org.apache.hadoop.hbase.master.RegionManager: meta and root scanners notified 2009-01-31 10:27:42,557 INFO org.apache.hadoop.hbase.master.ServerManager: 72.34.249.212:60020 lease expired 2009-01-31 10:27:42,581 INFO org.apache.hadoop.hbase.master.ServerManager: All user tables quiesced. Proceeding with shutdown 2009-01-31 10:27:42,581 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling root scanner to stop 2009-01-31 10:27:42,581 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling meta scanner to stop 2009-01-31 10:27:42,581 DEBUG org.apache.hadoop.hbase.master.RegionManager: meta and root scanners notified 2009-01-31 10:27:42,615 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Applied 100002 total edits from hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233101015153 2009-01-31 10:27:42,618 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Splitting 4 of 44: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233111791302 2009-01-31 10:27:43,356 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Applied 100001 total edits from hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233111791302 2009-01-31 10:27:43,359 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Splitting 5 of 44: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233122447841 2009-01-31 10:27:43,404 INFO org.apache.hadoop.hbase.master.ServerManager: All user tables quiesced. Proceeding with shutdown 2009-01-31 10:27:43,404 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling root scanner to stop 2009-01-31 10:27:43,404 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling meta scanner to stop 2009-01-31 10:27:43,404 DEBUG org.apache.hadoop.hbase.master.RegionManager: meta and root scanners notified 2009-01-31 10:27:43,991 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Applied 100001 total edits from hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233122447841 {code} During the log replay, a log file was missing from HDFS. Not sure why, there was a Datanode crash that could be related. 
More importantly, once it trips on the missing file it stops the replay (even though there are another 37 logs).
{code}
2009-01-31 10:27:43,992 DEBUG org.apache.hadoop.hbase.regionserver.HLog: Splitting 6 of 44: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233132556827
2009-01-31 10:27:44,022 WARN org.apache.hadoop.hbase.master.HMaster: Processing pending operations: ProcessServerShutdown of 72.34.249.218:60020
java.io.FileNotFoundException: File does not exist: hdfs://mb0:9000/hbase/log_72.34.249.218_1232996040351_60020/hlog.dat.1233132556827
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:394)
at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:679)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412)
at org.apache.hadoop.hbase.regionserver.HLog.splitLog(HLog.java:742)
at org.apache.hadoop.hbase.regionserver.HLog.splitLog(HLog.java:705)
at org.apache.hadoop.hbase.master.ProcessServerShutdown.process(ProcessServerShutdown.java:249)
at org.apache.hadoop.hbase.master.HMaster.processToDoQueue(HMaster.java:427)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:360)
2009-01-31 10:27:44,022 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling root scanner to stop
2009-01-31 10:27:44,022 DEBUG org.apache.hadoop.hbase.master.RegionManager: telling meta scanner to stop
2009-01-31 10:27:44,022 DEBUG org.apache.hadoop.hbase.master.RegionManager: meta and root scanners notified
2009-01-31 10:27:44,023 DEBUG org.apache.hadoop.hbase.RegionHistorian: Offlined
2009-01-31 10:27:44,023 INFO org.apache.hadoop.hbase.master.HMaster: Stopping infoServer
2009-01-31 10:27:44,023 INFO org.mortbay.util.ThreadedServer: Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=60010]
2009-01-31 10:27:44,026 INFO org.mortbay.http.SocketListener: Stopped SocketListener on 0.0.0.0:60010
{code}

nocamel style breaks generated services
Generated service code currently assumes that the isSet checker for the "success" field is in camel case, when it could have been generated in underscore case.

PackagedTextTemplate uses default ResourceStreamLocator first
PackagedTextTemplate:155 When loading a resource, priority should be given to the application-specific ResourceStreamLocator first, rather than using it only as a fallback when the default ResourceStreamLocator fails to find the resource. Imagine an environment where a custom resource locator has been set up to get all the resources from the project source folder (very useful during development to modify markup, CSS and JS files on the fly through the IDE). With the current behavior I have to modify the deployed version of my resource to make Wicket apply the changes on the fly. Here's my usual Application.init() method:

if( Application.DEVELOPMENT.equals( getConfigurationType() ) ){
    getResourceSettings().setResourceStreamLocator( new ResourceStreamLocator(
        new Path( new Folder( getServletContext().getRealPath( "/" ).replaceFirst( "web/", "src/" ) ) ) ) );
}

Aegis schema generation doing arrays at two levels
TypeClassInfo.getMinOccurs(QName) isn't really implemented, and getMaxOccurs(QName) doesn't even exist! In an effort to work around this, I made the caller of this check directly for an element of Array type and retrieve the bounds. However, that check does not manage to mirror the logic that decides when to use an ArrayOf... type. So it ends up generating schema with maxOccurs at both levels. Oops. The fix has to be to actually make getMinOccurs work right and add the missing getMaxOccurs. The real fun here seems to be that a parameter with maxOccurs doesn't trigger an ArrayOf type correctly. Arggh.
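A minimal sketch of the accessor shape being asked for, assuming per-element occurrence values are tracked in maps keyed by QName (the class and field names here are invented; CXF's real TypeClassInfo may store this differently):

{code}
import java.util.HashMap;
import java.util.Map;
import javax.xml.namespace.QName;

// Hypothetical sketch only -- illustrates a working getMinOccurs plus the
// missing getMaxOccurs, not CXF's actual TypeClassInfo internals.
public class OccurrenceInfo {
    private final Map<QName, Long> minOccurs = new HashMap<QName, Long>();
    private final Map<QName, Long> maxOccurs = new HashMap<QName, Long>();

    public void setOccurs(QName element, long min, long max) {
        minOccurs.put(element, Long.valueOf(min));
        maxOccurs.put(element, Long.valueOf(max));
    }

    public Long getMinOccurs(QName element) {
        return minOccurs.get(element); // null means "not specified"
    }

    public Long getMaxOccurs(QName element) {
        return maxOccurs.get(element); // the accessor the report says is missing
    }
}
{code}

With both accessors in place, the schema generator could consult one source of truth for bounds instead of re-deriving them from the Array type, which is what led to maxOccurs appearing at both levels.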
WicketSessionFilter doesn't take into account WebApplication#getSessionAttributePrefix(WebRequest)
WicketSessionFilter#init(FilterConfig), line 139, constructs the 'sessionKey' without taking into account the return value of WebApplication#getSessionAttributePrefix(WebRequest). Patch:

Index: protocol/http/servlet/WicketSessionFilter.java
===================================================================
--- protocol/http/servlet/WicketSessionFilter.java (revision 725053)
+++ protocol/http/servlet/WicketSessionFilter.java (working copy)
@@ -28,6 +28,7 @@
 import javax.servlet.http.HttpSession;
 import org.apache.wicket.Session;
+import org.apache.wicket.protocol.http.WebApplication;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -136,7 +137,8 @@
 log.debug("filterName/ application key set to " + filterName);
 }
-sessionKey = "wicket:" + filterName + ":" + Session.SESSION_ATTRIBUTE_NAME;
+WebApplication application = WebApplication.get(filterName);
+sessionKey = application.getSessionAttributePrefix(null) + Session.SESSION_ATTRIBUTE_NAME;
 if (log.isDebugEnabled())
 {

NPE thrown in replication admin page due to unhandled casting of null values
Read context here:
http://www.nabble.com/Replication-page-failure-:(-td22252018.html
http://www.nabble.com/Trunk-Replication-Page-Issue-td22249657.html

NativeFtpFile.equals does not work correctly with different casing on OS X
As discussed in this thread (http://markmail.org/message/s6csphyzcewxzvb2), NativeFtpFile.equals fails to detect that two files are equal if different case (e.g. "foo" and "FOO") is used on OS X. Thus, we fail to inhibit the deletion of the current directory in RMD.

Stream closed after statistics is updated for data transfers
We close the stream after both updating the statistics and logging a successful data transfer. The stream should instead be closed as soon as the transfer is done.

testCliDriver_udf7 fails
The org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf7 test fails. See this url for more information: http://hudson.zones.apache.org/hudson/job/Hive-trunk-h0.19/lastBuild/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_udf7/

ScalaCheck Version Broken in Scala 2.7.2
With the release of Scala 2.7.2, the default ScalaCheck version (1.3) is no longer sufficient. This version of ScalaCheck depends upon a method which no longer exists within the Scala standard library. For the sake of Google, this is the precise error message: Exception "java.lang.NoSuchMethodError: scala.Stream$cons$.apply(Ljava/lang/Object;Lscala/Function0;)Ljava/lang/Object;" raised on argument generation. The solution is to use ScalaCheck 1.4 instead of 1.3. Just to make things even more fun, Specs 1.3.1 does not work with ScalaCheck 1.4 (only with 1.3 and earlier). This problem is fixed in the (still forthcoming) 1.3.2 release, but until then, there will apparently be no running tests under Scala 2.7.2 using Specs and ScalaCheck in conjunction. Sucks to be me... This is an absolute killer for me. I use ScalaCheck quite extensively, which means I now have 300+ tests that crash where before they ran fine.
If we can't get a release which either increments the hard-coded version or provides a configurable option, could we at least get a patch which can be applied manually to do the same?

CombinedConfiguration: java.util.NoSuchElementException after reload of enclosed SubnodeConfiguration/XMLConfiguration
Steps to repeat:
- create an XMLConfiguration based on an XML config file (xml file content e.g.: <config><foo><bar>0</bar></foo></config>)
- assign a file reloading strategy to the XMLConfiguration
- create a SubnodeConfiguration based on this XMLConfiguration (prefix e.g.: 'foo')
- create a CombinedConfiguration
- add the SubnodeConfiguration to this CombinedConfiguration
- get a configuration value from the CombinedConfiguration (e.g. 'bar') -> OK, this works
- touch the underlying xml configuration
- try to get a configuration value from the CombinedConfiguration again (e.g. 'bar') -> java.util.NoSuchElementException
See also attached TestCase.

Archetypes will not output debug level logging
The archetypes work great but they won't output debug level logging because they don't include a dependency on Log4J. By simply adding the dependency on Log4J, outputting debug level logging is possible.

NioDatagramConnector.newHandle leaks DatagramChannels on bind exception
This method does not close the DatagramChannel on an exception in bind, and the reference is not recoverable by its caller.

@Override
protected DatagramChannel newHandle(SocketAddress localAddress) throws Exception {
    DatagramChannel ch = DatagramChannel.open();
    if (localAddress != null) {
        ch.socket().bind(localAddress);
    }
    return ch;
}

Click Calendar should destroy popup on close
The calendar hides the popup when it is closed. However, when the popup is displayed again it creates a new instance. This causes a leak because the hidden instances are never removed from the DOM. The fix is to destroy the calendar when the popup closes instead of hiding it.

wsdl2java -db xmlbeans can't generate right wrapped class
Here is part of the SEI which is generated by wsdl2java -db xmlbeans
{code}
@ResponseWrapper(localName = "greetMeResponse", targetNamespace = "http://apache.org/hello_world_soap_http/types", className = "org.apache.helloWorldSoapHttp.types.GreetMeResponseDocument")
@RequestWrapper(localName = "greetMe", targetNamespace = "http://apache.org/hello_world_soap_http/types", className = "org.apache.helloWorldSoapHttp.types.GreetMeDocument")
@WebResult(name = "responseType", targetNamespace = "http://apache.org/hello_world_soap_http/types")
@WebMethod
public org.apache.xmlbeans.XmlString greetMe(
    @WebParam(name = "requestType", targetNamespace = "http://apache.org/hello_world_soap_http/types")
    org.apache.helloWorldSoapHttp.types.MyStringType requestType
);
{code}
The tool should map the XmlBeans built-in types to the natural Java classes, such as org.apache.xmlbeans.XmlString to String.

REST support not working in synapse 1.2
When submitting a GET request to a synapse proxy service that points to a REST webservice AND Content-Type is not text/xml or application/xml the error: "Cannot create DocumentElement without destination EPR" comes out.
(e.g. try to access the proxy service with a web browser and it will do this)

The configuration is as follows:

<definitions xmlns="http://ws.apache.org/ns/synapse">
  <proxy name="Forwarder">
    <target>
      <endpoint>
        <address uri="http://localhost:11111/MyService/echo" format="get"/>
      </endpoint>
      <outSequence>
        <send/>
      </outSequence>
    </target>
  </proxy>
</definitions>

If the message is sent with the 'correct' content-type, for example:

curl -G -H "Content-Type: text/xml" http://localhost:8280/Forwarder/mediate

Synapse gives me the following output:

2008-06-18 15:10:59,714 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG ServerHandler HTTP connection [/0:0:0:0:0:0:0:1%0:56697]: Connected
2008-06-18 15:10:59,732 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG ServerHandler HTTP connection [/0:0:0:0:0:0:0:1%0:56697]: GET /soap/Forwarder/mediate HTTP/1.1
2008-06-18 15:10:59,780 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG headers >> GET /soap/Forwarder/mediate HTTP/1.1
2008-06-18 15:10:59,780 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG headers >> User-Agent: curl/7.16.3 (powerpc-apple-darwin9.0) libcurl/7.16.3 OpenSSL/0.9.7l zlib/1.2.3
2008-06-18 15:10:59,780 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG headers >> Host: localhost:8280
2008-06-18 15:10:59,780 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG headers >> Accept: */*
2008-06-18 15:10:59,780 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG headers >> Content-Type:application/xml
2008-06-18 15:10:59,797 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG TransportUtils createSOAPEnvelope using Builder (class org.apache.axis2.builder.ApplicationXMLBuilder) selected from type (application/xml)
2008-06-18 15:10:59,841 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SynapseMessageReceiver Synapse received a new message for message mediation...
2008-06-18 15:10:59,841 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SynapseMessageReceiver Received To: null
2008-06-18 15:10:59,841 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SynapseMessageReceiver SOAPAction: null
2008-06-18 15:10:59,843 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SynapseMessageReceiver WSA-Action: null
2008-06-18 15:10:59,845 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG Axis2SynapseEnvironment Injecting MessageContext
2008-06-18 15:10:59,845 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG Axis2SynapseEnvironment Using Main Sequence for injected message
2008-06-18 15:10:59,845 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SequenceMediator Start : Sequence <main>
2008-06-18 15:10:59,845 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SequenceMediator Sequence <SequenceMediator> :: mediate()
2008-06-18 15:10:59,846 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG LogMediator Start : Log mediator
2008-06-18 15:10:59,846 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] INFO LogMediator To: , MessageID: urn:uuid:27AA6F4DF24676A7641213794659778, Direction: request
2008-06-18 15:10:59,846 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG LogMediator End : Log mediator
2008-06-18 15:10:59,846 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG DropMediator Start : Drop mediator
2008-06-18 15:10:59,852 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG DropMediator End : Drop mediator
2008-06-18 15:10:59,852 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG SequenceMediator End : Sequence <main>
2008-06-18 15:10:59,853 [10.0.0.12-equilibrium.local] [HttpServerWorker-1] DEBUG ServerWorker Sending 202 Accepted response for MessageID : urn:uuid:27AA6F4DF24676A7641213794659778 response written : null response will follow : true acked : false forced ack : false
2008-06-18 15:10:59,862 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG ServerHandler HTTP connection [localhost/0:0:0:0:0:0:0:1%0:56697]: Output ready
2008-06-18 15:10:59,863 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG ServerHandler HTTP connection [localhost/0:0:0:0:0:0:0:1%0:56697]: Content encoder [chunk-coded; completed: true]
2008-06-18 15:10:59,864 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG ServerHandler HTTP connection [localhost/0:0:0:0:0:0:0:1%0:56697]: Response ready
2008-06-18 15:10:59,865 [10.0.0.12-equilibrium.local] [I/O dispatcher 5] DEBUG ServerHandler HTTP connection [closed]: Closed

An invalid string "system" shows up in the "Login Timeout" field in Derby Embedded or Derby Embedded XA database pool configuration page
1. In the database pools portlet, create a database pool with the type "Derby embedded XA" or "Derby embedded".
2. In the resulting configuration page, the "Login Timeout" field will have a default value "system". This property is claimed to be ignored by Derby.
3. Select JAR, specify database name, confirm password, and deploy.
An error occurs: 2009-03-02 15:46:11,156 ERROR [Deployer] Deployment failed due to org.apache.geronimo.common.propertyeditor.PropertyEditorException: For input string: "system" at org.apache.geronimo.common.propertyeditor.IntegerEditor.getValue(IntegerEditor.java:34) at org.apache.geronimo.connector.deployment.ConnectorModuleBuilder.getValue(ConnectorModuleBuilder.java:817) at org.apache.geronimo.connector.deployment.ConnectorModuleBuilder.setDynamicGBeanDataAttributes(ConnectorModuleBuilder.java:782) at org.apache.geronimo.connector.deployment.ConnectorModuleBuilder.addOutboundGBeans(ConnectorModuleBuilder.java:929) at org.apache.geronimo.connector.deployment.ConnectorModuleBuilder.addConnectorGBeans(ConnectorModuleBuilder.java:597) at org.apache.geronimo.connector.deployment.ConnectorModuleBuilder.initContext(ConnectorModuleBuilder.java:524) at org.apache.geronimo.j2ee.deployment.EARConfigBuilder.buildConfiguration(EARConfigBuilder.java:595) at org.apache.geronimo.deployment.Deployer.deploy(Deployer.java:255) at org.apache.geronimo.deployment.Deployer.deploy(Deployer.java:134) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:59) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:39) at java.lang.reflect.Method.invoke(Method.java:612) at org.apache.geronimo.gbean.runtime.ReflectionMethodInvoker.invoke(ReflectionMethodInvoker.java:34) at org.apache.geronimo.gbean.runtime.GBeanOperation.invoke(GBeanOperation.java:124) at org.apache.geronimo.gbean.runtime.GBeanInstance.invoke(GBeanInstance.java:867) at org.apache.geronimo.kernel.basic.BasicKernel.invoke(BasicKernel.java:239) at org.apache.geronimo.deployment.plugin.local.AbstractDeployCommand.doDeploy(AbstractDeployCommand.java:116) at org.apache.geronimo.deployment.plugin.local.DistributeCommand.run(DistributeCommand.java:61) at java.lang.Thread.run(Thread.java:735) Caused by: java.lang.NumberFormatException: For input string: "system" at java.lang.Throwable.<init>(Throwable.java:67) at java.lang.NumberFormatException.forInputString(NumberFormatException.java:61) at java.lang.Integer.parseInt(Integer.java:460) at java.lang.Integer.valueOf(Integer.java:566) at org.apache.geronimo.common.propertyeditor.IntegerEditor.getValue(IntegerEditor.java:31) ... 19 more 2009-03-02 15:46:11,265 INFO [DatabasePoolPortlet] Deployment Failed! HADOOP-4638 has broken 0.19 compilation UtilsForTest is missing in 0.19 and {{TestRecoveryManager}} uses it. LB endpoints can retry forever. If a message to a SALoadBalanceEndpoint encounters an error, the SALoadBalanceEndpoint.onChildEndpointFail method will try to resend the message so long as another endpoint is active. If, however, the suspendDurationOnFailure for an endpoint is sufficiently short, and all endpoints in the group are failing (say, because the destination endpoints are down), then Synapse will retry the message forever. WriteFuture.isWritten() never returns true even when data is actually sent on the serial port using serial transport The serial transport never sets the WriteFuture.isWritten() to true, even when the data has been written on the serial port. The WriteFuture.awaitUninterruptibly() without any timeout never returns and if WriteFuture.awaitUninterruptibly() is used with a timeout, then it returns but specifies the WriteFuture.isWritten() as false. 
The following code is the basic usage of serial transport:
-----------------------------------------------------------------------
SerialAddress a = new SerialAddress("COM1", 115200, SerialAddress.DataBits.DATABITS_8, SerialAddress.StopBits.BITS_1, SerialAddress.Parity.NONE, SerialAddress.FlowControl.NONE);
IoConnector c = new SerialConnector();
c.setHandler(this);
ConnectFuture cf = c.connect(a);
cf.awaitUninterruptibly();
System.out.println("Connection = " + cf.isConnected());
if (cf.isConnected()) {
    IoSession s = cf.getSession();
    IoBuffer b = IoBuffer.allocate(32);
    b.put(new String("this is a test message").getBytes());
    b.flip();
    WriteFuture wf = s.write(b);
    wf.awaitUninterruptibly(5, TimeUnit.SECONDS);
    System.out.println("Message Written = " + wf.isWritten());
}
-----------------------------------------------------------------------
Using a cross serial cable, the serial data does reach the other end, but the WriteFuture does not say so. I think the problem may be in the file SerialSessionImpl.java, after line 185. After the buffer data has been written to the serial port's output stream and the buffer position has been adjusted, the WriteFuture in the write request is not notified. If I add the line:
-----------------------------------------------------------------------
req.getFuture().setWritten();
-----------------------------------------------------------------------
right after line 185, it starts to work for all my examples. Thanks, Akbar.

More than required data sent on serial port through serial transport
The serial transport sends more than the required data when IoSession.write() is called with the IoBuffer. The following code is the basic usage of serial transport:
-----------------------------------------------------------------------
SerialAddress a = new SerialAddress("COM1", 115200, SerialAddress.DataBits.DATABITS_8, SerialAddress.StopBits.BITS_1, SerialAddress.Parity.NONE, SerialAddress.FlowControl.NONE);
IoConnector c = new SerialConnector();
c.setHandler(this);
ConnectFuture cf = c.connect(a);
cf.awaitUninterruptibly();
System.out.println("Connection = " + cf.isConnected());
if (cf.isConnected()) {
    IoSession s = cf.getSession();
    IoBuffer b = IoBuffer.allocate(32);
    b.put(new String("this is a test message").getBytes());
    b.flip();
    WriteFuture wf = s.write(b);
    wf.awaitUninterruptibly(5, TimeUnit.SECONDS);
    System.out.println("Message Written = " + wf.isWritten());
}
-----------------------------------------------------------------------
The message <code>this is a test message</code> should have been sent on the serial port COM1. But the actual output received is (output captured through HDD Free Serial Port Monitor):
-----------------------------------------------------------------------
74 68 69 73 20 69 73 20 61 20 74 65 73 74 20 6D this is a test m
65 73 73 61 67 65 00 00 00 00 00 00 00 00 00 00 essage..........
-----------------------------------------------------------------------
I have looked into the code, and the reason appears to be the following statement on line 184 in the file SerialSessionImpl.java.
-----------------------------------------------------------------------
outputStream.write(buf.array());
-----------------------------------------------------------------------
Since buf.array() returns the complete array in the IoBuffer, regardless of the actual count of valid data, all bytes are sent.
I changed this statement to:
-----------------------------------------------------------------------
outputStream.write(buf.array(), buf.position(), writtenBytes);
-----------------------------------------------------------------------
to ensure that only the required bytes, starting from the first unread position, are sent on the serial port. This works so far for all my cases. Thanks, Akbar.

UNION ALL should create different destination directories for different operands
The following query hangs:
{code}
select * from (select 1 from zshao_lazy union all select 2 from zshao_lazy) a;
{code}
The following query produces wrong results (one map-reduce job overwrites/cannot overwrite the result of the other):
{code}
select * from (select 1 as id from zshao_lazy cluster by id union all select 2 as id from zshao_meta) a;
{code}
The reason for both is that the destination directories of the file sink operators conflict with each other.

Blob.getBinaryStream(long,long) is off by one for the pos+len check
If you have a BLOB of length 20 and call blob.getBinaryStream(11,10), it will give you an error: java.sql.SQLException: Sum of position('11') and length('10') is greater than the size of the LOB. This follows word for word an error in the JDBC Javadoc: SQLException - if pos is less than 1 or if pos is greater than the number of bytes in the Blob or if pos + length is greater than the number of bytes in the Blob. So it's checking 11 + 10 > 20, but it should check 11 + 10 > 21 (pos + len > blob.length() + 1) to allow reading the last byte. The Javadoc for Clob.getCharacterStream(long,long) has similar wording, so it may have the same issue. Likewise, the client driver may have the same issue -- I haven't yet checked.

Mapping references with constructor-arg type attribute
As of today, references/properties for constructor-arg elements are supported only when the SCA references/properties are explicitly specified using <sca:reference/> OR <sca:property/> tags. We need support for mapping references and properties of a constructor-arg element in the absence of explicit <sca:reference/> OR <sca:property/> tags, with the help of the type attribute declared in the constructor-arg element, as shown below...

<constructor-arg type="bigbank.account.savings.SavingsAccountService"><ref bean="savingsAccountService"/></constructor-arg>
<constructor-arg type="bigbank.account.stock.StockAccountService"><ref bean="stockAccountService"/></constructor-arg>

Introspecting Private Fields for Spring Pojo Beans
Currently the HeuristicPojoProcessor is used to heuristically evaluate an un-annotated Java implementation type to determine services, references, and properties according to the algorithm described in the SCA Java Client and Implementation Model Specification. Today, the HeuristicPojoProcessor is also basically used to introspect a Spring Bean class to determine services, references and properties, as all Spring Beans are Java classes anyway. The Spring Framework allows references and properties to be set as private fields (with public/protected setter and getter methods), whereas SCA only allows references and properties to be set as public or protected fields (in the unannotated case). So we need to create a SpringBeanPojoProcessor which can return the references and properties even if they are declared as private fields (with public/protected setter and getter methods).
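To make the difference concrete, here is a minimal hypothetical bean (the names echo the bigbank sample above, but the interface is stubbed just to keep the sketch self-contained) whose reference is a private field exposed only through public accessors -- visible via the setter/getter pair, but invisible to a heuristic that only introspects public or protected fields:

{code}
// Hypothetical example -- illustrative only, not code from the report.
interface StockAccountService {}

public class SavingsAccountImpl {

    // A private field: normal for a Spring bean, but skipped by an SCA
    // heuristic that only considers public or protected fields.
    private StockAccountService stockAccount;

    // The public setter/getter pair is what a SpringBeanPojoProcessor
    // would have to introspect to discover this reference.
    public void setStockAccount(StockAccountService stockAccount) {
        this.stockAccount = stockAccount;
    }

    public StockAccountService getStockAccount() {
        return stockAccount;
    }
}
{code}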
Spring SCA Reference does not get resolved in multiple context scenario
SCA references in a Spring application context do not get resolved in a multiple-context scenario, e.g. when using

<bean id="beanRefFactory" class="org.springframework.context.support.ClassPathXmlApplicationContext">
  <constructor-arg>
    <list>
      <value>META-INF/sca/context-multiple/CalculatorService-context.xml</value>
    </list>
  </constructor-arg>
</bean>

The references inside CalculatorService-context.xml do not get resolved here.

IndexWriter.addIndexes(IndexReader[] readers) doesn't correctly handle exception success flag.
After this bit of code in addIndexes(IndexReader[] readers):

try {
    flush(true, false, true);
    optimize(); // start with zero or 1 seg
    success = true;
} finally {
    // Take care to release the write lock if we hit an
    // exception before starting the transaction
    if (!success)
        releaseWrite();
}

The success flag should be reset to "false" because it's used again in another try/catch/finally block. TestIndexWriter.testAddIndexOnDiskFull() sometimes hits this bug, but only infrequently.

configure.ac does not work correctly on Solaris with SunStudio Compiler
The configure.ac tries to set a flag to use stlport4 when using the SunStudio compiler. Unfortunately, neither is the "+=" construct supported by the shell, nor is the correct variable used. The following diff shows the faulty and the correct version:

$ diff configure.ac~ configure.ac
148,149c148
< PLAT_CXXFLAGS="-mt -w -O5"
< PLAT_LIBS+="-library=stlport4"
---
> PLAT_CXXFLAGS="-mt -w -O5 -library=stlport4"

[web] Producer compounding should work on the new writer-based method when dealing with HTML producers; otherwise internal producer composition will be missed

Connection fails to close if a producer or consumer has not been disposed (only when using the failover transport).
When using the failover transport such as:
<defaultURI value="activemq:failover:(tcp://activemqhost:61616,tcp://activemqhost:61616)"/>
A connection will fail to close if you dispose of a connection before disposing of a consumer or producer that is associated with the connection. The dispose call never returns because the failover transport is continually reconnecting due to a KeyNotFoundException. The KeyNotFoundException is thrown because a session no longer exists in the connection state. A number of the existing unit tests fail (actually they never return) when using the failover transport. When in debug, a DebugAssert is displayed for each error that causes the reconnect.

source jar file of DIH contains no java files
When making the source distribution files using `ant dist-src`, the apache-solr-dataimporthandler-src-${version}.jar which is created by Ant contains no *.java files.

The CRUD showcase example misses localized form data for Double value
The CRUD example contains a salary field in the employee edit form, for which output is not localized. In localization environments having "," as the decimal separator, this will cause a multiplication by 10 for each subsequent save request. This has to be corrected as described in http://cwiki.apache.org/confluence/display/WW/Formatting+Dates+and+Numbers

I18nInterceptor dismisses browser provided locale (see XW-679)
Copied from http://jira.opensymphony.com/browse/XW-679: Currently, the I18nInterceptor first looks in the request parameters to see if locale information is provided. If not, it looks up the session. Regardless of whether the session actually contains a locale under the lookup key, the lookup result is passed to saveLocale, which causes the locale to be set on the ActionContext. This will erase any previous setting of the ActionContext's locale. Unfortunately, this will always erase the browser locale found by the request dispatcher, which, if provided, will be set on the ActionContext before the I18nInterceptor comes into play.
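A minimal Java sketch of the guard this implies (illustrative only -- invented names, not the actual XWork interceptor code): only overwrite the locale when a value was actually found, otherwise keep the browser-provided one.

{code}
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch of the fix direction described above.
public final class LocaleGuard {
    public static Locale resolve(Map<String, Object> params, Map<String, Object> session,
                                 String key, Locale browserLocale) {
        Object requested = params.get(key);
        if (requested == null) {
            requested = session.get(key);
        }
        if (requested instanceof Locale) {
            return (Locale) requested; // an explicit choice wins
        }
        // nothing found: keep the browser-provided locale instead of erasing it
        return browserLocale;
    }
}
{code}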
SolrWriter.getResourceAsString IndexOutOfBoundsException
An exception is thrown when the size of data-config.xml is a multiple of 1024 bytes. The code should check for sz == -1 when in.read(buf) reaches EOF.
{noformat}
#### ORIGINAL CODE ####
static String getResourceAsString(InputStream in) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
    byte[] buf = new byte[1024];
    int sz = 0;
    try {
        while (true) {
            sz = in.read(buf);
            baos.write(buf, 0, sz);
            if (sz < buf.length) break;
        }
    } finally {
        try {
            in.close();
        } catch (Exception e) {
        }
    }
    return new String(baos.toByteArray());
}
{noformat}

healthcheck.pm - remove hardcoded dns servers
healthcheck.pm - remove hard coded dns servers from healthcheck's _valid_host routine

camel-file : java.lang.StringIndexOutOfBoundsException: String index out of range: 1
The following spring DSL route generates an error:

<camelContext trace="true" xmlns="http://camel.apache.org/schema/osgi">
  <camel:package>org.apache.camel.example.reportincident.routing</camel:package>
  <!-- File route -->
  <camel:route>
    <camel:from uri="file://d:/temp/data/?move=d:/temp/done/${file:name}" />
    <camel:unmarshal ref="bindyDataformat" />
    <camel:to uri="bean:csv" />
    <camel:to uri="activemq:queue:in" />
  </camel:route>
  <camel:route>
    <camel:from uri="activemq:queue:in" />
    <camel:from uri="file://d:/temp/data/queue" />
  </camel:route>

2009-03-09 14:23:09,968 WARN ScheduledPollConsumer - An exception occurred while polling: Endpoint[file://d:/temp/data/?move=d:/temp/done/${file:name}]: String index out of range: 1
java.lang.StringIndexOutOfBoundsException: String index out of range: 1
at java.lang.String.charAt(String.java:687)
at java.util.regex.Matcher.appendReplacement(Matcher.java:703)
at java.util.regex.Matcher.replaceAll(Matcher.java:813)
at java.lang.String.replaceAll(String.java:2190)
at org.apache.camel.component.file.GenericFile.normalizePathToProtocol(GenericFile.java:238)
at org.apache.camel.component.file.GenericFile.setEndpointPath(GenericFile.java:223)
at org.apache.camel.component.file.FileConsumer.asGenericFile(FileConsumer.java:103)
at org.apache.camel.component.file.FileConsumer.pollDirectory(FileConsumer.java:56)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:66)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:66)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
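For background on the JDK behavior this trace runs into (shown as a standalone sketch, not Camel code): String.replaceAll treats '\' and '$' in the replacement string specially, so a replacement ending in a lone backslash (as a Windows path can) produces exactly this "String index out of range: 1" inside Matcher.appendReplacement, and Matcher.quoteReplacement is the standard way to take the replacement literally:

{code}
import java.util.regex.Matcher;

public class ReplaceAllDemo {
    public static void main(String[] args) {
        String replacement = "\\"; // a single backslash, e.g. from a Windows separator
        try {
            // '\' is an escape in the replacement; with nothing after it,
            // appendReplacement reads past the end of the string.
            "a".replaceAll("a", replacement);
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println(e); // "String index out of range: 1" (message may vary by JDK)
        }
        // Safe: quote the replacement so '\' and '$' are taken literally.
        System.out.println("a".replaceAll("a", Matcher.quoteReplacement(replacement)));
    }
}
{code}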
SftpConsumer : GenericFileRenameProcessStrategy - String index out of bounds
While trying to consume a file via SFTP and using the moveExpression to move the file into a done folder, a renameException is thrown. In GenericFile the relativeFileName reads something like the following: /incoming/test/file1.txt
When it tries to call this code, an exception is thrown because File.separator is '\' in a Windows environment.
String relative = relativeFileName.substring(0, relativeFileName.lastIndexOf(File.separator));

Replication doesn't work correctly from late joining cluster nodes
If cluster nodes are added after replication queues have been created and when they may have messages on them, replication from those new nodes does not behave correctly.

[smartform] Form is not binding anymore using the new on request binder

chukwa agent controller remove file does not work
When calling chukwaClient.remove from chukwa-hadoop-*-client.jar, the remove command does not remove all references to the adaptor from the chukwa agent.

Do not use spring proxies for tracking endpoints and other NMR lists
Using spring proxies makes tracking endpoints in particular quite tedious and error prone. It often leads to endpoints not being unregistered properly.

UIMA AS Service Not Processing Stop Request
A remote UIMA AS service is not processing STOP requests from a client. These requests are sent by a client to a remote CAS Multiplier to abort generation of child CASes from a given input CAS. This used to work, but I think got broken when we added selectors. We use two selectors on the input queue:
<property name="messageSelector" value="Command=2000 OR Command=2002"/>
and
<property name="messageSelector" value="Command=2001"/>
The first selector accepts Process and CPC requests, which are processed by one listener, and the second selector is for GetMeta requests that are processed by a separate listener (thread). We need to process STOP requests in the GetMeta listener. dd2Spring needs to change to support the additional request type. Use the following selector on the GetMeta listener:
<property name="messageSelector" value="Command=2001 OR Command=2006"/>

Resolver does not clean up properly on a failed recursive attempt to resolve
When the resolver is calculating the set of potential candidates for a module being resolved, it uses a map to store the potential candidates associated with each module that needs to be resolved. It also uses this map to detect cycles. In the case where there are no potential candidates to resolve a dependency, the attempt to populate candidates for the given module fails. However, the failed module is not removed from the candidates map. The result is that in certain situations, the resolver may end up wiring to the failed module since it still has candidates in the candidate map. This typically can only happen if there are multiple dependencies on the failed module, with at least one of them being optional.

key attribute is not working properly
We are moving some applications from using Struts 2.0.11 to Struts 2.1.6. In our applications using Struts 2.0.11 we used the label tag's key attribute to specify the name, value, label on some of our jsp view pages.
So for example:
<s:label key="personBean.firstName" />
would render in the jsp page as:
<tr>
  <td class="tdLabel"><label for="personBean_firstName" class="label">Your first name:</label></td>
  <td><label id="personBean_firstName">Bruce</label></td>
</tr>
The global-message.properties file has: personBean.firstName=Your first name and the personBean object exists on the value stack and has a public getFirstName() method. After changing to Struts 2.1.6, using the label tag with the key attribute no longer works as it did when we used Struts 2.0.11. Now the jsp renders:
<tr>
  <td class="tdLabel"><label for="personBean_firstName" class="label">Your first name:</label></td>
  <td><label id="personBean_firstName">Your first name</label></td>
</tr>
Instead of showing the value of personBean.firstName, it shows the value from the global-message.properties file (Your first name). If I use <s:property value="personBean.firstName" /> the personBean.firstName value (Bruce) does display correctly on the jsp page.

DOMParserImpl repeatedly overwrites text node child of an element (rather than appending) when there are multiple text nodes in input
If there are multiple text node children on an element (which, in the case given below, results from filtering elements during parsing) then the element in the DOM on output contains only the final text node. Demonstration code:
//////////////////////////////////
package test;

import junit.framework.TestCase;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.ls.LSParserFilter;
import org.w3c.dom.traversal.NodeFilter;
import com.sun.org.apache.xerces.internal.parsers.DOMParserImpl;

public class TestBug extends TestCase {
    private static final String EXAMPLE_NS = "http://www.example.com";

    public void testFilteringSiblingTextNodes() throws Exception {
        final DOMParserImpl parser = new DOMParserImpl(
            "com.sun.org.apache.xerces.internal.parsers.XIncludeAwareParserConfiguration",
            "http://www.w3.org/2001/XMLSchema");
        parser.setFilter(new LSParserFilter() {
            public short acceptNode(final Node nodeArg) {
                return LSParserFilter.FILTER_ACCEPT;
            }
            public int getWhatToShow() {
                return NodeFilter.SHOW_ALL;
            }
            public short startElement(final Element elementArg) {
                if (EXAMPLE_NS.equals(elementArg.getNamespaceURI())) {
                    return LSParserFilter.FILTER_ACCEPT;
                } else {
                    return LSParserFilter.FILTER_SKIP;
                }
            }
        });
        final Document document = parser.parseURI(getClass().getResource("input.xml").toString());
        assertEquals("List:1)Item 1,2)Item 2.", document.getElementsByTagNameNS(EXAMPLE_NS, "foo").item(0).getTextContent());
    }
}
/////////////////////////////////
resource "input.xml":
<html xmlns='http://www.w3.org/1999/xhtml' xmlns:ex='http://www.example.com'>
<ex:foo>List:<br />1)Item 1,<br />2)Item 2.</ex:foo>
</html>
/////////////////////////////////////////

Unable to search on a custom attribute of type directory string unless I've set a matching rule
I have created a custom auxiliary objectclass with one custom attribute. That attribute is of type directory string, and is used as an id. Here are the schema objects.
I have replaced my OID with x.x.x.x dn: m-oid=x.x.x.x.1.16, ou=objectClasses, cn=myschema, ou=schema objectclass: metaObjectClass objectclass: metaTop objectclass: top m-oid: x.x.x.x.1.16 m-name: myObject m-typeObjectClass: AUXILIARY m-must: myId dn: m-oid=x.x.x.x.1.15, ou=attributeTypes, cn=myschema, ou=schema objectclass: metaAttributeType objectclass: metaTop objectclass: top m-oid: x.x.x.x.1.15 m-name: myId m-syntax: 1.3.6.1.4.1.1466.115.121.1.15 m-singleValue: TRUE I have populated my directory with several objects of this custom objectclass type, which contain the custom id attribute. When I attempt to search the DIT for an object with a specific value for myId, I get the following error... ERROR httpSSLWorkerThread-8081-0 localhost - NamingException: [LDAP: error code 80 - OTHER: failed for SearchRequest baseDn : '2.5.4.11=myorg,0.9.2342.19200300.100.1.25=mydomain,0.9.2342.19200300.100.1.25=com' filter : '(x.x.x.x.1.15=j9cinXz40mEpCI6cWxZJ70ETV:[9223372036854775807])' scope : whole subtree typesOnly : false Size Limit : no limit Time Limit : no limit Deref Aliases : deref Always attributes : : java.lang.String cannot be cast to [B] If I modify the custom attribute to have a matching rule of "m-equality: caseExactMatch" then the error seems to go away. Avoid deadlock when shutting down a SA while receiving a sync exchange for it When a sync MessageExchange has been sent by a component and the platform starts shutting down, you can run into a deadlock: This thread is waiting for all sync exchanges to finish and is holding on to the ServiceAssemblyImpl (synchronized method) {noformat} "Timer-3" daemon prio=1 tid=0x00007fc210211910 nid=0x5706 in Object.wait() [0x0000000041778000..0x0000000041779d00] at java.lang.Object.wait(Native Method) - waiting on <0x00007fc2235490d0> (a java.lang.Object) at java.lang.Object.wait(Object.java:474) at org.springframework.jms.listener.DefaultMessageListenerContainer.doShutdown(DefaultMessageListenerContainer.java:489) - locked <0x00007fc2235490d0> (a java.lang.Object) at org.springframework.jms.listener.AbstractJmsListeningContainer.shutdown(AbstractJmsListeningContainer.java:211) at org.apache.servicemix.jms.endpoints.JmsConsumerEndpoint.deactivate(JmsConsumerEndpoint.java:523) - locked <0x00007fc223548820> (a org.apache.servicemix.jms.endpoints.JmsConsumerEndpoint) at org.apache.servicemix.common.DefaultServiceUnit.shutDown(DefaultServiceUnit.java:126) - locked <0x00007fc2235489a8> (a org.apache.servicemix.common.xbean.XBeanServiceUnit) at org.apache.servicemix.common.xbean.XBeanServiceUnit.shutDown(XBeanServiceUnit.java:42) at org.apache.servicemix.common.BaseServiceUnitManager.shutDown(BaseServiceUnitManager.java:221) - locked <0x00007fc222b4c2d8> (a org.apache.servicemix.common.BaseServiceUnitManager) at org.apache.servicemix.jbi.deployer.artifacts.ServiceUnitImpl.shutdown(ServiceUnitImpl.java:145) at org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl.changeState(ServiceAssemblyImpl.java:282) at org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl.transition(ServiceAssemblyImpl.java:252) at org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl.shutDown(ServiceAssemblyImpl.java:220) - locked <0x00007fc2233fd348> (a org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl) at org.apache.servicemix.jbi.deployer.impl.Deployer.lifeCycleChanged(Deployer.java:576) at org.apache.servicemix.jbi.deployer.artifacts.AbstractLifecycleJbiArtifact.fireEvent(AbstractLifecycleJbiArtifact.java:102) at 
org.apache.servicemix.jbi.deployer.artifacts.ComponentImpl.shutDown(ComponentImpl.java:174) at org.apache.servicemix.jbi.deployer.impl.Deployer.unregisterComponent(Deployer.java:454) at org.apache.servicemix.jbi.deployer.impl.Deployer.unregisterDeployedComponent(Deployer.java:646) at org.apache.servicemix.jbi.deployer.impl.Deployer$1.removedService(Deployer.java:237) at org.osgi.util.tracker.ServiceTracker$Tracked.untrack(ServiceTracker.java:1126) at org.osgi.util.tracker.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:957) at org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:820) at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:678) at org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:609) at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:3379) at org.apache.felix.framework.Felix.access$000(Felix.java:39) at org.apache.felix.framework.Felix$1.serviceChanged(Felix.java:620) at org.apache.felix.framework.ServiceRegistry.fireServiceChanged(ServiceRegistry.java:571) at org.apache.felix.framework.ServiceRegistry.unregisterService(ServiceRegistry.java:105) at org.apache.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:120) at org.springframework.osgi.service.exporter.support.internal.support.ServiceRegistrationDecorator.unregister(ServiceRegistrationDecorator.java:65) at org.springframework.osgi.util.OsgiServiceUtils.unregisterService(OsgiServiceUtils.java:41) at org.springframework.osgi.service.exporter.support.OsgiServiceFactoryBean.unregisterService(OsgiServiceFactoryBean.java:370) at org.springframework.osgi.service.exporter.support.OsgiServiceFactoryBean.unregisterService(OsgiServiceFactoryBean.java:360) at org.springframework.osgi.service.exporter.support.AbstractOsgiServiceExporter.destroy(AbstractOsgiServiceExporter.java:84) at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:151) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:487) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:462) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:430) - locked <0x00007fc222b74f78> (a java.util.LinkedHashMap) at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:853) at org.springframework.osgi.context.support.AbstractOsgiBundleApplicationContext.destroyBeans(AbstractOsgiBundleApplicationContext.java:213) at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:831) at org.springframework.osgi.context.support.AbstractOsgiBundleApplicationContext.doClose(AbstractOsgiBundleApplicationContext.java:206) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$501(AbstractDelegatedExecutionApplicationContext.java:68) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$2.run(AbstractDelegatedExecutionApplicationContext.java:215) at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85) at 
org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.normalClose(AbstractDelegatedExecutionApplicationContext.java:211) at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor.close(DependencyWaiterApplicationContextExecutor.java:345) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.doClose(AbstractDelegatedExecutionApplicationContext.java:226) at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:794) - locked <0x00007fc222b65af8> (a java.lang.Object) at org.springframework.osgi.extender.internal.activator.ContextLoaderListener$1.run(ContextLoaderListener.java:552) at org.springframework.osgi.extender.internal.util.concurrent.RunnableTimedExecution$MonitoredRunnable.run(RunnableTimedExecution.java:60) at org.springframework.scheduling.timer.DelegatingTimerTask.run(DelegatingTimerTask.java:66) at java.util.TimerThread.mainLoop(Timer.java:512) at java.util.TimerThread.run(Timer.java:462) {noformat} ... while the AssemblyReferencesListener is blocking completion of the same MessageExchange while the ServiceAssemblyImpl lock can not be acquired {noformat} "DefaultMessageListenerContainer-8" prio=1 tid=0x00007f9044bfbb90 nid=0x6082 waiting for monitor entry [0x00000000458c1000..0x00000000458c2e00] at org.apache.servicemix.jbi.deployer.artifacts.AssemblyReferencesListener.unreference(AssemblyReferencesListener.java:164) - waiting to lock <0x00007f9057e05530> (a org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl) at org.apache.servicemix.jbi.deployer.artifacts.AssemblyReferencesListener.unreference(AssemblyReferencesListener.java:145) at org.apache.servicemix.jbi.deployer.artifacts.AssemblyReferencesListener.exchangeFailed(AssemblyReferencesListener.java:118) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.springframework.osgi.service.importer.support.internal.aop.ServiceInvoker.doInvoke(ServiceInvoker.java:64) at org.springframework.osgi.service.importer.support.internal.aop.ServiceInvoker.invoke(ServiceInvoker.java:78) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131) at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.osgi.service.util.internal.aop.ServiceTCCLInterceptor.invokeUnprivileged(ServiceTCCLInterceptor.java:57) at org.springframework.osgi.service.util.internal.aop.ServiceTCCLInterceptor.invoke(ServiceTCCLInterceptor.java:40) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.osgi.service.importer.support.LocalBundleContextAdvice.invoke(LocalBundleContextAdvice.java:59) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131) at 
org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy24.exchangeFailed(Unknown Source) at org.apache.servicemix.nmr.core.ChannelImpl.sendSync(ChannelImpl.java:144) at org.apache.servicemix.nmr.core.ChannelImpl.sendSync(ChannelImpl.java:117) at org.apache.servicemix.jbi.runtime.impl.DeliveryChannelImpl.sendSync(DeliveryChannelImpl.java:187) at org.apache.servicemix.common.EndpointDeliveryChannel.sendSync(EndpointDeliveryChannel.java:115) at org.apache.servicemix.common.endpoints.SimpleEndpoint.sendSync(SimpleEndpoint.java:74) at org.apache.servicemix.jms.endpoints.AbstractConsumerEndpoint.onMessage(AbstractConsumerEndpoint.java:548) at org.apache.servicemix.jms.endpoints.JmsConsumerEndpoint$1.onMessage(JmsConsumerEndpoint.java:505) at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:518) at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:479) at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:451) at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:323) at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:241) at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:982) at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:974) at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:876) at java.lang.Thread.run(Thread.java:595) {noformat} Exception thrown in ServiceTracker at shutdown ERROR: EventDispatcher: Error during dispatch. (java.lang.IllegalStateException: Invalid BundleContext.) java.lang.IllegalStateException: Invalid BundleContext. 
at org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:393) at org.apache.felix.framework.BundleContextImpl.ungetService(BundleContextImpl.java:362) at org.osgi.util.tracker.ServiceTracker.removedService(ServiceTracker.java:429) at org.osgi.util.tracker.ServiceTracker$Tracked.untrack(ServiceTracker.java:1126) at org.osgi.util.tracker.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:957) at org.apache.felix.framework.util.EventDispatcher$4.run(EventDispatcher.java:812) at java.security.AccessController.doPrivileged(Native Method) at org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:809) at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:678) at org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:609) at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:3379) at org.apache.felix.framework.Felix.access$000(Felix.java:39) at org.apache.felix.framework.Felix$1.serviceChanged(Felix.java:620) at org.apache.felix.framework.ServiceRegistry.fireServiceChanged(ServiceRegistry.java:571) at org.apache.felix.framework.ServiceRegistry.unregisterService(ServiceRegistry.java:105) at org.apache.felix.framework.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:120) at org.apache.felix.framework.ServiceRegistry.unregisterServices(ServiceRegistry.java:146) at org.apache.felix.framework.Felix.stopBundle(Felix.java:1815) at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:999) at org.apache.felix.framework.StartLevelImpl.run(StartLevelImpl.java:263) at java.lang.Thread.run(Thread.java:613)
TestGlobalFilter.testServletFilter fails
{noformat}
junit.framework.AssertionFailedError: expected:<14> but was:<15>
at org.apache.hadoop.http.TestGlobalFilter.testServletFilter(TestGlobalFilter.java:150)
{noformat}
For more details, see http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/666/ .
Use of gethostbyname() in Socket.cpp isn't thread safe Using gethostbyname() like this in multiple threads at once (for instance connecting simultaneously) may cause client/broker crashes. The best fix is to replace the old API with the re-entrant and more featureful getaddrinfo() API.
Unit test fails on trunk org.apache.hadoop.http.TestServletFilter.testServletFilter From: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/760/ Regression org.apache.hadoop.http.TestServletFilter.testServletFilter Failing for the past 1 build (Since #760 ) Took 1 min 10 sec. Error Message url[4]=/static/hadoop-logo.jpg expected:<8> but was:<9>
Pig should display a better error message when backend error messages cannot be parsed If the backend error message cannot be parsed correctly, Pig displays an error message indicating that there was an internal error. In addition, the original error message is lost. Pig should display a better error message and display the relevant part of the backend error message.
bbp example cannot be run. FileAlreadyExistsException: Output directory already exists.
Add "getComposeStack" method to ServletUtil A "getComposeStack" method should be added to the ServletUtil class: it is useful for servlet-based template engines, like Velocity Tools.
In default configuration not possible to connect via LDAP and LDAPS from computers other than the one the server has been started from If I build ApacheDS from the trunk and create noarch installers, I am able to deploy it and start the server with default server.xml.
But it is not possible to connect from computers other than the one the server is started from (localhost). Emmanuel assumes the following (quoted from the dev mailing list): "I have to test it, it seems that MINA 2.0 behaves differently than MINA 1.0. By default, when setting the Acceptor without any parameter, it uses the localHost. Not what you want, probably. You can change the address in the TcpTransport configuration for the LdapService : <tcpTransport> <tcpTransport address="<your server address>" port="10389" nbThreads="8" backLog="50"/> </tcpTransport> Can you give it a try ?" This workaround works fine. It would be nice if server.xml did not have to be adjusted (as was the case before).
Can't start App Client after stopping it from console 1. Deploy an application client and stop its server module; after that there is just an "uninstall" button left, and the client can't be re-started.
When using wsdl2java with the -db xmlbeans flag, the generated build.xml doesn't work OOTB When using wsdl2java with the -db xmlbeans flag, the generated build.xml is missing a classpath entry for xmlbeans schema files. To reproduce, in a clean directory run: >wsdl2java -db xmlbeans -all <wsdl> I had to add the following to my build.xml to pick up the generated schema files: <property name="home.dir" location ="."/> .... <path id="cxf.classpath"> <pathelement location="${build.classes.dir}"/> <pathelement location="${cxf-manifest.jar.file}"/> <pathelement location="${home.dir}" /> </path>
Open up FinderCacheImpl for non-JDBC or JDBC-like implementation of preparing statement/query execution
The comparison used with the Reader to evaluate the end of text in the class ClobTransformer must be '>' instead of '!=' The comparison in the class ClobTransformer, method 'private String readFromClob(Clob clob)' (I think), must be changed to '>' because the API of java.io.Reader indicates that the reader will return -1 when there is no more data available from the Reader. Original: try { while ((len = reader.read(buf)) != 0) { sb.append(buf, 0, len); } } catch (IOException e) { Must be: try { while ((len = reader.read(buf)) > 0) { sb.append(buf, 0, len); } } catch (IOException e) { NOTE: Sorry for my English, but I don't use it frequently....
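To make the proposed fix concrete, here is a self-contained sketch of the corrected read loop; only the '>' comparison comes from the report above, while the surrounding method structure, buffer size, and cleanup are illustrative assumptions.
{code}
import java.io.IOException;
import java.io.Reader;
import java.sql.Clob;
import java.sql.SQLException;

public class ClobReadSketch {
    // Sketch of the corrected loop: Reader.read(char[]) returns -1 at end of
    // stream, so the loop must test for a positive read count, not '!= 0'.
    static String readFromClob(Clob clob) throws SQLException, IOException {
        StringBuilder sb = new StringBuilder();
        char[] buf = new char[1024];
        Reader reader = clob.getCharacterStream();
        try {
            int len;
            while ((len = reader.read(buf)) > 0) {
                sb.append(buf, 0, len);
            }
        } finally {
            reader.close();
        }
        return sb.toString();
    }
}
{code}
With the original '!= 0' test, the -1 returned at end of stream keeps the loop running and is passed to append(), so terminating on a positive count is the minimal correct change.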
TestHdfsProxy fails on 0.20 TestHdfsProxy fails with the following exception:
{noformat}
09/03/06 18:28:05 ERROR mortbay.log: EXCEPTION java.lang.NullPointerException
at org.mortbay.jetty.security.SslSocketConnector.createFactory(SslSocketConnector.java:215)
at org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:423)
at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:420)
at org.apache.hadoop.hdfsproxy.HdfsProxy.start(HdfsProxy.java:96)
at org.apache.hadoop.hdfsproxy.TestHdfsProxy.testHdfsProxyInterface(TestHdfsProxy.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
{noformat}
Subcollection plugin doesn't work with default subcollections.xml file The Subcollection plugin can't parse its configuration file because the file contains a top-level comment (the ASF notice) and DomUtil doesn't take top-level comments into account.
A full stack trace is displayed when NPE occurs in the ManagementEndpointRegistry when endpoints are not registered
camel-mina - UDP protocol could have an issue if used in same camel context as both client and server sending to localhost A mina bytebuffer could be shared in a mina session. It should not be. See the Nabble thread; this could lead to the problem the end user reports: http://www.nabble.com/Camel-1.6-2.0-MINA-UDP-issue-td22426433s22882.html
JobTracker crashes during recovery if job files are garbled The JobTracker crashed in the recovery stage for a job with a 0 byte job.xml. Ideally one would expect the jobtracker to try and recover as many jobs as possible.
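A hypothetical sketch of that expectation, per-job fault isolation during recovery, is below; recoverJob() and the surrounding class are illustrative names, not actual JobTracker APIs.
{code}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch: recover each job independently so that one garbled
// job file (e.g. a 0-byte job.xml) is skipped instead of aborting recovery.
public class RecoverySketch {
    public void recoverAll(List<String> jobFiles) {
        for (String jobFile : jobFiles) {
            try {
                recoverJob(jobFile); // parses job.xml; throws on corrupt input
            } catch (IOException e) {
                System.err.println("Skipping unrecoverable job " + jobFile + ": " + e);
            }
        }
    }

    // Illustrative stand-in for the real recovery logic.
    private void recoverJob(String jobFile) throws IOException {
        if (jobFile.isEmpty()) {
            throw new IOException("garbled job file");
        }
    }
}
{code}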
Management's 'messages matched' count incorrect for headers exchange As reported on the user list, the messages routed counter for bindings from a headers exchange is non-zero even where no messages have been enqueued:
qpid: show 8386
Object of type org.apache.qpid.broker:binding: (last sample time: 13:55:35)
Type Element 8386
===================================================================================
property exchangeRef 110
property queueRef 4338
property bindingKey
property arguments {u'SPECIES': 'DOG23', u'TYPE': 'ANIMAL', u'x-match': 'all'}
property origin <NULL>
statistic msgMatched 4193496
qpid: show 4338
Object of type org.apache.qpid.broker:queue: (last sample time: 00:20:06)
Type Element 4338
============================================================================================
property vhostRef 103
property name pyclient-feeds-queuec9a401f7-413c-ab48-b955-5ca55bcdd7c6
property durable False
property autoDelete False
property exclusive True
property arguments {}
statistic msgTotalEnqueues 0 messages
statistic msgTotalDequeues 0
statistic msgTxnEnqueues 0
statistic msgTxnDequeues 0
statistic msgPersistEnqueues 0
statistic msgPersistDequeues 0
statistic msgDepth 0
statistic byteDepth 0 octets
statistic byteTotalEnqueues 0
statistic byteTotalDequeues 0
statistic byteTxnEnqueues 0
statistic byteTxnDequeues 0
statistic bytePersistEnqueues 0
statistic bytePersistDequeues 0
statistic consumerCount 0 consumers
statistic consumerCountHigh 0
statistic consumerCountLow 0
statistic bindingCount 2749 bindings
statistic bindingCountHigh 2749
statistic bindingCountLow 2749
statistic unackedMessages 0 messages
statistic unackedMessagesHigh 0
statistic unackedMessagesLow 0
statistic messageLatencySamples 0
statistic messageLatencyMin 0
statistic messageLatencyMax 0
statistic messageLatencyAverage 0
Line 118 of HeadersExchange.cpp shows that the count is incremented regardless of the success of the match test.
Digest auth is broken When trying to connect to the virtualearth webservice using CXF, I found some issues in the cxf-rt-transports-http artifact regarding digest authentication:
1) The "authSupplier" configuration option is missing in org.apache.cxf.transport.http.spring.HttpConduitBeanDefinitionParser#mapSpecificElements, so it's not possible to configure a DigestAuthSupplier via cxf.xml.
2) In org.apache.cxf.transport.http.DigestAuthSupplier the method getPassword returns the username and vice versa.
3) In org.apache.cxf.transport.http.DigestAuthSupplier the 'opaque' field is always sent to the server even if it was NULL, which results in 'opaque="null"'. RFC 2069 says: opaque A string of data, specified by the server, which should be returned by the client unchanged. It is recommended that this string be base64 or hexadecimal data. This field is a "quoted-string" as specified in section 2.2 of the HTTP/1.1 specification [2]. So I think the correct handling is to skip the opaque field when no opaque field was sent by the server.
4) After a while the nonce may become stale, so a new digest has to be created. To achieve that, every request against a digest-authenticated server needs to be cached, and chunking has to be disabled, so that the request can be replayed with a recalculated digest.
5) org.apache.cxf.transport.http.HTTPConduit#setHeadersByAuthorizationPolicy: If an authSupplier is present and an authString was generated, the method should return even when the authString is NULL, instead of creating a basic auth authorization header.
I included patches, which allow me to connect against virtualearth token service. The wsdl can be found here: https://staging.common.virtualearth.net/find-30/common.asmx?WSDL, but you have to be authenticated to get it. FormTag does not have onreset property The FormTag class does not have an onreset property, but Form and the Form freemarker files assume it is there. If you attempt to set the onreset attribute on the s:form tag, you will get a freemarker error. Default operation selection does not implement third rule which is supposed to look for an operation name in the root element of the XML payload Default operation selection does not implement third rule which is supposed to look for an operation name in the root element of the XML payload The Oasis spec adds a rule to default operation selection as follows: 290 Otherwise, if the message is a JMS text or bytes message containing XML, then the selected 291 operation name is taken from the local name of the root element of the XML payload. Problem with DOSGi using many remote services This problem was reported by Erwan Daubert (edaubert@irisa.fr) via email: Hello, I'm trying to understand how can I do distributed OSGi with felix and CXF (DOSGi R4.2). I use the file OSGI-INF/remote-service/remote-services.xml to define my remote services as explain into the Greeter demo. When I define only one remote service, all is good. But when I define many remote services into this file, there is only one service which is defined at runtime. So I would like understand how define many remote services. I'm trying also many files where everyone define one remote service but only one file is used and only one remote service is defined. I don't have any error just when I start felix, only one service is remotely defined. Have you tried this kind of use ? Can you give me a piece of code or a example how to use many remote services ? Thanks in advance Erwan Daubert ----------------------------------------- Hi Erwan, This should definitely work, if not its a bug that needs to be fixed! Before diving into it one question: do all your remote services implement the same interface or do they implement different interfaces? Cheers, David ----------------------------------------- My remote services are the same interface. Thanks for your time Erwan DOSGi is not able to consume multiple instances of a service via Discovery A consumer wants to consume all instances of remote services implementing a certain interface via discovery. In the current implementation it only gets one of them. To reproduce you need 3 VMs running DOSGi + Discovery. One VM consumes all instances of org.acme.FooService. the two other VMs both create an instance of org.acme.FooService and register it with Discovery. Currently the consumer only gets a ServiceTracker callback for one of the two services. Comment nodes are not handled correctly during transformation When using Transformer API to build a SOAP message from DOM with comment nodes can create an invalid DOM document: Input: <root> <a> <!-- this is a test with a comment node --> </a> </root> Final (invalid) doc: <root> <a> <!--<!-- this is a test with a comment node -->--> </a> </root> [all] Names like "header" or "session" are not valid names anymore now that on demand binding is in place StringResourceModel's Localizer cannot be overwritten According to the Javadoc setLocalizer() should be used to overwrite a StringResourceModel's Localizer. However, the localizer property is overwritten later by the load() method. 
Overriding getLocalizer() does not work either, since getString() uses the localizer property directly. I think the load method should check whether the property is already set before applying the application's localizer.
Refactor the InternalReference to allow references to be serializable
Allow configurable shutdown timeout that ensures that a SA can be stopped by canceling sync requests Some SUs cannot be shut down as long as there are pending sendSync exchanges (e.g. a JMS DefaultMessageListenerContainer will wait for all threads to complete). While this is a good default behavior, if you have long-running sync exchanges, you might want to be able to forcibly shut down the SA by canceling all the sync exchanges.
WicketTester Cookie handling While trying to test my SecureForm implementation (https://issues.apache.org/jira/browse/WICKET-1885) with WicketTester I ran into this issue: A cookie set in the response never shows up in the "next" request, because both have their own lists of cookies that aren't shared. Afaik both should share the same List instance to handle cookies. That way it's possible to set a cookie in the response and read it from the request. A simple testcase is attached.
conf files with templates should be ignored, not tracked, by SVN Files matching conf/*.template should not be tracked by SVN. They should be ignored instead. These files were incorrectly added to SVN as part of [HADOOP 4792|http://issues.apache.org/jira/browse/HADOOP-4792]. These files include:
* chukwa-env.sh
* chukwa-agent-conf.xml
* chukwa-collector-conf.xml
* collectors
* chukwa-demux-conf.xml (a corresponding chukwa-demux.xml.template should be created for this file)
Remove deprecated configuration files from chukwa/conf directory There are deprecated configuration files left over in Chukwa's configuration directory. The safe ones to delete are:
- fields.spec
- joblog.properties
- torque.properties
- util.properties
Kernel 1.1.0 release candidate has 1.1.0-SNAPSHOT dependency in smx4web demo Kernel 1.1.0 release candidate has a 1.1.0-SNAPSHOT dependency in the smx4web demo. See: https://svn.apache.org/repos/asf/servicemix/smx4/kernel/tags/kernel-1.1.0/demos/smx4web/pom.xml <properties> <jetty.port>8080</jetty.port> <jetty.version>6.1.12rc1</jetty.version> <servicemix.kernel.version>1.1.0-SNAPSHOT</servicemix.kernel.version> <geronimo.servlet.version>1.1.2</geronimo.servlet.version> </properties>
Application's classloader should be set during method invocations. Invocation of a JAX-RS method may depend on some application classes which may not be loaded already; they will be loaded during invocation. But during invocation the classloader will be the CXF war application classloader, which will lead to NoClassDefFoundErrors. To avoid this, the root resource class's classloader should be set on the current thread. A similar fix should be applied to provider methods such as readFrom, writeTo etc.
Don't reopen file if already open when updating readers underneath scanners Doing this is costly in scenarios where there are many scanners and many concurrent updates.
Wrong description of "hadoop fs -test" in FS Shell guide. The Hadoop FS Shell Guide documentation for the -test command option -d currently reads: "-d check return 1 if the path is directory else return 0." Whereas it should be: "-d check to see if the path is Directory.
Return 0 if true."
Hive: we should be able to specify a column without a table/alias name "SELECT field1, field2 from table1" should work, just as "SELECT table1.field1, table1.field2 from table1" does. For a join, the situation will be a bit more complicated: if the 2 join operands have columns of the same name, then we should output an "ambiguity" error.
Replication throughput is limited by that of a single federation link Allowing the work to be divided between multiple links would offer one means of improving this.
MyFaces-API issue: getValue of UIInput The issue was seen and fixed in MyFaces 1.1.6 already, but seems to still exist in 1.2.6. UIOutput currently has the following code: public Object getValue() { if (_value != null) return _value; ValueBinding vb = getValueBinding("value"); return vb != null ? (Object)vb.getValue(getFacesContext()) : null; } UIInput has the following code: public void setValue(Object value) { setLocalValueSet(true); super.setValue(value); } My problem (pseudo code):
1) user enters an empty string in an input-component: ""
2) conversion and validation phase: "" --> setValue(null); isLocalValueSet = true; setSubmittedValue(null);
3) validation fails in some component on the page --> update model phase is skipped
4) renderer calls getValue(); --> getValue() evaluates the value binding, as the local value is 'null', and I get the default value of the bean shown again
Proposed solution: UIInput overrides getValue of UIOutput: public Object getValue() { if (isLocalValueSet()) return _value; ValueBinding vb = getValueBinding("value"); return vb != null ? (Object)vb.getValue(getFacesContext()) : null; }
pig should look for and use the pig specific 'pig-cluster-hadoop-site.xml' in the non HOD case just like it does in the HOD case Currently users can create a pig-cluster-hadoop-site.xml with pig-specific overrides for hadoop properties for use on the cluster. This file is searched for in the classpath and used in the HOD case but not in the non HOD case. We should do the same in the non HOD case.
URL to geocoder is incorrect The geocoder has been re-factored and the new URL to access it is 'http://<host:port>/geocoder/geocode'. The rails app in model/geolocation.rb is still using 'http://localhost:8080/Web20Emulator/geocode'.
Add missing dependencies to assembly pom Some newer dependencies are missing from the dependency list of the NMR assembly project. On some occasions I have noticed the assembly building before these modules, so it is possible to include outdated versions or have a failed build if using a clean maven repo.
javadoc warning: can't find restoreFailedStorage() in ClientProtocol ant javadoc-dev
{noformat}
[javadoc] /home/tsz/hadoop/latest/src/hdfs/org/apache/hadoop/hdfs/DistributedFileSystem.java:399: warning - Tag @see: can't find restoreFailedStorage() in org.apache.hadoop.hdfs.protocol.ClientProtocol
[javadoc] /home/tsz/hadoop/latest/src/hdfs/org/apache/hadoop/hdfs/tools/DFSAdmin.java:412: warning - Tag @see: can't find restoreFailedStorage() in org.apache.hadoop.hdfs.protocol.ClientProtocol
{noformat}
Remove Chukwa from .gitignore Since Chukwa has moved to a subproject, its entries in .gitignore can be removed.
Getting Messages for a particular user via twitter api returns a 404 Accessing user messages via the twitter api doesn't work correctly. For example: /api/statuses/user_timeline/esjewett.xml? HTTP/1.1" 404
Camel FTP - move and delete should happen after processing Only applies for 1.x, as 2.0 has a totally redone FTP+File component.
Have discovered that the camel-ftp component deletes/moves the file before processing; so, if processing fails, then the file is not available for redelivery.
ArithmeticException when last button is pressed in empty table The problem is in AbstractHtmlDataScroller.java, lines 307/308: int rows = uiData.getRows(); int delta = rowcount % rows; rows can obviously be 0, so the code should read: int rows = uiData.getRows(); int delta = rows != 0 ? rowcount % rows : 0;
The @PathParam changes '+' to space. When I send a PathParam that includes '+', the '+' is changed to a space (presumably because the value is being URL-decoded as if it were form data, where '+' encodes a space; that decoding should not apply to path segments). In version 2.1.1, the parameter's '+' was unchanged.
System view export truncates carriage return If a string contains a carriage return (\r), this character was truncated on some platforms.
Importing strings with special characters fails Both Session.importXML and Workspace.importXML don't work correctly in some cases. Importing very large foreign-language (for example, Chinese) text property values could result in incorrect values on some platforms. The reason is that BufferedStringValue (which buffers very large strings to a temporary file) uses the platform default encoding to read and write the text. BufferedStringValue is relatively slow on some systems when importing large texts or binary data because of its use of FD().sync(). If an exported string value contains a carriage return (\r), this character was truncated on some platforms. If an exported string value contains characters with codes below 32, excluding newline (\n) and tab (\t) - for example form feed (\f) - the imported string value was base64 encoded.
Wrong TCCL is used when operating service units On JBI service unit startup, the target component's ClassLoader is not assigned as the Thread Context ClassLoader; the jbi.deployer bundle's classloader is used instead. When going through the TransactionManager, the TCCL is changed to the transaction manager's classloader.
File Install treats configuration files with identical subnames as the same configuration Suppose you have two ManagedServiceFactory instances registered with pids 'com.acme.abc' and 'com.acme.xyz'. To configure a default instance for each of the factories I created two .cfg files to be handled by File Install: com.acme.xyz-default.cfg and com.acme.abc-default.cfg. What seems to happen is that File Install creates a first configuration (correctly) and then updates that configuration to the contents of the second configuration file, apparently because the subname is the same. (filename ::= <pid> ( '-' <subname> )? '.cfg') Possible fix: File Install (class DirectoryWatcher) uses the property _alias_factory_pid to identify and retrieve the configurations it manages. The value for this property is currently set to <subname>. A possible fix could be to use <pid>-<subname> (or even <filename>) as the identification value.
spi2davex: some value factory tests from the SPI test suite are failing
Fix the sort function on the DB Browser A title tip and mouse icon do not appear when hovering over the column name. In fact you don't even know that you can sort the column until you look at the template code. Perhaps this is intentional :) I cannot patch the database-view.vm file because svn reports it to be a binary file. I changed line 37 to: <div align="center"><a href="#" title="Sort $column" onClick="window.location.href='$columnLink';return false;" style="color: white;">$column</a></div>
[web] compounds are sometimes rendered twice It happens on producers that call super.produce();
in that case it sees two calls to produce, and so renders the compounded stuff twice.
There are errors when a zip file is extracted on Linux using farm clustering A war can't be deployed to farm clustering successfully, because the zip file extracted on Linux is wrong. After deploying a war to farm clustering (e.g., NODE-A, NODE-B), I found that the files are not at the correct paths, being named in forms like "WEB-INF\classes\...... .java" in the cluster-repository on Linux. I have tried this on RHEL 5.2 and SLES 10; the same error occurs. I think this is a bug, please check it. My steps:
1. Log in on the NODE-A server
1.1 In var\config\config-substitutions.properties:
{code:xml}
clusterNodeName=NODE --> clusterNodeName=NODE-A
RemoteDeployHostname=MachineA_IP
{code}
1.2 In var\config\config.xml, add the following contents to module org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car:
{code:xml}
<gbean name="org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car?ServiceModule=org.apache.geronimo.configs/farming/2.1.4/car,j2eeType=NodeInfo,name=NodeInfoB" gbeanInfo="org.apache.geronimo.farm.config.BasicNodeInfo">
<attribute name="name">NODE-B</attribute>
<attribute propertyEditor="org.apache.geronimo.farm.config.BasicExtendedJMXConnectorInfoEditor" name="extendedJMXConnectorInfo">
<ns:javabean class="org.apache.geronimo.farm.config.BasicExtendedJMXConnectorInfo" xmlns:ns4="http://geronimo.apache.org/xml/ns/attributes-1.2" xmlns:ns="http://geronimo.apache.org/xml/ns/deployment/javabean-1.0" xmlns="">
<ns:property name="username">system</ns:property>
<ns:property name="password">manager</ns:property>
<ns:property name="protocol">rmi</ns:property>
<ns:property name="host">MachineB_IP</ns:property>
<ns:property name="port">1099</ns:property>
<ns:property name="urlPath">JMXConnector</ns:property>
<ns:property name="local">false</ns:property>
</ns:javabean></attribute>
</gbean>
{code}
2. Log in on the NODE-B server
2.1 In var\config\config-substitutions.properties:
{code:xml}
clusterNodeName=NODE --> clusterNodeName=NODE-B
RemoteDeployHostname=MachineB_IP
{code}
2.2 In var\config\config.xml, add the following contents to module org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car:
{code:xml}
<gbean name="org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car?ServiceModule=org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car,j2eeType=NodeInfo,name=NodeInfoA" gbeanInfo="org.apache.geronimo.farm.config.BasicNodeInfo">
<attribute name="name">NODE-A</attribute>
<attribute propertyEditor="org.apache.geronimo.farm.config.BasicExtendedJMXConnectorInfoEditor" name="extendedJMXConnectorInfo">
<ns:javabean class="org.apache.geronimo.farm.config.BasicExtendedJMXConnectorInfo" xmlns:ns4="http://geronimo.apache.org/xml/ns/attributes-1.2" xmlns:ns="http://geronimo.apache.org/xml/ns/deployment/javabean-1.0" xmlns="">
<ns:property name="username">system</ns:property>
<ns:property name="password">manager</ns:property>
<ns:property name="protocol">rmi</ns:property>
<ns:property name="host">MachineA_IP</ns:property>
<ns:property name="port">1099</ns:property>
<ns:property name="urlPath">JMXConnector</ns:property>
<ns:property name="local">false</ns:property>
</ns:javabean></attribute>
</gbean>
{code}
3.
Start NODE- A server , and NODE-B server 4.Run the following commands in GERONIMO_HOME\bin in Machine A {code:xml} 4.1 deploy.bat/sh --user system --password manager start org.apache.geronimo.configs/farming//car 4.2 deploy.bat/sh --user system --password manager --host MachineB_IP start org.apache.geronimo.configs/farming//car {code} 5.deploy.bat/sh --user system --password manager deploy --targets {code:xml} org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car?ServiceModule=org.apache.geronimo.configs/farming/2.1.4-SNAPSHOT/car,j2eeType=Confi gurationStore,name=MasterConfigurationStore SAMPLE_HOME\applications\tomcat-cluster\servlet-examples-cluster-server1.war servlet-examples-cluster-plan.xml {code} Deadlock between ManagementBroker::userLock and LinkRegistry::lock Dispatching a management method holds the userLock in ManagementBroker; it may then require the lock in the LinkRegistry if it is to declare a link or bridge. Linkregistry::notifyConnection() firsttakes the Linkregistry lock, then as part of establishing the connection it raises a management event which requires the managementBrokers userLock. Thread 5 (Thread -1208362080 (LWP 28312)): #0 0x45b8b410 in __kernel_vsyscall () #1 0x45f0b97e in __lll_mutex_lock_wait () from /lib/libpthread.so.0 #2 0x45f08247 in _L_mutex_lock_340 () from /lib/libpthread.so.0 #3 0x45f0819f in pthread_mutex_lock () from /lib/libpthread.so.0 #4 0x45c7dd29 in pthread_mutex_lock () from /lib/libc.so.6 #5 0x00edf33d in qpid::management::ManagementBroker::periodicProcessing () #6 0x00ee00a6 in qpid::management::ManagementBroker::Periodic::fire () #7 0x00eceafb in qpid::broker::Timer::run () #8 0x00a01c11 in qpid::sys::(anonymous namespace)::runRunnable () #9 0x45f0640b in start_thread () from /lib/libpthread.so.0 #10 0x45c71b7e in clone () from /lib/libc.so.6 Thread 4 (Thread -1218851936 (LWP 28313)): #0 0x45b8b410 in __kernel_vsyscall () #1 0x45f0b97e in __lll_mutex_lock_wait () from /lib/libpthread.so.0 #2 0x45f08247 in _L_mutex_lock_340 () from /lib/libpthread.so.0 #3 0x45f0819f in pthread_mutex_lock () from /lib/libpthread.so.0 #4 0x45c7dd29 in pthread_mutex_lock () from /lib/libc.so.6 #5 0x00e715cb in qpid::broker::LinkRegistry::periodicMaintenance () #6 0x00e71fa8 in qpid::broker::LinkRegistry::Periodic::fire () #7 0x00eceafb in qpid::broker::Timer::run () #8 0x00a01c11 in qpid::sys::(anonymous namespace)::runRunnable () #9 0x45f0640b in start_thread () from /lib/libpthread.so.0 #10 0x45c71b7e in clone () from /lib/libc.so.6 Thread 3 (Thread -1229341792 (LWP 28314)): #0 0x45b8b410 in __kernel_vsyscall () #1 0x45f0967c in pthread_cond_timedwait@@GLIBC_2.3.2 () #2 0x45c7dbf4 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libc.so.6 #3 0x00ecebf0 in qpid::broker::Timer::run () #4 0x00a01c11 in qpid::sys::(anonymous namespace)::runRunnable () #5 0x45f0640b in start_thread () from /lib/libpthread.so.0 #6 0x45c71b7e in clone () from /lib/libc.so.6 Thread 2 (Thread -1239831648 (LWP 28317)): #0 0x45b8b410 in __kernel_vsyscall () #1 0x45f0b97e in __lll_mutex_lock_wait () from /lib/libpthread.so.0 #2 0x45f08247 in _L_mutex_lock_340 () from /lib/libpthread.so.0 #3 0x45f0819f in pthread_mutex_lock () from /lib/libpthread.so.0 #4 0x45c7dd29 in pthread_mutex_lock () from /lib/libc.so.6 #5 0x00e6f1bc in qpid::broker::LinkRegistry::declare () <--- LinkRegistry::lock required #6 0x00e6b2d1 in qpid::broker::Link::ManagementMethod () #7 0x00dd9ae5 in qmf::org::apache::qpid::broker::Link::doMethod () #8 0x00ede600 in 
qpid::management::ManagementBroker::handleMethodRequestLH () #9 0x00ee811c in qpid::management::ManagementBroker::dispatchAgentCommandLH () #10 0x00ee8650 in qpid::management::ManagementBroker::dispatchCommand () <--- ManagementBroker::userLock held #11 0x00eebbf3 in qpid::broker::ManagementExchange::route () #12 0x00eaab9d in qpid::broker::SemanticState::route () #13 0x00eabeea in qpid::broker::SemanticState::handle () #14 0x00ec3412 in qpid::broker::SessionState::handleContent () #15 0x00ec3a03 in qpid::broker::SessionState::handleIn () #16 0x00ec73eb in qpid::framing::Handler<qpid::framing::AMQFrame&>::MemFunRef<qp id::framing::Handler<qpid::framing::AMQFrame&>::InOutHandlerInterface, &(qpid::f raming::Handler<qpid::framing::AMQFrame&>::InOutHandlerInterface::handleIn(qpid: :framing::AMQFrame&))>::handle () #17 0x00a2ddaf in qpid::amqp_0_10::SessionHandler::handleIn () #18 0x00ec73eb in qpid::framing::Handler<qpid::framing::AMQFrame&>::MemFunRef<qp id::framing::Handler<qpid::framing::AMQFrame&>::InOutHandlerInterface, &(qpid::f raming::Handler<qpid::framing::AMQFrame&>::InOutHandlerInterface::handleIn(qpid: :framing::AMQFrame&))>::handle () #19 0x00e384ac in qpid::broker::Connection::received () #20 0x00bc3602 in ?? () #21 0xb380134c in ?? () #22 0x09d82608 in ?? () #23 0x09d851b8 in ?? () #24 0x45c7dd55 in pthread_mutex_unlock () from /lib/libc.so.6 #25 0x00ba3773 in ?? () #26 0xb38012a8 in ?? () #27 0x09d825f8 in ?? () #28 0x09d825f8 in ?? () #29 0xb6199dd8 in ?? () #30 0x000000c1 in ?? () #31 0x00ba47fd in ?? () #32 0x09d80568 in ?? () #33 0x09d825f8 in ?? () #34 0x00000002 in ?? () #35 0x00000001 in ?? () #36 0xb6199e68 in ?? () #37 0x00bac664 in ?? () #38 0x09d815e0 in ?? () #39 0x09d825f8 in ?? () #40 0xb6199e48 in ?? () #41 0x45c7dd55 in pthread_mutex_unlock () from /lib/libc.so.6 Thread 1 (Thread -1208158480 (LWP 28311)): #0 0x45b8b410 in __kernel_vsyscall () #1 0x45f0b97e in __lll_mutex_lock_wait () from /lib/libpthread.so.0 #2 0x45f08247 in _L_mutex_lock_340 () from /lib/libpthread.so.0 #3 0x45f0819f in pthread_mutex_lock () from /lib/libpthread.so.0 #4 0x45c7dd29 in pthread_mutex_lock () from /lib/libc.so.6 #5 0x00ee012a in qpid::management::ManagementBroker::raiseEvent () <--- ManagementBroker::userLock required #6 0x00e6ac23 in qpid::broker::Link::established () #7 0x00e70805 in qpid::broker::LinkRegistry::notifyConnection () <--- LinkRegistry::lock held #8 0x00e3b57d in qpid::broker::Connection::Connection () #9 0x00bc80ee in ?? () #10 0x09d84dcc in ?? () #11 0x09d84d68 in ?? () #12 0x09d7ef88 in ?? () #13 0x09d85b24 in ?? () #14 0x00000001 in ?? () #15 0x00000000 in ?? () for any core registry.jsp (aka: "INFO" link) incorrectly lists info about last core declared registry.jsp is still using deprecated access to the singleton SolrCore.getSolrCore() ... easy fix. The CxfBusLifeCycleManager can cause ConcurrentModificationExceptions to be thrown The CxfBusLifeCycleManager has a list of listeners. in a number of occasions it iterates over this list and makes calls out to each listener. Each listener can then call back into the CxfBusLifeCycleManager and potentially try access the listeners. This causes the exception. Typically the exception thrown looks like. 
java.util.ConcurrentModificationException at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:449) at java.util.AbstractList$Itr.next(AbstractList.java:420) at org.apache.cxf.buslifecycle.CXFBusLifeCycleManager.preShutdown(CXFBusLifeCycleManager.java:81) at org.apache.cxf.bus.CXFBusImpl.shutdown(CXFBusImpl.java:122) at org.apache.cxf.testutil.common.AbstractBusClientServerTestBase.deleteStaticBus(AbstractBusClientServerTestBase.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.junit.internal.runners.BeforeAndAfterRunner.invokeMethod(BeforeAndAfterRunner.java:74) at org.junit.internal.runners.BeforeAndAfterRunner.runAfters(BeforeAndAfterRunner.java:65) at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:37) at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:125) at org.apache.maven.surefire.Surefire.run(Surefire.java:132) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:818) The libhdfs append API is not coded correctly The hdfsOpenFile() API does not handle the APPEND bit correctly. Got an exception from ClientFinalizer when the JT is terminated This happens when we terminate the JT using _control-C_. It throws the following exception {noformat} Exception closing file my-file java.io.IOException: Filesystem closed at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:193) at org.apache.hadoop.hdfs.DFSClient.access$700(DFSClient.java:64) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:2868) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:2837) at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:808) at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:205) at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:253) at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1367) at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:234) at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:219) {noformat} Note that _my-file_ is some file used by the JT. Also if there is some file renaming done, then the exception states that the earlier file does not exist. I am not sure if this is a MR issue or a DFS issue. Opening this issue for investigation. TestInjectionForSimulatedStorage occasionally fails on timeout Occasionally TestInjectionForSimulatedStorage falls into an infinite loop, waiting for a block to reach its replication factor. 
The log repeatedly prints the following message: dfs.TestInjectionForSimulatedStorage (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not enough replicas for 2th block blk_6302924909504458109_1001 yet. Expecting 4, got 2.
[hive] null pointer exception with nulls in map-side aggregation
thumbnail image names are inconsistent across Rails application and Fileloader. The rails application expects thumbnails to have the format e<number>t.jpg but the loader creates files with the format e<number>_thumb.jpg
Fileloader NumberFormatException. I see the following fileloader.sh error on OpenSolaris:
-bash-3.2# /export/faban/091008/faban/benchmarks/OlioDriver/bin/fileloader.sh
Usage: /export/faban/091008/faban/benchmarks/OlioDriver/bin/fileloader.sh [concurrent users] <target directory>
-bash-3.2# /export/faban/091008/faban/benchmarks/OlioDriver/bin/fileloader.sh 200 /export/olio/webapp/rails/trunk/public/uploaded_files
Exception in thread "main" java.lang.NumberFormatException: For input string: "/export/olio/webapp/rails/trunk/public/uploaded_files"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:447)
at java.lang.Integer.parseInt(Integer.java:497)
at org.apache.olio.workload.fsloader.FileLoader.main(FileLoader.java:34)
ERROR: File loader exited with code 1.
-bash-3.2# /export/faban/091008/faban/benchmarks/OlioDriver/bin/fileloader.sh
Usage: /export/faban/091008/faban/benchmarks/OlioDriver/bin/fileloader.sh [concurrent users] <target directory>
db name in dbloader.sh needs to be up to date. The db name in dbloader.sh needs to be in sync with the latest naming, i.e., "olio":
--- dbloader.sh.orig Tue Mar 3 17:03:06 2009
+++ dbloader.sh.olio Tue Mar 3 17:40:29 2009
@@ -34,7 +34,7 @@
 export CLASSPATH
 $JAVA_HOME/bin/java -server org.apache.olio.workload.loader.LoadController com.mysql.jdbc.Driver \
-"jdbc:mysql://$DB_HOST/web20ror?user=web20&password=web20&relaxAutoCommit=true&sessionVariables=FOREIGN_KEY_CHECKS=0" $SCALE
+"jdbc:mysql://$DB_HOST/olio?user=web20&password=web20&relaxAutoCommit=true&sessionVariables=FOREIGN_KEY_CHECKS=0" $SCALE
 EXIT_CODE=$?
 if [ "$EXIT_CODE" = 0 ] ; then
 echo "Database Load Successful"
Caller application hangs in case it uses the polling (Response) method with JAX-WS async mapping and an http error occurs during sending. In case of callback style (AsyncHandler), the client application has no way of getting the exception. This happens if, for example, an http 404 occurs at sending. If the caller app wants to retrieve the response using Response<ResponseBean>.get(), it hangs forever. If it implements the AsyncHandler<ResponseBean> method, handleResponse never gets called, which means that the app does not get notified of the exception. The attached patches are against http://fisheye6.atlassian.com/browse/~raw,r=651669/cxf/trunk/rt/core/src/main/java/org/apache/cxf/interceptor/ClientOutFaultObserver.java and http://fisheye6.atlassian.com/browse/~raw,r=743441/cxf/trunk/rt/frontend/jaxws/src/main/java/org/apache/cxf/jaxws/JaxWsClientProxy.java
Probe free ports dynamically for unit tests to replace fixed ports Currently the hdfsproxy unit test uses a Cactus in-container test. It uses three fixed ports: one for Tomcat start-up/shut-down, another for the Tomcat http port, and the third for the Tomcat https port. If these ports are already in use, the ant build will fail. To fix this, we decided to use a java program to probe free ports dynamically and update the tomcat conf with these free ports.
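A minimal sketch of such a probe follows, assuming it is acceptable to bind to port 0 and let the OS hand back a free ephemeral port; the class name is illustrative, not part of the actual patch.
{code}
import java.io.IOException;
import java.net.ServerSocket;

public final class FreePortProbe {
    // Binding to port 0 asks the OS for any free port; we record the port
    // number and release the socket so the test's Tomcat config can use it.
    public static int probeFreePort() throws IOException {
        ServerSocket socket = new ServerSocket(0);
        try {
            return socket.getLocalPort();
        } finally {
            socket.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(probeFreePort());
    }
}
{code}
Note there is an inherent race: the probed port could be taken by another process before Tomcat binds it, so probing reduces, but does not eliminate, the chance of a clash.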
Use of enums in other namespaces breaks java generated code The current Java code generator assumes that all enumerated types you might use in your structs are in the current namespace, which could be incorrect. We should prefix the namespace if is different than the current one. helgrind thread issues identified in mt c client code helgrind generated a number of issues, I pulled a bunch of them. Most are related to the test, some are really issues with the mt zk client code though: valgrind --tool=helgrind --log-file=helgrind_mt.out ./zktest-mt ==31294== Thread #2: pthread_cond_{timed}wait called with un-held mutex ==31294== at 0x4027F8F: pthread_cond_wait@* (hg_intercepts.c:560) ==31294== by 0x404D881: pthread_cond_wait@GLIBC_2.0 (in /lib/tls/i686/cmov/libpthread-2.8.90.so) ==31294== by 0x4028037: pthread_cond_wait@* (hg_intercepts.c:574) ==31294== by 0x809EBB7: pthread_cond_wait (PthreadMocks.cc:54) ==31294== by 0x80ABCF6: notify_thread_ready (mt_adaptor.c:136) ==31294== by 0x80ABE90: do_io (mt_adaptor.c:277) ==31294== Possible data race during write of size 4 at 0x42E9A58 ==31294== at 0x8050D83: terminateZookeeperThreads(_zhandle*) (ZKMocks.cc:518) ==31294== by 0x805543B: DeliverWatchersWrapper::call(_zhandle*, int, int, char const*, watcher_object_list**) (ZKMocks.cc:261) ==31294== by 0x80520F7: __wrap_deliverWatchers (ZKMocks.cc:220) ==31294== by 0x80A287B: process_completions (zookeeper.c:1393) ==31294== by 0x80ABDAA: do_completion (mt_adaptor.c:332) ==31294== Possible data race during write of size 4 at 0xBEFF5F30 ==31294== at 0x80589AF: Zookeeper_watchers::ConnectionWatcher::~ConnectionWatcher() (TestWatchers.cc:54) ==31294== by 0x805D062: Zookeeper_watchers::testDefaultSessionWatcher1() (TestWatchers.cc:438) ==31294== by 0x805608C: CppUnit::TestCaller<Zookeeper_watchers>::runTest() (TestCaller.h:166) ==31294== Possible data race during write of size 4 at 0x42EB104 ==31294== at 0x80A03EE: queue_completion (zookeeper.c:1776) ==31294== by 0x80A3A44: zookeeper_process (zookeeper.c:1598) ==31294== by 0x80AC00B: do_io (mt_adaptor.c:309) ==31294== Thread #29: pthread_cond_{timed}wait called with un-held mutex ==31294== at 0x4027F8F: pthread_cond_wait@* (hg_intercepts.c:560) ==31294== by 0x404D881: pthread_cond_wait@GLIBC_2.0 (in /lib/tls/i686/cmov/libpthread-2.8.90.so) ==31294== by 0x4028037: pthread_cond_wait@* (hg_intercepts.c:574) ==31294== by 0x809EBB7: pthread_cond_wait (PthreadMocks.cc:54) ==31294== by 0x80AB9B3: wait_sync_completion (mt_adaptor.c:82) ==31294== by 0x80A1E82: zoo_wget (zookeeper.c:2517) ==31294== by 0x80A1F13: zoo_get (zookeeper.c:2497) core dump using zoo_get_acl() The zookeeper_process() function incorrectly calls the c.acl_result member of the completion_list_t structure when handling the completion from a synchronous zoo_get_acl() request. The c.acl_result member is set to SYNCHRONOUS_MARKER, which is a null pointer. The attached patch removes this call. NullPointerException during ASSIGN of complex node returned from XQuery I did a following xquery assign: <assign name="assign1"> <copy> <from expressionLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xquery1.0"> <![CDATA[ for $loopOnce in (1) return <test:test1> <test:test2>abc</test:test2> </test:test1> ]]> </from> <to variable="myVar" part="TestPart"/> </copy> </assign> and run it in servicemix. 
I got: 13:26:57,703 | ERROR | pool-4-thread-1 | BpelEngineImpl | ode.bpel.engine.BpelEngineImpl 433 | Scheduled job failed; jobDetail={type=INVOKE_INTERNAL, mexid=65536, pid={http://ode/bpel/unit-test}HelloXQueryWorld-1} java.lang.RuntimeException: java.lang.NullPointerException at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:464) at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139) at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:847) at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:206) at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:215) at org.apache.ode.bpel.engine.BpelProcess.handleWorkEvent(BpelProcess.java:402) at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:424) at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:377) at org.apache.ode.scheduler.simple.SimpleScheduler$4$1.call(SimpleScheduler.java:386) at org.apache.ode.scheduler.simple.SimpleScheduler$4$1.call(SimpleScheduler.java:380) at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:208) at org.apache.ode.scheduler.simple.SimpleScheduler$4.call(SimpleScheduler.java:379) at org.apache.ode.scheduler.simple.SimpleScheduler$4.call(SimpleScheduler.java:376) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269) at java.util.concurrent.FutureTask.run(FutureTask.java:123) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675) at java.lang.Thread.run(Thread.java:595) Caused by: java.lang.NullPointerException at org.apache.xerces.dom.CoreDocumentImpl.importNode(Unknown Source) at org.apache.xerces.dom.CoreDocumentImpl.importNode(Unknown Source) at org.apache.ode.bpel.runtime.ASSIGN.replaceElement(ASSIGN.java:489) at org.apache.ode.bpel.runtime.ASSIGN.copy(ASSIGN.java:416) at org.apache.ode.bpel.runtime.ASSIGN.run(ASSIGN.java:81) at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451) ... 17 more
I noticed that this example runs successfully in the ODE tests. One difference I saw is that the evaluated node list is a Xerces DOM implementation in the ODE tests, but a Saxon DOM implementation in ServiceMix.
Unable to resolve scripting languages in OSGi environment The OsgiLanguageResolver.java introduced by CAMEL-1221 does not resolve scripting languages by using the default resolver declared in camel-script. The DefaultLanguageResolver, however, handles this properly in the non-OSGi context. This leads to a NullPointerException while launching a route in an OSGi container using a scripting language, e.g. javascript. Exception in thread "SpringOsgiExtenderThread-2" java.lang.NullPointerException at org.apache.camel.model.language.ExpressionType.createPredicate(ExpressionType.java:145) at org.apache.camel.model.ExpressionNode.createFilterProcessor(ExpressionNode.java:95) at org.apache.camel.model.WhenType.createProcessor(WhenType.java:57) at org.apache.camel.model.ChoiceType.createProcessor(ChoiceType.java:73) ...
The start location can be different between the two vectors The start location can be different between the two vectors.
Then, in DenseVector add(double alpha, Vector v): for (int i = 0; i < this.size(); i++) { set(i, alpha * v.get(i) + get(i)); } This code will not do anything. DenseMatrix.setColumn(Vector v) should also be fixed.
Unreferenced sessions should get garbage collected If an application opens many sessions and doesn't close them, they are never garbage collected. After some time, the virtual machine will run out of memory. This code will run out of memory after a few thousand logins: Repository rep = new TransientRepository(); for (int i = 0; ; i++) { rep.login(new SimpleCredentials("", new char[0])); } Using a finalizer to close SessionImpl doesn't work, because it seems there are references from the hard-referenced part of the cache to the SessionImpl objects. Maybe it is possible to remove those references, or change them to weak references.
GeneralizedTime.toString() generates wrong output when TimeZone has hours < 10 and minutes > 10 The GeneralizedTime.toString() method produces wrong output when the TimeZone has hours < 10 and minutes > 10. GeneralizedTime gt = new GeneralizedTime( "20090312123456+0130" ); System.out.println( gt ); This snippet displays: 20090312123456+01030
Job files for a job failing because of ACLs are not cleaned from the system directory Jobs which failed because of ACLs get added during JT restart recovery
Successful taskids are not removed from TaskMemoryManager Successfully completed task-attempt-ids are not removed from TaskMemoryManager. This is after refactoring the code in tracker.reportTaskFinished into tip.reportTaskFinished, in HADOOP-4759
Large file download over webdav causes exception Downloading a large file (>2GB) from webdav causes an exception. (Note: uploading the file works OK when jackrabbit is configured to use the filesystem DataStore.) When trying to retrieve the file with e.g. "wget", we get the following error: Gozer:Desktop greg$ wget --http-user=xxx --http-passwd=xxx http://localhost:8080/jackrabbit/repository/workbench/pkgs/demo/zip/zips/largetest-1.zip --08:59:50-- http://localhost:8080/jackrabbit/repository/workbench/pkgs/demo/zip/zips/largetest-1.zip => `largetest-1.zip' Resolving localhost... done. Connecting to localhost[127.0.0.1]:8080... connected. HTTP request sent, awaiting response... 500 For input string: "3156213760" 09:04:53 ERROR 500: For input string: "3156213760".
In the server log we see this: 06.03.2009 08:59:50 *INFO * RepositoryImpl: SecurityManager = class org.apache.jackrabbit.core.security.simple.SimpleSecurityManager (RepositoryImpl.java, line 432) 2009-03-06 09:04:53.822::WARN: /jackrabbit/repository/workbench/pkgs/demo/zip/zips/largetest-1.zip java.lang.NumberFormatException: For input string: "3156213760" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:459) at java.lang.Integer.parseInt(Integer.java:497) at org.apache.jackrabbit.webdav.io.OutputContextImpl.setContentLength(OutputContextImpl.java:60) at org.apache.jackrabbit.server.io.ExportContextImpl.informCompleted(ExportContextImpl.java:192) at org.apache.jackrabbit.server.io.IOManagerImpl.exportContent(IOManagerImpl.java:157) at org.apache.jackrabbit.webdav.simple.DavResourceImpl.spool(DavResourceImpl.java:332) at org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.spoolResource(AbstractWebdavServlet.java:422) at org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.doGet(AbstractWebdavServlet.java:388) at org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.execute(AbstractWebdavServlet.java:229) at org.apache.jackrabbit.webdav.server.AbstractWebdavServlet.service(AbstractWebdavServlet.java:196) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451) The problem seems to lie in OutputContextImpl.java it makes the mistake of potentially trying to parse a Long as an Integer, here: http://svn.apache.org/repos/asf/jackrabbit/trunk/jackrabbit-webdav/src/main/java/org/apache/jackrabbit/webdav/io/OutputContextImpl.java in the method setContentLength(long contentLength): public void setContentLength(long contentLength) { int length = Integer.parseInt(contentLength + ""); if (length >= 0) { response.setContentLength(length); } } I'm not sure, but a fix might be like this: public void setContentLength(long contentLength) { if(contentLength <= Integer.MAX_VALUE && contentLength >= 0) { response.setContentLength((int) contentLength); }else if (contentLength > Integer.MAX_VALUE) { response.addHeader("Content-Length", Long.toString(contentLength)); } } This would at least set the Content-Length header, and in some preliminary tests does 
Temp files created for data over 64kb are never deleted
CXF saves incoming data greater than 64kb to a temp directory in a [cached output stream|http://svn.apache.org/viewvc/cxf/trunk/api/src/main/java/org/apache/cxf/io/CachedOutputStream.java?view=markup]. This makes sense, since we don't want large messages to cause the JVM to run out of memory. The problem is that these files never seem to be deleted. You can see this if you run the mtom sample. You can change the directory that the temp file is written to using the org.apache.cxf.io.CachedOutputStream.OutputDirectory property. Notice with the mtom sample that two files are written to the directory for the client and server. This is because the client invokes twice, once with a byte array and once with a data handler. If you know the size of all messages that may be received, you can avoid this problem by setting the threshold for creating a temp file to be larger than the largest expected file. The property for this (measured in bytes) is org.apache.cxf.io.CachedOutputStream.Threshold. In my investigation it seems that there are two possible problems here. The first is that for the byte array the output stream is locked for future use, but not closed. So, when we try to delete the file, a stream is still in use and the file cannot be deleted. If you close the stream when marking it as locked, this problem seems to go away, although I haven't fully tested this, so I am unsure of the knock-on effects:
{code}
/**
 * Locks the output stream to prevent additional writes, but maintains
 * a pointer to it so an InputStream can be obtained
 * @throws IOException
 */
public void lockOutputStream() throws IOException {
    currentStream.flush();
    outputLocked = true;
    // Not sure of the impact of this close - mtom sample still works fine.
    currentStream.close();
    streamList.remove(currentStream);
}
{code}
However, for the data handler, it seems that there is never any attempt to delete the file. The data handler appears to have no knowledge of the temp file, and even if I close the input stream it is using from the mtom client mainline, the temp file is still never deleted.

Temporary files are not deleted under windows
When obtaining packets of data from a webservice, temporary files created for binary data are not deleted when running under Windows. Please see CXF-1743 for a description of this issue. It only occurs under Windows, not under Linux.

OpenJPA PCEnhancer ant task failure causes full EJB3 runtime mode failure
The Daytrader 2.2-snapshot EJB module build fails because of a missing dependency package.
The failure exception looks like: [INFO] [antrun:run {execution: default}] [INFO] Executing tasks [java] java.lang.NoClassDefFoundError: serp.bytecode.Instruction [java] at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:180) [java] at org.apache.tools.ant.taskdefs.Java.run(Java.java:710) [java] at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:178) [java] at org.apache.tools.ant.taskdefs.Java.execute(Java.java:84) [java] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275) [java] at org.apache.tools.ant.Task.perform(Task.java:364) [java] at org.apache.tools.ant.Target.execute(Target.java:341) [java] at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:108) [java] at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:83) [java] at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:499) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:478) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142) [java] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336) [java] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129) [java] at org.apache.maven.cli.MavenCli.main(MavenCli.java:287) [java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [java] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45) [java] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) [java] at java.lang.reflect.Method.invoke(Method.java:599) [java] at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) [java] at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) [java] at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) [java] at org.codehaus.classworlds.Launcher.main(Launcher.java:375) [java] Caused by: java.lang.NoClassDefFoundError: serp.bytecode.Instruction [java] at java.lang.J9VMInternals.verifyImpl(Native Method) [java] at java.lang.J9VMInternals.verify(J9VMInternals.java:72) [java] at java.lang.J9VMInternals.initialize(J9VMInternals.java:134) [java] at java.lang.Class.forNameImpl(Native Method) [java] at java.lang.Class.forName(Class.java:169) [java] at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:119) [java] ... 26 more [java] Caused by: java.lang.ClassNotFoundException: serp.bytecode.Instruction [java] at org.apache.tools.ant.AntClassLoader.findClassInComponents(AntClassLoader.java:1166) [java] at org.apache.tools.ant.AntClassLoader.findClass(AntClassLoader.java:1107) [java] at org.apache.tools.ant.AntClassLoader.loadClass(AntClassLoader.java:983) [java] at java.lang.ClassLoader.loadClass(ClassLoader.java:609) [java] ...
32 more [java] --- Nested Exception --- [java] java.lang.NoClassDefFoundError: serp.bytecode.Instruction [java] at java.lang.J9VMInternals.verifyImpl(Native Method) [java] at java.lang.J9VMInternals.verify(J9VMInternals.java:72) [java] at java.lang.J9VMInternals.initialize(J9VMInternals.java:134) [java] at java.lang.Class.forNameImpl(Native Method) [java] at java.lang.Class.forName(Class.java:169) [java] at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:119) [java] at org.apache.tools.ant.taskdefs.Java.run(Java.java:710) [java] at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:178) [java] at org.apache.tools.ant.taskdefs.Java.execute(Java.java:84) [java] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275) [java] at org.apache.tools.ant.Task.perform(Task.java:364) [java] at org.apache.tools.ant.Target.execute(Target.java:341) [java] at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:108) [java] at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:83) [java] at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:499) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:478) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291) [java] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142) [java] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336) [java] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129) [java] at org.apache.maven.cli.MavenCli.main(MavenCli.java:287) [java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [java] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45) [java] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) [java] at java.lang.reflect.Method.invoke(Method.java:599) [java] at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) [java] at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) [java] at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) [java] at org.codehaus.classworlds.Launcher.main(Launcher.java:375) [java] Caused by: java.lang.ClassNotFoundException: serp.bytecode.Instruction [java] at org.apache.tools.ant.AntClassLoader.findClassInComponents(AntClassLoader.java:1166) [java] at org.apache.tools.ant.AntClassLoader.findClass(AntClassLoader.java:1107) [java] at org.apache.tools.ant.AntClassLoader.loadClass(AntClassLoader.java:983) [java] at java.lang.ClassLoader.loadClass(ClassLoader.java:609) [java] ... 
32 more
[INFO] Executed tasks

unmarshaling of data always sends the last occurrence of the stream
The unmarshaling method of BindyDataFormat (CSV or Key Value Pair) does not create a new model object for each new line read from the stream.

Non-transactional datasource deployment descriptors use transactional definitions in DB2 and Oracle deployment plans
In the DB2 and Oracle deployment plans, there are deployment descriptors like this:
DB2:
<connectiondefinition-instance>
  <name>jdbc/NoTxTradeDataSource</name>
  <config-property-setting name="UserName">trade</config-property-setting>
  <config-property-setting name="Password">trade</config-property-setting>
  <config-property-setting name="PortNumber">50001</config-property-setting>
  <config-property-setting name="ServerName">localhost</config-property-setting>
  <config-property-setting name="DatabaseName">tradedb</config-property-setting>
  <config-property-setting name="DriverType">4</config-property-setting>
  <connectionmanager>
    <xa-transaction>
      <transaction-caching/>
    </xa-transaction>
    <single-pool>
      <max-size>10</max-size>
      <min-size>0</min-size>
      <blocking-timeout-milliseconds>5000</blocking-timeout-milliseconds>
      <idle-timeout-minutes>30</idle-timeout-minutes>
      <match-one/>
    </single-pool>
  </connectionmanager>
</connectiondefinition-instance>
Oracle:
<connectiondefinition-instance>
  <name>jdbc/NoTxTradeDataSource</name>
  <config-property-setting name="UserName">trade</config-property-setting>
  <config-property-setting name="Password">trade</config-property-setting>
  <config-property-setting name="DatabaseName">tradedb</config-property-setting>
  <config-property-setting name="DataSourceName">TradeDataSource</config-property-setting>
  <config-property-setting name="ServerName">localhost</config-property-setting>
  <config-property-setting name="PortNumber">1160</config-property-setting>
  <config-property-setting name="DriverType">thin</config-property-setting>
  <connectionmanager>
    <xa-transaction>
      <transaction-caching/>
    </xa-transaction>
    <single-pool>
      <max-size>10</max-size>
      <min-size>0</min-size>
      <blocking-timeout-milliseconds>5000</blocking-timeout-milliseconds>
      <idle-timeout-minutes>30</idle-timeout-minutes>
      <match-one/>
    </single-pool>
  </connectionmanager>
</connectiondefinition-instance>
Obviously, the snippet "<xa-transaction><transaction-caching/></xa-transaction>" is not correct for a non-transactional datasource.

AxisCallback#onComplete is not called in OutInAxisOperation.NonBlockingInvocationWorker#run
Hi, I'm trying to refactor my existing code and replace the deprecated Callback interface with the new AxisCallback. I miss the call of AxisCallback#onComplete() while invoking a non-blocking service in OutInAxisOperation.NonBlockingInvocationWorker#run. I expect the onComplete call after the call of onMessage() or onFault(), as is already done for the Callback instance (setComplete()). I mean these lines:
{code}
public void run() {
    try {
        // send the request and wait for response
        MessageContext response = send(msgctx);
        // call the callback
        if (response != null) {
            SOAPEnvelope resenvelope = response.getEnvelope();
            SOAPBody body = resenvelope.getBody();
            if (body.hasFault()) {
                // If a fault was found, create an AxisFault with a MessageContext so that
                // other programming models can deserialize the fault to an alternative form.
                AxisFault fault = new AxisFault(body.getFault(), response);
                if (callback != null) {
                    callback.onError(fault);
                } else {
                    axisCallback.onError(fault);
                }
            } else {
                if (callback != null) {
                    AsyncResult asyncResult = new AsyncResult(response);
                    callback.onComplete(asyncResult);
                } else {
                    axisCallback.onMessage(response);
                }
            }
        }
    } catch (Exception e) {
        if (callback != null) {
            callback.onError(e);
        } else {
            axisCallback.onError(e);
        }
    } finally {
        if (callback != null) {
            callback.setComplete(true);
        }
    }
}
{code}
A sketch of the change I would expect is shown further below.

Can't Add or Remove AppClient Project via GEP
1. Run a fresh Eclipse in a new workspace, install GEP, and create a Geronimo server runtime.
2. Create a JEE application client (attached), then right-click the Geronimo server runtime -> "Add and Remove Projects", but it indicates "no project to add".
3. If I import another WAR, EAR, or EJB JAR and then open "Add and Remove Projects", all of them are listed to add and remove; if I choose the app client, it reminds me that: "The server does not support version 1.4 or 5 of the J2EE Application Client module specification."
So deploying the app client to the server via GEP fails.

Add support for installing from update site for IBM RAD v7.5
Now, in the feature.xml of GEP, we have this snippet:
<requires>
  <import feature="org.eclipse.jst" version="2.0.0" match="greaterOrEqual"/>
</requires>
Since no "org.eclipse.jst" feature exists in RAD, we have to replace the "org.eclipse.jst" feature with the sub-features of "org.eclipse.jst". The section above can be replaced with this snippet:
<requires>
  <import feature="org.eclipse.jst.common_core.feature" version="2.0.0.v200706041905-1007w311817231426" match="greaterOrEqual"/>
  <import feature="org.eclipse.jst.server_ui.feature" version="2.0.2.v200802150100-77-CT9yJXEkuiKVeQrclqTHQ3648" match="greaterOrEqual"/>
  <import feature="org.eclipse.jst.server_adapters.feature" version="2.0.2.v200802150100-787KE8iDUUEF6GwKwpHEQ" match="greaterOrEqual"/>
  <import feature="org.eclipse.jst.web_ui.feature" version="2.0.2.v200802150100-7B1DzCkuNa_RPevwkwB1iJ6z-0RH" match="greaterOrEqual"/>
  <import feature="org.eclipse.jst.enterprise_ui.feature" version="2.0.2.v200802150100-7b7_Es8EU6AXOV9QLJSees1SQoYQ" match="greaterOrEqual"/>
</requires>
The sole plugin of the "org.eclipse.jst" feature and the optional sub-feature "org.eclipse.jst.webpageeditor.feature" can't be found in the plugin list of RAD. GEP doesn't require these two items, so we don't need to add them to the required section.

samples\corba\bank_ws_addressing does not compile
"ant cxf.server" won't build because of invalid code in samples\corba\bank_ws_addressing\src\cxf\server\BankImpl.java:
{code}
// TODO: What is the correct implementation for this operation?
public void findAccount(javax.xml.ws.Holder<Object> accountDetails) {
}
{code}
The correct code for the sample:
{code}
// TODO: What is the correct implementation for this operation?
public java.lang.Object findAccount(java.lang.Object accountDetails) {
    Object holder = "foo";
    return holder;
}
{code}

not-attached error messages logged for clustered federation links
E.g.:
error Channel exception: not-attached: receiving Frame[BEbe; channel=0; {ConnectionStartBody: server-properties={qpid.federation_tag:V2:36:str16(e934afe1-6cc0-416a-b16d-a1ca6e6a6d75)}; mechanisms=str16{V2:9:str16(ANONYMOUS), V2:5:str16(PLAIN)}; locales=str16{V2:5:str16(en_US)}; }]: channel 0 is not attached (qpid/amqp_0_10/SessionHandler.cpp:79)
Everything still works, so this is not critical, but it's untidy and alarming for users to see errors.
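For the AxisCallback report above, a minimal sketch of the expected change to the finally block, under the assumption that org.apache.axis2.client.async.AxisCallback declares a no-argument onComplete(); this is an illustration of the requested behavior, not the committed fix:
{code}
} finally {
    if (callback != null) {
        // existing behaviour: the deprecated Callback is marked complete
        callback.setComplete(true);
    } else if (axisCallback != null) {
        // requested behaviour: signal completion to the AxisCallback as well,
        // after onMessage()/onError() has been delivered
        axisCallback.onComplete();
    }
}
{code}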
Outbound proxy server settings do not appear to be working
The outbound proxy server settings, outlined in this document - http://wso2.org/library/3346, do not appear to be working. For example, I tried this setting in the axis2.xml file:
<axisconfig name="AxisJava2.0">
  <parameter name="Proxy">
    <Configuration>
      <ProxyHost>localhost</ProxyHost>
      <ProxyPort>8888</ProxyPort>
    </Configuration>
  </parameter>
  ....other stuff
</axisconfig>
My local outbound proxy that I was testing with, TinyProxy (available with most Linux distros), doesn't show any activity (but I've confirmed it's working properly via some other programs) when I attempt to call a remote endpoint service. I also attempted using the Java system properties option (a sketch of that mechanism is shown below), but that similarly didn't appear to work. Unfortunately, a lot of companies are now restricting all outbound traffic except through a proxy, so this can be a significant problem.

perftest doesn't check if connection has been closed due to error on shutdown
So you get e.g.:
Error in shutdown: Connection closed
Error in shutdown: Connection closed
where it tries to close a failed connection.

row count getting printed wrongly
When multiple queries are executed in the same session, the row count of the first query is printed for subsequent queries.

idl2wsdl NullPointerException at typedef with sequence of (named) fixed array
The idl2wsdl tool throws a NullPointerException at org.apache.cxf.tools.corba.processors.idl.SimpleTypeSpecVisitor.visit(SimpleTypeSpecVisitor.java:75) when trying to convert the following valid CORBA IDL to WSDL:
-- Begin IDL
module idl2wsdlnullpointer {
    typedef string foo[1];
    typedef sequence<foo> bar;
};
-- END IDL
Line 75 in the relevant source file contains: visitor.visit(node); The exception occurs because the visitor is still null after all three visitors failed to accept the above node. I double-checked fuse-services-framework-2.1.3.3. Same issue.
Workaround
-----------------
The following workaround breaks neither client nor server code and allows the idl2wsdl tool to fully generate a WSDL definition:
-- Begin IDL Workaround
module idl2wsdlnullpointer {
    typedef sequence<string> foo;
    typedef sequence<foo> bar;
};
-- END IDL
This workaround does not change the generated Java ORB server skeletons. It does, however, change e.g. generated Python client stubs, thereby maybe affecting ORB/IIOP performance negatively (probably only marginally).

CXF 2.2-SNAPSHOT exports "svn packages"
For example, in the manifest of cxf-bundle-jaxrs-2.2:
Import-Package: ....... org.apache.cxf..svn.text-base;version="2.2.0.SNAPSHOT", org.apache.cxf.aegis;version="2.2.0.SNAPSHOT", org.apache.cxf.aegis.databinding;version="2.2.0.SNAPSHOT", org.apache.cxf.aegis.databinding..svn.prop-base;.......
The ..svn packages aren't right.

Some extra config needed for wstrust13 test client
The following tests should pass, but the client config needs some reworking.

CXF component running in Payload mode does not work with Holders
If you convert the CxfWsdlFirstTest to run in PAYLOAD mode (by simply changing the endpoint URI in the Spring xml), the client.getPerson() invocation will fail.
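For the outbound proxy report above, a minimal sketch of the generic JVM proxy system properties the reporter also tried; these are standard java.net properties, not Axis2 configuration, and whether Axis2's transport honours them is exactly what the report questions (host/port values taken from the axis2.xml snippet):
{code}
public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        // Standard JVM-level HTTP proxy settings (generic java.net properties).
        System.setProperty("http.proxyHost", "localhost");
        System.setProperty("http.proxyPort", "8888");
        // A plain HttpURLConnection opened after this goes through the proxy.
        new java.net.URL("http://example.org/").openStream().close();
    }
}
{code}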
Cannot add headers to request from Dispatch client in async mode -- ThreadLocal issue
org.apache.cxf.jaxws.DispatchImpl extends org.apache.cxf.jaxws.BindingProviderImpl, which has a requestContext field:
protected ThreadLocal<Map<String, Object>> requestContext = new ThreadLocal<Map<String, Object>>();
Because the request context is in a ThreadLocal, changes made to it are not visible when you invoke the service via DispatchImpl's invokeAsync method. For example:
dispatch.getRequestContext().put( BindingProvider.SOAPACTION_USE_PROPERTY, Boolean.TRUE );
dispatch.getRequestContext().put( BindingProvider.SOAPACTION_URI_PROPERTY, "uri:myAction" );
// If you call it this way, no SOAPAction header, verified by Wireshark:
Response<StreamSource> response = dispatch.invokeAsync( request );
// But if you call it this way, you get the header
StreamSource result = dispatch.invoke( request );
I can package up a test case if anyone thinks it would help fix this. Personally I know very little about CXF's internals, or the JAX-WS specs, but I can do whatever is necessary to help.

Erroneous class loading delegation to the application launcher classloader in some cases
Here is an example stack trace:
{code}
ProcessStoreImpl-1@50 daemon, priority=5, in group 'main', status: 'RUNNING'
at org.apache.felix.framework.searchpolicy.ModuleImpl.searchDynamicImports(ModuleImpl.java:1,215)
at org.apache.felix.framework.searchpolicy.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:558)
at org.apache.felix.framework.searchpolicy.ModuleImpl.access$100(ModuleImpl.java:59)
at org.apache.felix.framework.searchpolicy.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1,382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at org.apache.felix.framework.searchpolicy.ModuleImpl.getClassByDelegation(ModuleImpl.java:428)
at org.apache.felix.framework.Felix.loadBundleClass(Felix.java:1,341)
at org.apache.felix.framework.BundleImpl.loadClass(BundleImpl.java:737)
at org.springframework.osgi.util.BundleDelegatingClassLoader.findClass(BundleDelegatingClassLoader.java:99)
at org.springframework.osgi.util.BundleDelegatingClassLoader.loadClass(BundleDelegatingClassLoader.java:156)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at org.apache.xbean.classloader.MultiParentClassLoader.loadClass(MultiParentClassLoader.java:184)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
at java.lang.ClassLoader.defineClass1(ClassLoader.java:-1)
at java.lang.ClassLoader.defineClass(ClassLoader.java:675)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(AccessController.java:-1)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at org.apache.xbean.classloader.MultiParentClassLoader.loadClass(MultiParentClassLoader.java:200)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
at org.apache.openjpa.util.ProxyMaps.afterEntrySet(ProxyMaps.java:74)
at org.apache.openjpa.util.java$util$HashMap$proxy.entrySet(Unknown Source:-1)
at org.apache.openjpa.util.ProxyMaps.values(ProxyMaps.java:65)
at org.apache.openjpa.util.java$util$HashMap$proxy.values(Unknown Source:-1)
at org.apache.openjpa.kernel.SingleFieldManager.delete(SingleFieldManager.java:335)
at org.apache.openjpa.kernel.SingleFieldManager.delete(SingleFieldManager.java:283)
at org.apache.openjpa.kernel.StateManagerImpl.cascadeDelete(StateManagerImpl.java:2,861)
at org.apache.openjpa.kernel.BrokerImpl.delete(BrokerImpl.java:2,566)
at org.apache.openjpa.kernel.SingleFieldManager.delete(SingleFieldManager.java:387)
at org.apache.openjpa.kernel.SingleFieldManager.delete(SingleFieldManager.java:372)
at org.apache.openjpa.kernel.SingleFieldManager.delete(SingleFieldManager.java:329)
at org.apache.openjpa.kernel.SingleFieldManager.delete(SingleFieldManager.java:283)
at org.apache.openjpa.kernel.StateManagerImpl.cascadeDelete(StateManagerImpl.java:2,861)
at org.apache.openjpa.kernel.BrokerImpl.delete(BrokerImpl.java:2,566)
at org.apache.openjpa.kernel.BrokerImpl.delete(BrokerImpl.java:2,531)
at org.apache.openjpa.kernel.DelegatingBroker.delete(DelegatingBroker.java:1,046)
at org.apache.openjpa.persistence.EntityManagerImpl.remove(EntityManagerImpl.java:659)
at org.apache.ode.store.jpa.JpaObj.delete(JpaObj.java:34)
at org.apache.ode.store.jpa.DeploymentUnitDaoImpl.delete(DeploymentUnitDaoImpl.java:114)
at org.apache.ode.store.ProcessStoreImpl$3.call(ProcessStoreImpl.java:303)
at org.apache.ode.store.ProcessStoreImpl$3.call(ProcessStoreImpl.java:300)
at org.apache.ode.store.ProcessStoreImpl$Callable.call(ProcessStoreImpl.java:701)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:269)
at java.util.concurrent.FutureTask.run(FutureTask.java:123)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
at java.lang.Thread.run(Thread.java:613)
{code}
The interesting bit is the following excerpt:
{code}
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(AccessController.java:-1)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
{code}
The current code in ModuleImpl#searchDynamicImports() does not really handle this case. The reason is that the {{java.net.URLClassLoader$1}} class is an anonymous PrivilegedExceptionAction. The result is that the loop is aborted too soon and the launcher classloader is used to delegate the call. In my application, it leads to all kinds of LinkageError being thrown. Note that this problem mostly happens on Macs, which have a weird thing in the classloader, trying to handle some org.apache.crimson / org.apache.xalan / org.apache.xml / org.apache.xpath in some weird way. Anyway, I have a patch to support these anonymous classes, which I will attach now.
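To illustrate the pattern behind the Felix report above, here is a framework-agnostic sketch (my illustration, not the attached patch) of walking the class context for the first non-JDK caller, where anonymous JDK frames such as java.net.URLClassLoader$1 must be skipped rather than treated as the end of the chain:
{code}
// Illustration only: SecurityManager#getClassContext() exposes the classes
// on the current call stack, innermost frame first.
class CallerFinder extends SecurityManager {
    Class<?> firstNonJdkCaller() {
        for (Class<?> c : getClassContext()) {
            String name = c.getName();
            // Skip our own frame and JDK-internal frames, including anonymous
            // classes such as java.net.URLClassLoader$1 (a PrivilegedExceptionAction).
            if (c == CallerFinder.class || name.startsWith("java.")
                    || name.startsWith("javax.") || name.startsWith("sun.")) {
                continue;
            }
            return c;
        }
        return null;
    }
}
{code}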
CXFServlet / URIResolver tries to load file "" (empty file name)
When I enable Java security, I get the following stack trace after allowing permission to 'cxf.xml' and '/WEB-INF/cxf-servlet.xml':
java.security.AccessControlException: access denied (java.io.FilePermission read)
java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
java.security.AccessController.checkPermission(AccessController.java:546)
java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
java.lang.SecurityManager.checkRead(SecurityManager.java:871)
java.io.File.exists(File.java:731)
org.apache.cxf.resource.URIResolver.tryFileSystem(URIResolver.java:158)
org.apache.cxf.resource.URIResolver.<init>(URIResolver.java:84)
org.apache.cxf.resource.URIResolver.<init>(URIResolver.java:72)
org.apache.cxf.resource.URIResolver.<init>(URIResolver.java:68)
org.apache.cxf.transport.servlet.CXFServlet.loadAdditionalConfig(CXFServlet.java:148)
org.apache.cxf.transport.servlet.CXFServlet.updateContext(CXFServlet.java:134)
org.apache.cxf.transport.servlet.CXFServlet.loadSpringBus(CXFServlet.java:101)
org.apache.cxf.transport.servlet.CXFServlet.loadBus(CXFServlet.java:70)
org.apache.cxf.transport.servlet.AbstractCXFServlet.init(AbstractCXFServlet.java:79)
Looking through the code, I see that CXFServlet uses the URIResolver constructor that calls this("", path) (lines 67-69). Later, in the tryFileSystem method, URIResolver null-checks the baseUriStr (line 154) and then attempts to analyze it. The first File.exists() call triggers the FilePermission exception. I believe that this can be fixed if the URIResolver constructor calls this(null, path) instead of this("", path). Granting read permission to "" *DOES* solve the issue as a workaround, but it's less than ideal - security policies are often scrutinized and something like that may raise flags.

AbstractMessageResponseTimeInterceptor has protected methods, but default (package) scope constructor
The class org.apache.cxf.management.interceptor.AbstractMessageResponseTimeInterceptor has several protected methods, indicating a desire to allow extension; however, the class cannot be extended by a class outside the package org.apache.cxf.management.interceptor, because it has an explicit constructor with no scope modifier (and thus is accessible only within the package). Presumably, this was an oversight, and the constructor should be protected, allowing clients of the API to extend the class. Otherwise, the protected modifier should be removed from the methods within the class, since it is meaningless, and consideration should be given to making the class non-public, since it will be unusable by anything but CXF internals (this would depend on the existing package structure of CXF, with which I am not familiar).

Disposition is incorrectly parsed on multipart IMAP messages

Quoted-Printable Content-Transfer-Encoding does not get decoded
The following is an MTOM attachment generated by SoapUI, a web testing tool:
------=_Part_6_1979395.1210796510882
Content-Type: text/xml; charset=Cp1252
Content-Transfer-Encoding: quoted-printable
Content-ID: 606517570647
<?xml version=3D"1.0" encoding=3D"UTF-8"?> <MyXml attribute=3D"value"> </MyXml>
The content-transfer-encoding is quoted-printable, and the attachment uses "=3D" throughout, which is a quoted-printable-encoded equals sign. CXF should handle the decoding of this attachment.
However, when this attachment reaches my web service implementation after going through CXF's interceptors, the "=3D" characters are still included in the document, rendering the XML invalid. (A small demonstration of the expected decoding is shown below.)

[Hive] problem in count distinct in 1 mapreduce job with map side aggregation

SWTException (Widget is disposed) when disabling DIT Quick Search
I'm getting a SWTException (Widget is disposed) when disabling DIT Quick Search and clicking back on the Servers view.

DBDictionary.maxTableNameLength is not checked when using SynchronizeMappings
Per Alan Raison's post to the dev mailing list, there appears to be a problem with trimming table names when SynchronizeMappings is used. Here's the email that started the conversation:
I have been writing a DBDictionary for the Ingres database and have been running the test cases. Ingres supports 32 character table names, and this has been set in the dictionary. However some tests have hit an error whereby the table name is too long for the database. I notice in the DBDictionary class there is a method called "getValidTableName", but this clearly isn't being used, since it is trying to use a table name which is too long. Other databases (such as Oracle) also have quite a short maximum length for table names, so it must be possible to overcome this problem, but I can't see anything in other Dictionary classes. Is there anything special I should be doing to run the tests? I am currently running through mvn test. My draft DBDictionary class is attached along with a sample surefire report (with my username and password removed!)
The full thread can be seen here: http://n2.nabble.com/OpenJPA-1.2.0-Test-Cases---Table-Name-too-Long-td2197132.html

Loader does not complain if it couldn't load data.
The run continues and results in error messages at runtime, and consequently a failed run. MySQL, due to a misconfiguration, was not accepting connections from localhost - but this problem could occur in any scenario where the mysql server is not reachable or configured incorrectly. The Rails db loader did not abort the run. Instead it logged the error and went ahead, and this resulted in another slew of errors. If data loading fails, then the loader (or driver, I'm not sure how it's invoked) should log an exception and abort the run. There is zero chance of having a successful run if data loading fails.

TaskTracker metrics are disabled
HADOOP-3772 changed TaskTracker to use an instrumentation class, but did not update the default metrics class to the new API. TT metrics are currently discarded.
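For the quoted-printable report above, a small demonstration of the decoding the reporter expects, using JavaMail's MimeUtility (an illustration assuming JavaMail is on the classpath; this is not CXF's internal code path):
{code}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.mail.internet.MimeUtility;

public class QuotedPrintableDemo {
    public static void main(String[] args) throws Exception {
        String raw = "<MyXml attribute=3D\"value\"></MyXml>";
        // Decode the quoted-printable body; "=3D" becomes "=".
        InputStream in = MimeUtility.decode(
                new ByteArrayInputStream(raw.getBytes("US-ASCII")),
                "quoted-printable");
        StringBuilder out = new StringBuilder();
        for (int b; (b = in.read()) != -1;) {
            out.append((char) b);
        }
        System.out.println(out); // prints <MyXml attribute="value"></MyXml>
    }
}
{code}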
If a component class is abstract, trying to instantiate it (by including it in a template) yields an inscrutable InstantiationError
[ERROR] RequestExceptionHandler Processing of request failed with uncaught exception: com.formos.tapestry.tapx.datefield.components.DateField
java.lang.InstantiationError: com.formos.tapestry.tapx.datefield.components.DateField
at $Instantiator_11ffd289b27.newInstance($Instantiator_11ffd289b27.java)
at org.apache.tapestry5.internal.structure.InternalComponentResourcesImpl.<init>(InternalComponentResourcesImpl.java:132)
at org.apache.tapestry5.internal.structure.ComponentPageElementImpl.<init>(ComponentPageElementImpl.java:545)
at org.apache.tapestry5.internal.structure.ComponentPageElementImpl.newChild(ComponentPageElementImpl.java:627)
at org.apache.tapestry5.internal.pageload.ComponentAssemblerImpl.assembleEmbeddedComponent(ComponentAssemblerImpl.java:132)
at org.apache.tapestry5.internal.pageload.PageLoaderImpl$12.execute(PageLoaderImpl.java:954)
at org.apache.tapestry5.internal.pageload.ComponentAssemblerImpl.runActions(ComponentAssemblerImpl.java:193)
at org.apache.tapestry5.internal.pageload.ComponentAssemblerImpl.assembleRootComponent(ComponentAssemblerImpl.java:88)
at org.apache.tapestry5.internal.pageload.PageLoaderImpl.loadPage(PageLoaderImpl.java:159)
at $PageLoader_11ffd289b02.loadPage($PageLoader_11ffd289b02.java)
This is really not much to go on (I spun my wheels for about 30 minutes). The problem was that the DateField class was abstract. Tapestry should display an error message to the effect of: "This class is abstract and can not be instantiated."

Verify if JobHistory.HistoryCleaner works as expected
Here is the piece of code I doubt:
{code}
public void run() {
    if (isRunning) {
        return;
    }
    now = System.currentTimeMillis();
    // clean history only once a day at max
    if (lastRan == 0 || (now - lastRan) < ONE_DAY_IN_MS) {
        return;
    }
    lastRan = now;
    ..... // main code for cleaning
}
{code}
{{lastRan}} is initialized to 0 and hence HistoryCleaner will never execute the main code. Also, a testcase should be written for JobHistory.HistoryCleaner to check if it works as expected.

mount-point="/" results in the disappearing of the hostname in the URLs of categories, products, etc.
Changing the mountpoint of the ecommerce webapp to "/" affects the generation of the URLs of the catalog items (categories, products, etc). THE HOSTNAME DISAPPEARS! Setting the mountpoint to something other than "/" results in correct URLs:
http://myhost:8080/ecommerce/catalog/FA-100/FA-100
http://neptune:8080/ecommerce/catalog/dropShip/dropShip
Now when I set mount-point="/", the links point to non-existent pages:
http://catalog/FA-100/FA-100
http://catalog/dropShip/dropShip
What I am expecting is:
http://myhost:8080/catalog/FA-100/FA-100
http://neptune:8080/catalog/dropShip/dropShip
I am sure I missed something here.

Unnecessary and invalid import in GeocoderServlet
The following import in the GeocoderServlet is unused:
import org.apache.jasper.tagplugins.jstl.core.Out;
However, having the import in the file complicates the build environment. It needs a certain version of the jasper jstl library. Removing it will decrease the fragility of building the Geocoder.

Geocoder build.xml and build.properties template only builds on certain Tomcat versions
The build.xml only builds with Tomcat 5.0 installed, although there is no real dependency. Any servlet library should be sufficient to build the Geocoder.
Moreover, the lib directory is not in the same path relative to tomcat.home for different Tomcat versions, thus making the build environment much harder to create and maintain.

Write pipeline recovery fails
A write pipeline recovery fails on the error below:
INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 53006, call recoverBlock(blk_1415000632081498137_954380, false, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo; @4ec82dc6) from XX: error: org.apache.hadoop.ipc.RemoteException: java.io.IOException: blk_1415000632081498137_954380 is already commited, storedBlock == null.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:4487)
at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:473)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

Creation/configuration of ClientXADataSource fails because of two setSsl methods
Applications using reflection (and JavaBean conventions) have problems configuring the Derby client data sources. Depending on how things are done, the user may or may not see the problems. For instance, some applications obtain all valid data source properties and list them with their default settings. In the case of SSL, this will be "Ssl" with value "off". When the application is trying to call setSsl("off") through reflection, it may invoke setSsl(int) instead of setSsl(String), failing because "off" cannot be converted to an integer. In some implementations both methods will be invoked. (A small illustration of this ambiguity is shown below.) There are two ways to look at this, and I don't know which one is correct:
o the reflection code of the third-party applications using Derby isn't written well enough.
o Derby is to blame for the problem by providing two setSsl methods.
I don't know if providing overloaded setters violates the JavaBean spec, or any other relevant spec we should follow. The easiest technical solution is to rename one of the methods, or possibly to make one of them private. Both of these will break existing applications using that method to configure a Derby client data source. Is doing this, and providing a release note, sufficient? Does anyone see any other solutions? It should be noted that in some applications, it is impossible to configure ClientConnectionPoolDataSource or ClientXADataSource to use SSL. The reasons are the problem described here and DERBY-4067. One typical class of software with this problem is application servers. A workaround is to avoid setting the SSL property, which isn't doable if you need SSL of course... A related issue is whether it should be allowed to set the SSL property both through the setter method(s) and as a connection attribute.

JAXP RI bundle should replace all implementations of ObjectFactory#findClassLoader() by return ObjectFactory.class.getClassLoader();
We only override the one from com.sun.org.apache.xalan.internal.xsltc.trax, but it may not be sufficient.
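To make the setSsl ambiguity above concrete, a self-contained illustration with a hypothetical bean (not Derby's actual classes) showing why reflection-based configuration can pick the wrong overload:
{code}
import java.lang.reflect.Method;

public class OverloadedSetterDemo {
    // Hypothetical bean mirroring the two setSsl methods on the data source.
    public static class SslBean {
        public void setSsl(String mode) { System.out.println("String: " + mode); }
        public void setSsl(int mode) { System.out.println("int: " + mode); }
    }

    public static void main(String[] args) throws Exception {
        SslBean bean = new SslBean();
        // Naive configuration code that takes the first method with a matching
        // name; getMethods() makes no ordering guarantee, so this may select
        // setSsl(int) and then fail to apply the String value "off".
        for (Method m : SslBean.class.getMethods()) {
            if (m.getName().equals("setSsl")) {
                m.invoke(bean, "off"); // IllegalArgumentException if setSsl(int) was picked
                break;
            }
        }
    }
}
{code}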
Master/slave out of sync with multiple consumers
I'm seeing exceptions like this in a simple master/slave setup:
ERROR Service - Async error occurred: javax.jms.JMSException: Slave broker out of sync with master: Dispatched message (ID:DUL1SJAMES-L2-1231-1233929569359-0:4:1:1:207) was not in the pending list for MasterSlaveBug
javax.jms.JMSException: Slave broker out of sync with master: Dispatched message (ID:DUL1SJAMES-L2-1231-1233929569359-0:4:1:1:207) was not in the pending list for MasterSlaveBug
The problem only happens when there are multiple consumers listening to the queue, and is more likely to occur as there are more consumers listening. I've written a test program that demonstrates the problem. I start the master and slave with an empty data directory and let them both start up and settle, then start the test program. The test program creates a specified number of consumers, and then starts queuing 256 messages. The consumers process the messages by sending a reply. The producer counts the replies. Both the consumers and the producer see all the messages, but with multiple consumers it is very likely that the error above will occur and several of the messages will still be queued on the slave. While debugging through the activemq code, I noticed that both the master and the slave dispatch the message to a consumer's pending list independently. In other words, it is possible that the master will add the message to consumer A's pending list and the slave will add the message to consumer B's pending list. Once the message has been processed by consumer A, the master sends a message to the slave which specifies consumer A so that the slave can remove the message. The slave looks on its copy of consumer A's pending list and cannot find the message. As a result, it throws this exception and the message stays stuck on consumer B's pending list on the slave. Master and slave configurations along with MasterSlaveBug.java are attached to this issue.
Start master and slave brokers:
activemq xbean:master.xml
activemq xbean:slave.xml
Run with (only one consumer, the bug does not appear):
java -classpath .:activemq-all-5.2.0.jar MasterSlaveBug 1
Run with (sixteen consumers, the bug does appear):
java -classpath .:activemq-all-5.2.0.jar MasterSlaveBug 16

RAMDirectory Not Correctly Serializing
Greetings. Firstly, a big thank you for everyone's efforts with Lucene and Lucene.Net. Your efforts are much appreciated. Background: I have created a server application which allows searching across many companies. In order to achieve this I have utilized Lucene.Net for indexing and searching, and NCache from Alachisoft for caching the information server-side. As the Lucene index takes a fair amount of time to create, I am also caching the RAMDirectory. The caching requires all objects to be serialized before storage.
The issue: After retrieving the RAMDirectory from cache (after de-serializing) I attempted to create a new IndexWriter object to allow adding more items to the index:
oDirectory = CacheConfig.DeCacheSupplierIndex("SupplierIndex" & Supplier.BuyerNo)
analyzer = New StandardAnalyzer()
oIndexWriter = New IndexWriter(oDirectory, analyzer, False)
The attempt to create the IndexWriter resulted in a NullReference exception at:
at Lucene.Net.Store.Directory.MakeLock(String name)
at Lucene.Net.Index.IndexWriter.Init(Directory d, Analyzer a, Boolean create, Boolean closeDir)
at Lucene.Net.Index.IndexWriter..ctor(Directory d, Analyzer a)
After debugging the Lucene source I discovered the exception was caused by the lockFactory definition in the Directory class (Directory.cs line 49) having a [NonSerialized] attribute. This caused the lockFactory to be null after serialization. Fix: Removed the [NonSerialized] attribute. Added a [Serializable] attribute to SingleInstanceLockFactory (SingleInstanceLockFactory.cs line 35). Added a [Serializable] attribute to LockFactory (LockFactory.cs line 28). This allowed me to proceed. I have not thoroughly tested the changes. I can provide the source code if required. As we will very likely upgrade to future versions of Lucene, I would like to have any fix incorporated into the Lucene source repository. Let me know what I should do.

URL to JIRA is incorrect in the doap.rdf file for ActiveMQ
The URL to JIRA is incorrect in the doap.rdf file, since the activemq.org TLD is no longer available. The doap.rdf file is parsed and used by the following site: http://projects.apache.org/projects/activemq.html

KahaDB store - deadlock on shutdown
There is a little bit of a deadlock between the shutdown and the checkpoint thread.
{code}
"ActiveMQ Journal Checkpoint Worker" prio=5 tid=0x01016f10 nid=0x8dd800 waiting for monitor entry [0xb0d8c000..0xb0d8cd90]
at org.apache.activemq.store.kahadb.MessageDatabase.checkpointCleanup(MessageDatabase.java:468)
- waiting to lock <0x095ad3e0> (a java.lang.Object)
at org.apache.activemq.store.kahadb.MessageDatabase$3.run(MessageDatabase.java:261)

"main" prio=5 tid=0x010013b0 nid=0xb0801000 in Object.wait() [0xb07ff000..0xb0800188]
at java.lang.Object.wait(Native Method)
- waiting on <0x0959f608> (a org.apache.activemq.store.kahadb.MessageDatabase$3)
at java.lang.Thread.join(Thread.java:1113)
- locked <0x0959f608> (a org.apache.activemq.store.kahadb.MessageDatabase$3)
at java.lang.Thread.join(Thread.java:1166)
at org.apache.activemq.store.kahadb.MessageDatabase.close(MessageDatabase.java:310)
at org.apache.activemq.store.kahadb.MessageDatabase.unload(MessageDatabase.java:327)
- locked <0x095ad3e0> (a java.lang.Object)
at org.apache.activemq.store.kahadb.MessageDatabase.stop(MessageDatabase.java:173)
at org.apache.activemq.util.ServiceStopper.stop(ServiceStopper.java:41)
at org.apache.activemq.broker.BrokerService.stop(BrokerService.java:519)
at org.apache.activemq.JmsTestSupport.tearDown(JmsTestSupport.java:136)
at junit.framework.TestCase.runBare(TestCase.java:130)
at org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:90)
at org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:96)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.maven.surefire.junit.JUnitTestSet.execute(JUnitTestSet.java:213)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:165)
at org.apache.maven.surefire.Surefire.run(Surefire.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:289)
at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:993)
{code}

Vector.add(Vector v) should throw the Index out of bounds exception when they are different

wsdl2java NPE on Async WSDL
When the JavaMethod gets created for an operation from the WSDL suffixed with "Async", OperationProcessor:181 evaluates to true and line 182 throws an NPE, because the method object's JavaReturn object was never populated with properties (method.getReturn().getClassName() == null). I don't see the method's JavaReturn object ever getting set in the execution stack.

OSGi bundles for components should not require the jaxp-ri bundle

The TCCL is not set to the correct value after deploying a service unit
See AbstractXBeanDeployer#deploy(), which incorrectly sets the TCCL when exiting the method. It should set it back to its previous value.

vclreload account assumed to match the default affiliation
There are several places in the code where getUserlistID is called for the vclreload user without specifying an affiliation. The published sql file sets up the affiliation for that account to be Local. If someone changes the default affiliation to something besides Local, all of those calls are broken.

Invalid nested form tag name when the form is not visible and setOutputMarkupPlaceholderTag(true) has been called
When an inner form is invisible and setOutputMarkupPlaceholderTag(true) has been called, onComponentTag is not processed and an invalid form tag name results:
<form wicket:id=rootform> <form wicket:id=nestedform style="display: none"> </form></form>
{code}
Component.render(final MarkupStream markupStream) {
    ...
    if (determineVisibility()) {
        // render -> replace form with div
    } else if (markupStream != null) {
        if (getFlag(FLAG_PLACEHOLDER)) {
            final ComponentTag tag = markupStream.getTag();
            renderPlaceholderTag(tag, getResponse());
            // <-- here the form tag is not replaced with "div"
        }
        markupStream.skipComponent();
    }
}
{code}

Can't unsubscribe a durable subscription when there's a virtual topic present on the broker
The scenario is: There's a virtual topic being used (i.e. topic VirtualTopic.Orders). A consumer subscribes to any topic present on the broker and after some time it tries to unsubscribe the durable subscription.
When it calls session.unsubscribe(consumerName) to unsubscribe a durable subscription, it receives the following exception:
Caught: javax.jms.JMSException: org.apache.activemq.broker.region.virtual.VirtualTopicInterceptor cannot be cast to org.apache.activemq.broker.region.Topic
javax.jms.JMSException: org.apache.activemq.broker.region.virtual.VirtualTopicInterceptor cannot be cast to org.apache.activemq.broker.region.Topic
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1244)
at org.apache.activemq.ActiveMQConnection.unsubscribe(ActiveMQConnection.java:2052)
at org.apache.activemq.ActiveMQSession.unsubscribe(ActiveMQSession.java:1431)
at DurableSubscriber.consumeMessagesAndClose(DurableSubscriber.java:206)
at DurableSubscriber.run(DurableSubscriber.java:112)
at DurableSubscriber.main(DurableSubscriber.java:70)
Caused by: java.lang.ClassCastException: org.apache.activemq.broker.region.virtual.VirtualTopicInterceptor cannot be cast to org.apache.activemq.broker.region.Topic
at org.apache.activemq.broker.region.TopicRegion.removeSubscription(TopicRegion.java:139)
at org.apache.activemq.broker.region.RegionBroker.removeSubscription(RegionBroker.java:409)
at org.apache.activemq.broker.BrokerFilter.removeSubscription(BrokerFilter.java:98)
at org.apache.activemq.broker.BrokerFilter.removeSubscription(BrokerFilter.java:98)
at org.apache.activemq.broker.BrokerFilter.removeSubscription(BrokerFilter.java:98)
at org.apache.activemq.broker.MutableBrokerFilter.removeSubscription(MutableBrokerFilter.java:105)
at org.apache.activemq.broker.TransportConnection.processRemoveSubscription(TransportConnection.java:339)
at org.apache.activemq.command.RemoveSubscriptionInfo.visit(RemoveSubscriptionInfo.java:83)
at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:305)
at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:179)
at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:68)
at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:143)
at org.apache.activemq.transport.InactivityMonitor.onCommand(InactivityMonitor.java:206)
at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:84)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:203)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:185)
at java.lang.Thread.run(Thread.java:619)
The error happens in the following method of the org.apache.activemq.broker.region.TopicRegion class:
{code}
public void removeSubscription(ConnectionContext context, RemoveSubscriptionInfo info) throws Exception {
    SubscriptionKey key = new SubscriptionKey(info.getClientId(), info.getSubscriptionName());
    DurableTopicSubscription sub = durableSubscriptions.get(key);
    if (sub == null) {
        throw new InvalidDestinationException("No durable subscription exists for: " + info.getSubscriptionName());
    }
    if (sub.isActive()) {
        throw new JMSException("Durable consumer is in use");
    }
    durableSubscriptions.remove(key);
    synchronized (destinationsMutex) {
        for (Iterator<Destination> iter = destinations.values().iterator(); iter.hasNext();) {
            Topic topic = (Topic)iter.next();
            topic.deleteSubscription(context, key);
        }
    }
    super.removeConsumer(context, sub.getConsumerInfo());
}
{code}
The virtual topic is present in the destinations collection being iterated, but its type is not Topic, and that is what causes the error.
If there are no virtual topics present on the broker, unsubscriptions work well.

Unable to find setter method for attribute: footerstyleClass
Closely related to https://issues.apache.org/jira/browse/TOMAHAWK-345, but for the t:columns tag this time. Setting the footerstyleClass on the t:columns tag results in this error (sorry for the French message; it means: Unable to find setter method for attribute: footerstyleClass):
[04/10/06 10:54:32:961 CEST] 00000022 WebApp E SRVE0026E: [Erreur de servlet]-[JSPG0218E: Erreur - La localisation de la méthode setter pour l'attribut footerstyleClass dans la classe de balises org.apache.myfaces.custom.crosstable.HtmlColumnsTag n'a pas abouti]: com.ibm.ws.jsp.JspCoreException: JSPG0218E: Erreur - La localisation de la méthode setter pour l'attribut footerstyleClass dans la classe de balises org.apache.myfaces.custom.crosstable.HtmlColumnsTag n'a pas abouti at com.ibm.ws.jsp.taglib.TagClassInfo.getParameterClassName(TagClassInfo.java:173) at com.ibm.ws.jsp.translator.visitor.generator.BaseTagGenerator.evaluateAttribute(BaseTagGenerator.java:327) at com.ibm.ws.jsp.translator.visitor.generator.BaseTagGenerator.generateSetters(BaseTagGenerator.java:216) at com.ibm.ws.jsp.translator.visitor.generator.CustomTagGenerator.startGeneration(CustomTagGenerator.java:342) at com.ibm.ws.jsp.translator.visitor.generator.GenerateVisitor.startGeneration(GenerateVisitor.java:683) at com.ibm.ws.jsp.translator.visitor.generator.GenerateVisitor.visitCustomTagStart(GenerateVisitor.java:392) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:253) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:254) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processChildren(JspVisitor.java:286) at com.ibm.ws.jsp.translator.visitor.JspVisitor.processJspElement(JspVisitor.java:125) at com.ibm.ws.jsp.translator.visitor.JspVisitor.visit(JspVisitor.java:110) at com.ibm.ws.jsp.translator.visitor.generator.GenerateJspVisitor.visit(GenerateJspVisitor.java:136) at com.ibm.ws.jsp.translator.JspTranslator.processVisitors(JspTranslator.java:121) at com.ibm.ws.jsp.translator.utils.JspTranslatorUtil.translateJsp(JspTranslatorUtil.java:169) at com.ibm.ws.jsp.translator.utils.JspTranslatorUtil.translateJspAndCompile(JspTranslatorUtil.java:82) at
com.ibm.ws.jsp.webcontainerext.JSPExtensionServletWrapper.translateJsp(JSPExtensionServletWrapper.java:360) at com.ibm.ws.jsp.webcontainerext.JSPExtensionServletWrapper._checkForTranslation(JSPExtensionServletWrapper.java:329) at com.ibm.ws.jsp.webcontainerext.JSPExtensionServletWrapper.checkForTranslation(JSPExtensionServletWrapper.java:237) at com.ibm.ws.jsp.webcontainerext.JSPExtensionServletWrapper.handleRequest(JSPExtensionServletWrapper.java:144) at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:334) at com.sun.faces.context.ExternalContextImpl.dispatch(ExternalContextImpl.java:322) at com.sun.faces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:130) at org.apache.shale.view.faces.ViewViewHandler.renderView(ViewViewHandler.java:146) at org.jenia.faces.template.handler.ViewHandler.renderView(ViewHandler.java:76) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:87) at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:200) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:117) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:198) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1289) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1241) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:136) at org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(ExtensionsFilter.java:144) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:142) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:121) at com.michelin.xnet.client.thin.XnetEncodingFilter.doFilter(XnetEncodingFilter.java:68) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:142) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:121) at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:82) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:671) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:89) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1924) at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:89) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:472) at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:411) at com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:101) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.requestComplete(WorkQueueManager.java:566) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.attemptIO(WorkQueueManager.java:619) at com.ibm.ws.tcp.channel.impl.WorkQueueManager.workerRun(WorkQueueManager.java:952) at com.ibm.ws.tcp.channel.impl.WorkQueueManager$Worker.run(WorkQueueManager.java:1039) at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1471) Cross-site scripting vulnerability in Roller search term treatment The search term submitted to Roller as the value of the "q" parameter on search requests (/search?q=query+terms) is echoed back in the default search form without escaping HTML tags. This can be converted to a cross-site scripting attack. 
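For the Roller cross-site scripting report above, a minimal sketch of the kind of escaping needed before the q parameter is echoed back into the search form (an illustration, not Roller's actual fix; production code should use a vetted escaping library):
{code}
public class SearchTermEscapeDemo {
    // Minimal HTML/attribute escaping, sufficient for this illustration only.
    static String escapeHtml(String s) {
        if (s == null) return "";
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        String q = "\"><script>alert('xss')</script>";
        // The escaped value renders as text instead of executing as markup.
        System.out.println("<input name=\"q\" value=\"" + escapeHtml(q) + "\"/>");
    }
}
{code}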
External JCR configuration file is not resolved properly
The SlingServerRepository "Configuration File" property is not resolved properly when "sling.repository.config.file.url" points to a local repository.xml file. For instance: sling.repository.config.file.url=C:\\jcr\\repository.xml is resolved to "\jcr\repository.xml".

JBI components are not able to target NMR endpoints anymore
I have deployed the cxf endpoint <jaxws:endpoint id="bookingService" implementor="org.eclipse.swordfish.samples.cxf.BookingServiceImpl" address="nmr:BookingService"> <jaxws:inInterceptors> <bean class="org.apache.cxf.interceptor.LoggingInInterceptor"/> </jaxws:inInterceptors> </jaxws:endpoint> and I am trying to access it via the http consumer <http:endpoint endpoint="httpConsumerEndpoint" service="httpConsumerEndpoint" targetService="swordfishCxf:BookingServiceImpl" soap="true" role="consumer" locationURI="http://0.0.0.0:8192/cxfsample/" defaultMep="http://www.w3.org/2004/08/wsdl/in-out" /> I'm getting the exception that no matching endpoint can be found, because the exchange target's dynamic reference contains the property jbi.internal=true (set in DeliveryChannelImpl:252) and CxfEndpoint doesn't contain this property. I know that this is the anticipated behavior, as CxfEndpoint is not JBI specific, but it would still be nice to integrate JBI external endpoints with the CXF ones.

Jcr-Remoting: PathNotFoundException if item name ends with .json
The jcr-remoting-servlet contains the following commented issue: * TODO: TOBEFIXED will not behave properly if resource path (i.e. item name) * TODO ends with .json extension and/or contains a depth-selector pattern.

wsdl2java omits @WebParam's header=true where <wsdl:service> element is missing.
If you generate code from the WSDL below (a slightly modified version of a WSDL retrieved from a running CXF service) using WSDLToJava, it fails to generate the header=true properties on the @WebParam annotations. The weird thing is that if you reinstate the <wsdl:service> element, the header=true property is correctly generated.
bash-2.05b$ cd apache-cxf-2.1.4-SNAPSHOT/lib bash-2.05b$ cat > y.wsdl <<EOF > <?xml version="1.0" encoding="UTF-8"?> > <wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:ns1="http://schemas.xmlsoap.org/soap/http" xmlns:ns2="http://xml.ms.com/ns/eai/string-tcp" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="http://xml.ms.com/ns/msjava/greeter" xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="GreeterService" targetNamespace="http://xml.ms.com/ns/msjava/greeter"> > <wsdl:types> > <xsd:schema attributeFormDefault="unqualified" elementFormDefault="qualified" targetNamespace="http://xml.ms.com/ns/msjava/greeter" xmlns:tns="http://xml.ms.com/ns/msjava/greeter" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> > <xsd:element name="Blah" nillable="true" type="xsd:string"/> > <xsd:element name="GreeterException" type="tns:GreeterException"/> > <xsd:complexType name="GreeterException"> > <xsd:sequence/> > </xsd:complexType> > <xsd:element name="Greeter" nillable="true" type="xsd:string"/> > <xsd:element name="GreeterResponse" nillable="true" type="xsd:string"/> > </xsd:schema> > </wsdl:types> > <wsdl:message name="GreeterException"> > <wsdl:part element="tns:GreeterException" name="GreeterException"/> > </wsdl:message> > <wsdl:message name="GreeterResponse"> > <wsdl:part element="tns:GreeterResponse" name="GreeterResponse"/> > </wsdl:message> > <wsdl:message name="Greeter"> > <wsdl:part element="tns:Greeter" name="Greeter"/> > <wsdl:part element="tns:Blah" name="Blah"/> > </wsdl:message> > <wsdl:portType name="Greeter"> > <wsdl:operation name="Greeter"> > <wsdl:input message="tns:Greeter" name="Greeter"/> > <wsdl:output message="tns:GreeterResponse" name="GreeterResponse"/> > <wsdl:fault message="tns:GreeterException" name="GreeterException"/> > </wsdl:operation> > </wsdl:portType> > <wsdl:binding name="GreeterServiceSoapHttpBinding" type="tns:Greeter"> > <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/> > <wsdl:operation name="Greeter"> > <soap:operation soapAction="" style="document"/> > <wsdl:input name="Greeter"> > <soap:header message="tns:Greeter" part="Blah" use="literal"/> > <soap:body parts="Greeter" use="literal"/> > </wsdl:input> > <wsdl:output name="GreeterResponse"> > <soap:body use="literal"/> > </wsdl:output> > <wsdl:fault name="GreeterException"> > <soap:fault name="GreeterException" use="literal"/> > </wsdl:fault> > </wsdl:operation> > </wsdl:binding> > <!--wsdl:service name="GreeterService"> > <wsdl:port binding="tns:GreeterServiceSoapHttpBinding" name="GreeterJettyHTTPPort"> > <soap:address location="http://localhost:7650/Greeter"/> > </wsdl:port> > </wsdl:service--> > </wsdl:definitions> > EOF bash-2.05b$ java -classpath $(find . -name "*.jar" | tr '\n' ':'). 
org.apache.cxf.tools.wsdlto.WSDLToJava y.wsdl Dec 18, 2008 5:08:45 PM org.apache.cxf.tools.validator.internal.WSDLRefValidator collectValidationPoints WARNING: WSDL document file:/a/lnn16f2/vol/lnn16f2v1/cs_msjava_build/mcclellc/apache-cxf-2.1.4-SNAPSHOT/lib/y.wsdl does not define any services bash-2.05b$ grep WebParam com/ms/xml/ns/msjava/greeter/Greeter.java import javax.jws.WebParam; @WebParam(partName = "Greeter", name = "Greeter", targetNamespace = "http://xml.ms.com/ns/msjava/greeter") @WebParam(partName = "Blah", name = "Blah", targetNamespace = "http://xml.ms.com/ns/msjava/greeter")

Image reservation for sub-image with "nousercheckout" flag set
On the Current Reservations page, the request doesn't show up.

Wrapping of non-OSGi components creates versions that may make no sense
I have a Hibernate 3.3.1.GA dependency in my project. The Tuscany maven-bundle-plugin, using DefaultArtifactVersion, creates versions like 0_0_0_3_3_1_GA. By introducing an OSGIArtifactVersion class, this problem is solved easily...

defining a bean @Serializable will cause javassist exceptions
If a bean implements the Serializable interface we get nasty javassist exceptions: org.apache.webbeans.exception.WebBeansException: java.lang.RuntimeException: by java.lang.ClassFormatError: Duplicate interface name in class file org/apache/webbeans/test/component/service/Typed2_$$_javassist_1 at org.apache.webbeans.proxy.JavassistProxyFactory.createNewProxyInstance(JavassistProxyFactory.java:82) at org.apache.webbeans.container.ManagerImpl.getInstance(ManagerImpl.java:353) at org.apache.webbeans.test.mock.MockManager.getInstance(MockManager.java:139) at org.apache.webbeans.test.unittests.SingletonComponentTest.testTypedComponent(SingletonComponentTest.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) Caused by: java.lang.RuntimeException: by java.lang.ClassFormatError: Duplicate interface name in class file org/apache/webbeans/test/component/service/Typed2_$$_javassist_1 at javassist.util.proxy.ProxyFactory.createClass3(ProxyFactory.java:342) at javassist.util.proxy.ProxyFactory.createClass2(ProxyFactory.java:312) at javassist.util.proxy.ProxyFactory.createClass(ProxyFactory.java:271) at org.apache.webbeans.proxy.JavassistProxyFactory.createNewProxyInstance(JavassistProxyFactory.java:77) ... 27 more Caused by: javassist.CannotCompileException: by java.lang.ClassFormatError: Duplicate interface name in class file org/apache/webbeans/test/component/service/Typed2_$$_javassist_1 at javassist.util.proxy.FactoryHelper.toClass(FactoryHelper.java:169) at javassist.util.proxy.ProxyFactory.createClass3(ProxyFactory.java:337) ... 30 more Caused by: java.lang.ClassFormatError: Duplicate interface name in class file org/apache/webbeans/test/component/service/Typed2_$$_javassist_1 at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:621) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at javassist.util.proxy.FactoryHelper.toClass2(FactoryHelper.java:181) at javassist.util.proxy.FactoryHelper.toClass(FactoryHelper.java:163) ... 31 more

allow passivation of scopes @SessionScoped and @ConversationScoped
Since sessions of a ServletContainer may be serialised and persisted to disk on shutdown, the SessionScoped and ConversationScoped contexts may be passivated.

Blocks remain under-replicated
Occasionally we see some blocks remain under-replicated in our production clusters. This is what we observed: 1. Sometimes when increasing the replication factor of a file, some blocks belonging to this file do not get increased to the new replication factor. 2. When taking a metasave on two different days, some blocks remain in the under-replication queue.

Remove 2 doc files: hello.pdf and overview.html
Please remove these 2 doc files. They don't belong with the Pig 2.0 documentation and will cause confusion. (1) hello.pdf ... located in: trunk/src/docs/src/documentation/content/xdocs (2) overview.html ... located in: trunk/docs

edit reservation allows saving/updating image for cluster reservations
The page displayed by clicking the Edit button for a reservation on the Current Reservations page shouldn't allow the saving or updating of cluster reservations.

error in how the end time for schedule times is computed
Sometimes it is useful to make computers completely unavailable by putting them in a schedule that is available for only a few minutes out of a week, instead of putting them into the maintenance state. When creating a schedule with a start time of Sunday at 12:00 am (which translates to 0 as the start field in the scheduletimes table) and an end time of Sunday at 12:15 am, the end field in the scheduletimes table gets computed as 10095 instead of 15. Note that 10095 is 10080 (the number of minutes in a week) plus 15, which suggests the end time is being wrapped into the following week.
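To make that arithmetic concrete, here is a minimal sketch, assuming (an assumption, not verified against the actual scheduler code) that the scheduletimes fields hold minutes counted from Sunday 12:00 am:
{code}
// Illustrative sketch only: schedule times as minutes from Sunday 12:00 am.
// day runs 0 (Sunday) through 6 (Saturday); a week has 7 * 24 * 60 = 10080 minutes.
public class ScheduleMinutes {

    static final int MINUTES_PER_WEEK = 7 * 24 * 60; // 10080

    static int minuteOfWeek(int day, int hour, int minute) {
        return day * 24 * 60 + hour * 60 + minute;
    }

    public static void main(String[] args) {
        int start = minuteOfWeek(0, 0, 0);  // Sunday 12:00 am -> 0
        int end = minuteOfWeek(0, 0, 15);   // Sunday 12:15 am -> expected 15
        int buggy = MINUTES_PER_WEEK + end; // 10095, the value actually stored
        // Normalizing back into a single week recovers the expected value:
        System.out.println(buggy % MINUTES_PER_WEEK); // prints 15
        System.out.println(start);                    // prints 0
    }
}
{code}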
watchInFlight error appears if image description contains special characters
The browser displays an XMLHttpTransport.watchInFlight error when you attempt to create a reservation for an image if the image.description field contains an apostrophe.

apostrophe in image name causes AJAX updates to privilege page to break
If an image name has an apostrophe in it, it causes AJAX updates to the privilege page to produce invalid JavaScript, because the apostrophe closes the single-quoted string too early.

HttpUtils#isXml fails if charset specified in header
HttpUtils#isXml does not properly handle a Content-Type header that carries a charset parameter: Content-Type: text/xml;charset=UTF-8

Bad example code in documentation
http://logging.apache.org/log4net/release/config-examples.html The last code example under "RollingFileAppender" seems to be wrong. Shouldn't it be: <rollingStyle value="Once" />

Portlet Dispatching loses wrappers
When you dispatch using a wrapped request/response object, Pluto doesn't preserve the wrapping when it executes the dispatch. That is, it unwraps the request/response and dispatches on that. This prevents portlets from filtering requests/responses to/from dispatched servlet entities. It would be nice if we added a TCK test for this case as well. The spec is clear that one can use a wrapped request/response to dispatch to. Though it doesn't specifically state that this must be preserved, it not only is the reasonable interpretation/expectation but is what clients will be counting on. Hence, for the sake of interoperability, having a TCK test will catch this problem early.

Setting the ResourceResponse character encoding requires supporting locale-encoding-mapping-list from web.xml
See: Portlet API 2.0 PLT.12.7.1 Setting the Response Character Set. To be able to implement this correctly, the PortletResponse implementation needs access to the locale-encoding-mapping-list definition from web.xml... This means the PortletApplicationDefinition needs to be expanded to also hold these locale encoding mappings retrieved from web.xml, which needs to be implemented in the PortletAppDescriptorServiceImpl (for Pluto).

RequestDispatcher path query string parameter handling too limited and broken with nested dispatches
Below copied (in part) from email discussion on the Pluto dev list, see also: http://www.nabble.com/More-required-Pluto-2.0-SPI-and-implementation-refactoring-issues-td21973310.html *** RequestDispatcher path query string parameter handling PLT.19.4.1 specifies that "The request dispatching mechanism is responsible for aggregating query string parameters when forwarding or including requests". This requirement has been implemented very literally in Pluto: a PortletRequestDispatcher path query string is "stored" in the request instance and, on subsequent access to the parameter map from within the invoked servlet, parsed and merged on top of the portlet parameter map. This "manual" processing and merging of query string parameters however breaks the Servlet spec requirements! It goes wrong if, from within the dispatched servlet, a subsequent dispatch is performed with additional query string parameters. In the current solution, this possible usage hasn't been taken into account, with the result that you'll *always* receive the same parameter map on subsequent (nested) dispatches (for example from within a JSP including another JSP).
But, even if servletrequest.getRequestDispatcher(path) were overridden to "fix" this problem, that still wouldn't solve it, for two reasons:
- ServletContext.getRequestDispatcher(path) *cannot* be overridden, therefore still leaving a "hole" in the solution
- there is no "returned from dispatch" callback mechanism to "revert" the current parameter map after a dispatch
Therefore, "manual" parsing and merging of dispatcher query string parameters *cannot* be used to implement this spec requirement. However, the correct way to implement this is actually extremely simple: just let the servlet container handle it all by itself! There is no need to "store", parse and merge dispatcher query string parameters; in the servletrequest(wrapper).getParameterMap(), just return super.getParameterMap(), which the container will have "injected" with the additional query string parameters merged already. Jetspeed-2 has used this solution from the beginning (with some additional fancy cache handling) and it just works as expected. Changing to this solution will dramatically simplify the current implementation, especially after the PortletRequest and ServletRequest implementations are split up (see: PLUTO-529). (Side note: I actually wrote a testcase for this, and this spec requirement is broken in most other open-source portlet containers as well!)

PortletRequest/PortletResponse implementations extending HttpServletRequest/Response wrappers cause "identity" problems when accessed from servlets
Below copied (in part) from email discussion on the Pluto dev list, see also: http://www.nabble.com/More-required-Pluto-2.0-SPI-and-implementation-refactoring-issues-td21973310.html *** InternalPortletRequest/Response implementations (and subclasses thereof) extending HttpServletRequest/ResponseWrapper This solution (dating back to the Pluto 1.0 implementation) has a very tricky but serious flaw. By using a single HttpServletRequestWrapper instance for both the PortletRequest and the dispatched ServletRequest, a dispatched servlet retrieving the current PortletRequest (or Response) using HttpServletRequest.getAttribute("javax.portlet.request"), as specified by the Portlet specification (PLT.19.3.2), will actually get back the *current* HttpServletRequestWrapper itself again. So far, nothing wrong yet. But, as the InternalPortletRequestImpl (which is the real implementation class) also maintains internal instance state concerning its dispatched state, and based upon that decides how overlapping methods need to behave, the PortletRequest object retrieved like this from within a servlet environment actually behaves as a dispatched ServletRequest. This is *not* compliant with the Portlet specification, even if the current JSR-286 TCK doesn't (properly) test against this. The only way to solve this is *not* to use a piggyback solution for the dispatched ServletRequest/Response objects, but to use independent instances for the PortletRequest/Response and wrap these within the dispatched ServletRequest/Response objects. This is a rather big change, but really required. On the bright side, doing so will result in a much more readable/maintainable solution, as the current implementation has to maintain some tricky state flags to keep track of its "identity". Getting rid of all that and moving the dispatched-servlet-specific handling into separate classes will make this much easier and more transparent to deal with.
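A minimal sketch of the "let the container handle it" approach described above (the class name is hypothetical; this is not Pluto's or Jetspeed's actual code):
{code}
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

// Illustrative sketch only: no manual parsing or merging of dispatcher query
// strings. The servlet container merges query string parameters itself on
// each (possibly nested) dispatch, so the wrapper simply delegates.
public class DispatchedServletRequest extends HttpServletRequestWrapper {

    public DispatchedServletRequest(HttpServletRequest request) {
        super(request);
    }

    @Override
    public Map getParameterMap() {
        // The container has already "injected" the dispatch query string
        // parameters, including for nested dispatches.
        return super.getParameterMap();
    }
}
{code}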
ganglia metrics for 'requests' is confusing
The 'requests' metric is incremented for every request, but it is reset and published every interval, which means the number is actually 'requests per interval', where the interval is a config value in HBase. HBase should export 'requests/second' instead.

ganglia metrics should have a common prefix so we can group them more easily
The metrics exported are intermixed with other ganglia metrics... some of the names used are very common, like 'requests'. Instead we should use a prefix like "hbase_" so they appear both grouped and separate in ganglia UIs.

Default scope should be "page"
Currently, due to a possible bug in Velocity or Velocity Tools, the default scope is "request". Follow the discussion at Velocity Users: http://markmail.org/message/wtxhjwonysviasws?q=tools+2%2E0+list:org%2Eapache%2Evelocity%2Euser+from:%22Antonio+Petrelli%22&page=1 The default scope should be "page".

GShell itests fail when local Maven repo is specified through a system property
When the GShell itests are run with another Maven repo specified through the maven.repo.local system property, the tests fail. This is what happens e.g. in Hudson or TeamCity.

mvn eclipse:eclipse fails on aar and mar projects
This issue was originally reported here: http://markmail.org/message/4bpdnwvx5v4rpt6d When mvn eclipse:eclipse is executed on a Maven project with aar or mar packaging, it fails to create the .classpath file, and the .project file is basically empty (no natures and no builders). This makes it impossible to import the project into Eclipse. I solved a similar problem in Synapse by correcting the Plexus component descriptor of the corresponding Maven plugin (see r709895). This issue might also be related to AXIS2-2647.

Synapse should not start up with a LoadbalanceEndpoint without children
Synapse should not start up with a LoadbalanceEndpoint without children, as problems are deferred to runtime. Without children, no algorithm can be detected, causing NPEs at runtime. Simple patch is attached.

Creating a new User does not result in the geocoder getting called, but creating a new event does.
Creating a new user does not result in a call to the geocoder. This issue may be a task that is work in progress. In any case, it's not documented, and hence I'm filing this issue. (Creating a new event, of course, appropriately involves the geocoder.)

ADBXMLStreamReaderImpl generates unexpected START_DOCUMENT events
When trying to build Axis2 trunk with Axiom 1.2.9-SNAPSHOT (instead of 1.2.8), I noticed a regression in RPCCallTest#testCompanyArray related to the fact that StAXOMBuilder#next() no longer accepts START_DOCUMENT events: if a START_DOCUMENT event is received from the underlying parser in the middle of the document, an exception is triggered (this behavior changed in r744780). Obviously this should never happen if the XMLStreamReader behaves correctly, but under some circumstances ADBXMLStreamReaderImpl generates this type of event.

Rails version should use same FileLoader as PHP
The rails driver (harness hook specifically) doesn't use the FileLoader to re-load media files. Also, the fileloader.rb is outdated and should be removed, as we've switched to using fileloader.sh.

Tags that originate from Twitter API are present twice
Just tested the new changes from Vassil and the @[user] text gets through, but the tags are created twice. See screenshots.

Event literature pdf names are inconsistent with what the application expects.
fileloader.sh generates event literature of the form e<number>.pdf but the application expects files of the form e<number>l.pdf

Driver does not count event images correctly in the doEventDetail method
In the RoR driver, the count of the event images was only being done if the user was attending, and it then failed because not enough images were being loaded.

Driver does not check for accuracy in throughput to determine pass/fail
The driver only checks whether the various metrics reported in the summary report have passed the necessary criteria, but does not check whether the resulting throughput (ops/sec) is correct for the scale of the run (number of concurrent users). Thus, for example, it is possible to get 150 ops/sec for a run done with 1000 users and have it reported as PASSED.

TModelBag TModelKeyArray should not be null
In RegistryImpl.findBinding, we set the TModelBag, but do not give it a default TModelKeyArray. SOA Registry Foundation requires the TModelBag's TModelKeyArray to be set, or otherwise it will throw an error.

Rails Driver does not check the responseBody of doAddPerson or doAddEvent POSTs for flash messages indicating an error occurred
While running into a problem with MySQL and duplicate keys, which for most of a run resulted in no users or events being inserted into the database, it was seen that this kind of failure was not picked up by the Rails Driver, and therefore doAddPerson and doAddEvent operations were flagged as successful regardless of whether inserts were made or not. The symptom to look for is that if the add fails, the status returned by the HTTP POST is SC_OK and not SC_MOVED_TEMPORARILY (i.e. a redirect), and a flash message is added to the page: either "Could not create event" for failing to add an event, or "Failed to create user" for failing to add a user.

History files are given world readable permissions.
It is found that history files are being created with permissions 0777. On shared clusters this opens up too much. However, there is a requirement to allow Chukwa to read history files (see HADOOP-4705). This issue is to set up appropriate permissions for the files to be as restrictive as required, while still fixing the problem for Chukwa.

Operational attributes turned on by themselves
Hi, working against Novell eDirectory 8.7.3.9: when I click the New attribute button and add the userPassword attribute, Directory Studio automatically turns on the display of operational attributes, and I have to go to Preferences > LDAP > Attributes > Entry editor to turn them off again.

Edit party rates screen breaks because it has not been converted to BigDecimal

Ajax Data Grid (LiveGrid) example is not working.
The LiveGrid example is not working under the latest stable Firefox version (it can be seen in the examples on the Avoka server). The scroll bar for the grid is not displayed, nor does it work (i.e. it is not paginating/scrolling while it has the focus and the mouse is scrolling, or on page up/down). There's no JavaScript error in the Error Console either. I can remember that this example used to work (with previous versions of Firefox). Thank you, A.

QueryRunner is not thread-safe
sebb pointed out: "QueryRunner Javadoc says the class is thread-safe. However it has a protected mutable variable DataSource which can also be set/got via public methods. If one thread sets the variable, another may not see the correct value, so the class is not thread-safe." We should make the DataSource final and remove the setter.
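A minimal sketch of the proposed shape, assuming constructor injection (illustrative only, not the actual DbUtils patch):
{code}
import javax.sql.DataSource;

// Illustrative sketch only. A final field assigned in the constructor is
// guaranteed by the Java memory model to be visible to all threads after
// construction, which a mutable protected field with a setter cannot promise.
public class SafeQueryRunner {

    private final DataSource dataSource;

    public SafeQueryRunner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public DataSource getDataSource() {
        return dataSource;
    }
}
{code}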
Dependent Context Infinite Recursion
If there are two @Dependent webbeans that depend on each other, it creates an infinite recursion. Implement the latest spec's getInstanceToInject method to resolve the problem. It is a known issue for the M1 release.

Field.setValue(...) doesn't properly handle switching between byte[] and other types
This came up in PyLucene testing, based on Lucene 2.4.1. Thread here: http://pylucene.markmail.org/message/75jzxzqi3smp2s4z The problem is that Field.setValue does not fix up the isBinary boolean, so if you create a String field, and then do setValue(byte[]), you'll get an exception when adding a document containing that field to the index.

According to spec: Manager.addContext() has to throw IllegalArgumentException if there is more than 1 active context

exception when running OpenWebBeans + MyFaces apps in jetty
When starting the guess sample with $>mvn -Pjetty clean install jetty:run we get the following exception when navigating to the application at http://localhost:8080/guess 11:40:19,386 INFO WebBeansLifeCycle:103 - Initializing of the Request Context with Remote Address : 127.0.0.1 2009-02-14 11:40:19.390::WARN: EXCEPTION java.lang.IllegalStateException: No SessionHandler or SessionManager at org.mortbay.jetty.Request.getSession(Request.java:1131) at org.mortbay.jetty.Request.getSession(Request.java:1121) at org.apache.webbeans.context.ContextFactory.initRequestContext(ContextFactory.java:85) at org.apache.webbeans.lifecycle.WebBeansLifeCycle.requestStarted(WebBeansLifeCycle.java:67) at org.apache.webbeans.servlet.WebBeansConfigurationListener.requestInitialized(WebBeansConfigurationListener.java:66) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:752) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324)

Tag inherited from tag class in component hierarchy does not inherit properties [myfaces builder plugin]
Trinidad has a tag called tr:componentRef. Its implementation, org.apache.myfaces.trinidadinternal.taglib.ComponentRefTag, inherits from org.apache.myfaces.trinidadinternal.taglib.UIXComponentRefTag, which is tied to UIXComponentRef. In Tomahawk, the strategy is to create an abstract tag class that can be included before the generated tag class. But this case could also be valid. The solution is, when the tree is flattened, to inspect tags and, if some tag inherits from a component tag class, convert the component properties to attributes and inject them into the tag metadata.

If the subject of the verification mail does not exist, functionality should not be broken.
If the subject of the verification mail is defined in data, then while displaying the value of 'title' an error occurs, and a blank mail is sent.
TestParallelInitialization failed on NoSuchElementException
java.util.NoSuchElementException at java.util.AbstractList$Itr.next(AbstractList.java:350) at java.util.Collections.sort(Collections.java:162) at org.apache.hadoop.mapred.EagerTaskInitializationListener.resortInitQueue(EagerTaskInitializationListener.java:162) at org.apache.hadoop.mapred.EagerTaskInitializationListener.jobAdded(EagerTaskInitializationListener.java:137) at org.apache.hadoop.mapred.TestParallelInitialization$FakeTaskTrackerManager.submitJob(TestParallelInitialization.java:142) at org.apache.hadoop.mapred.TestParallelInitialization.testParallelInitJobs(TestParallelInitialization.java:185)

two rows can be inserted with the same value in a column that a unique constraint on that column should prevent
The following DDL allows two rows to be inserted with the same value in a column when a unique constraint on that column should prevent it. The select statement (see the end of this report) produces:
ij> ALBUMID |RANK |YEARRELEAS&|ALBUM
---------------------------------------------------------------------
11100 |1 |1945 |
13300 |1 |1966 |
2000 |7 |1974 |Songs in the Key of Life
88000 |12 |1971 |
4 rows selected
The first two rows have the same rank value of 1 despite there being a unique constraint on that column. Derby version: 10.4.1.3 Bryan Pendleton reproduced this and suggested that the problem "is related to the fairly new feature of Derby which allows definition of a unique constraint on a null-able column". https://issues.apache.org/jira/browse/DERBY-3330 Redefining the rank column as 'not null' made the problem go away. I came across this after running a program that randomly makes inserts, updates, and deletes into this table. It usually takes between 500 and 600 SQL statements to make it happen. I then took the results and hand-pruned out as many statements as I could and tried to minimize the number of rows produced by the select statement, while still reproducing the issue. At this point it is very sensitive to any changes. For example, re-running the test after removing what appear to be redundantly inserted rows will make the problem go away, as will modifications to band and album names. It's all very strange. A very old version of Cloudscape (3.6.9), from which I am trying to upgrade, does not have this problem.
------------------------------------------------------------------------- drop table tra; create table tra ( albumId bigint, rank int, CONSTRAINT UNIQUE_RANK UNIQUE(rank), band varchar(100), album varchar(100), yearReleased int, CONSTRAINT PK_TOPROCKALBUMS PRIMARY KEY(albumId) ); insert into tra values(1000, 1, '', '', 1966); insert into tra values(2000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(3000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(4000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(5000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(6000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(7000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(8000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(9000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(13000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(14000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(15000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(16000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(17000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(18000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(19000, 14, 'Joni ', 'Blue', 1971); insert into tra values(20000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(21000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(22000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(23000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(24000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(25000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(26000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(27000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(28000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(29000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(30000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(31000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(32000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(33000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(34000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(36000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(36000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(37000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(38000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(39000, 1, 'The Beatles', '', 1966); insert into tra values(40000, 7, 'Stevie Wonder', 'Songs 
in the Key of Life', 1974); insert into tra values(41000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(42000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(43000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(44000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(45000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(46000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(47000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(48000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(49000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(50000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(51000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(52000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(53000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(54000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(55000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(56000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(57000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(59000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(60000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(61000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(62000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(63000, 1, 'The Beatles', '', 1966); delete from tra where rank=1; insert into tra values(64000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(65000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(66000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(67000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(68000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(69000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(70000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(71000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(72000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(73000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(74000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(75000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(76000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(77000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(78000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(79000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); delete from tra where rank=14; insert into tra values(80000, 14, 'Joni Mitchell', 'Blue', 1971); insert into tra values(81000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(82000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra 
values(83000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(84000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(85000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(86000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(87000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(88000, 12, '', '', 1971); insert into tra values(89000, 1, 'The Beatles', '', 1966); insert into tra values(90000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(91000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(92000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(93000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(94000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(95000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(96000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(97000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(98000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(99000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10100, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10200, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10300, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10400, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10500, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); delete from tra where rank=1; insert into tra values(10600, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10700, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10800, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(10900, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11100, 1, 'The Beatles', '', 1966); insert into tra values(11200, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11300, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11400, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11500, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11600, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11700, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11800, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(11900, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12100, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12200, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); delete from tra where rank=14; insert into tra values(12300, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12400, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); update tra set yearReleased=1945 where rank=1; insert into tra values(12500, 7, 'Stevie Wonder', 'Songs in the 
Key of Life', 1974); insert into tra values(12600, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12700, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12800, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(12900, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(13000, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(13100, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(13200, 7, 'Stevie Wonder', 'Songs in the Key of Life', 1974); insert into tra values(13300, 1, 'The Beatles', '', 1966); select albumId, rank, yearReleased, album from tra order by rank; exit;

UidChangeTracker, UidToMsnConverter leak memory
Profiling indicates that UidToMsnConverter is a memory hog. In addition, MailboxListeners are not released when a session finishes. This results in a memory leak which (over time) produces OutOfMemory exceptions on the server.

camel-cxf, camel-msv and camel-stringtemplate cannot be installed
Some of the features for Camel components cannot be installed:
- camel-cxf
- camel-msv
- camel-spring-integration
- camel-stringtemplate

features-maven-plugin should not override well-known bundles with auto-discovered bundles
Currently, if the bundles file specifies e.g. version 2.5.6 of a Spring JAR, an auto-discovered bundle may just override that to e.g. 2.5.5

Shuffle copiers do not return Codecs back to the pool at the end of shuffling
At the end of shuffle, the copiers should return the codecs to the pool. This doesn't happen and can potentially lead to a significant memory leak on the reduce task (depending on how many shuffle copiers there are).

trying to access an ObjectMessage in the AMQ web console results in java.io.IOException: com.myclass...
The web console can be used to drill into individual messages and display both their properties as well as their payload. Displaying the payload generally works fine for TextMessages, etc., but fails for ObjectMessages and probably other binary payload formats. This is okay and kind of expected, but rather than throwing an exception with a large stack trace back to the browser client, we should capture the exception and display at least the message properties, plus a short message saying that the content cannot be displayed due to its binary format.
Error msg and stack trace that is thrown: {code} java.io.IOException: com.myclass at org.apache.activemq.command.ActiveMQObjectMessage.getObject(ActiveMQObjectMessage.java:179) at org.apache.activemq.web.MessageQuery.getBody(MessageQuery.java:78) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at javax.el.BeanELResolver.getValue(BeanELResolver.java:261) at com.sun.el.parser.AstValue.getValue(AstValue.java:138) at javax.el.CompositeELResolver.getValue(CompositeELResolver.java:143) at com.sun.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:206) at org.apache.jasper.runtime.PageContextImpl.evaluateExpression(PageContextImpl.java:984) at org.apache.jsp.message_jsp._jspx_meth_c_out_0(org.apache.jsp.message_jsp:400) at org.apache.jsp.message_jsp._jspx_meth_c_otherwise_0(org.apache.jsp.message_jsp:334) at org.apache.jsp.message_jsp._jspx_meth_c_choose_0(org.apache.jsp.message_jsp:151) at org.apache.jsp.message_jsp._jspService(org.apache.jsp.message_jsp:92) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:373) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:477) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:371) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:83) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.apache.activemq.web.SessionFilter.doFilter(SessionFilter.java:46) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:81) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:118) at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:52) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:829) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:514) at 
org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) {code}

Empty bundle.state file produces NPE
If I have an empty bundle.state file in the Felix cache, then an exception is thrown during bundle start: java.lang.NullPointerException at org.apache.felix.framework.cache.BundleArchive.getPersistentState(BundleArchive.java:315) at org.apache.felix.framework.Felix.start(Felix.java:776) But if I delete bundle.state, then no exception is thrown and org.apache.felix.framework.cache.BundleArchive#getPersistentState returns the 'Installed' state. The exception is thrown because the string comparisons in org.apache.felix.framework.cache.BundleArchive#getPersistentState do not respect java.io.BufferedReader#readLine's null return value if the file is empty. Also, there is a bug in org.apache.felix.framework.cache.BundleArchive#setPersistentState that can produce a non-persistent cache state by creating an empty bundle.state file.

JsonSerializer doesn't serialize dates as valid xs:date or xs:dateTime
Parts of the model that represent dates use java.util.Date and are serialized by the JsonSerializer by simply calling toString() on the date. The result is that dates are serialized as invalid xs:date or xs:dateTime. One solution would be to replace the use of Date with something like the gdata java client's DateTime, so that calling toString() would produce a proper xs:date or xs:dateTime.

The job history display needs to be paged
Currently the job history display will try to render the entire list of jobs that have run. That doesn't scale up as more and more jobs run on a job tracker.

Examples on Windows fail auth if run against Windows broker launched with defaults
For example, <snip>\qpid\cpp\examples\qmf-console>printevents.exe 2009-mar-13 10:54:47 warning Broker closed connection: 320, connection-forced: Unsupported mechanism Tracing from the Windows broker: <snip> 2009-mar-13 11:13:47 info No message store configured, persistence is disabled.
2009-mar-13 11:13:47 info SASL enabled 2009-mar-13 11:13:47 notice Listening on TCP port 5672 2009-mar-13 11:13:47 notice Broker running 2009-mar-13 11:13:59 debug RECV [127.0.0.1:4094] INIT(0-10) 2009-mar-13 11:13:59 info SASL: Mechanism list: ANONYMOUS PLAIN 2009-mar-13 11:13:59 trace SENT 127.0.0.1:4094 INIT(0-10) 2009-mar-13 11:13:59 trace SENT [127.0.0.1:4094]: Frame[BEbe; channel=0; {ConnectionStartBody: server-properties={qpid.federation_tag:V2:36:str16(3cb5c19b-865d-4ebf-9d1a-9d7fb862ac35)}; mechanisms=str16{V2:9:str16(ANONYMOUS), V2:5:str16(PLAIN)}; locales=str16{V2:5:str16(en_US)}; }] 2009-mar-13 11:13:59 trace RECV [127.0.0.1:4094]: Frame[BEbe; channel=0; {ConnectionStartOkBody: client-properties={qpid.client_pid:F4:int32(672),qpid.client_ppid:F4:int32(0),qpid.client_process:V2:0:str16(),qpid.session_flow:F4:int32(1)};mechanism=; response=xxxxxx; locale=en_US; }] 2009-mar-13 11:13:59 info SASL: Starting authentication with mechanism: 2009-mar-13 11:13:59 debug Exception constructed: Unsupported mechanism 2009-mar-13 11:13:59 trace SENT [127.0.0.1:4094]: Frame[BEbe; channel=0; {ConnectionCloseBody: reply-code=320; reply-text=connection-forced: Unsupported mechanism; }] 2009-mar-13 11:13:59 debug DISCONNECTED [127.0.0.1:4094] 2009-mar-13 11:13:59 info Delete AsynchIO queued; ops in progress
Workaround is to launch the broker with "--auth no".

TestMissingBlocksAlert fails on 0.20.
TestMissingBlocksAlert fetches the NameNode front page to verify that an expected warning exists on the page. The namenode here is part of a MiniDFSCluster, and it looks like JspHelper is not initialized properly when the JVM has both a DataNode and a NameNode. Trunk is not affected.

Lifecycle issues when using OSGi-packaged service assemblies
See the following exception for example: {code} javax.jbi.JBIException: SU has not been correctly deployed: {http://servicemix.apache.org/examples/camel}service:endpoint at org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl.checkComponentsStarted(ServiceAssemblyImpl.java:250) at org.apache.servicemix.jbi.deployer.artifacts.ServiceAssemblyImpl.init(ServiceAssemblyImpl.java:131) at org.apache.servicemix.jbi.deployer.impl.Deployer.registerServiceAssembly(Deployer.java:452) at org.apache.servicemix.jbi.deployer.impl.Deployer.registerDeployedServiceAssembly(Deployer.java:682) at org.apache.servicemix.jbi.deployer.impl.Deployer$2.addingService(Deployer.java:256) at org.osgi.util.tracker.ServiceTracker$Tracked.trackAdding(ServiceTracker.java:1030) at org.osgi.util.tracker.ServiceTracker$Tracked.track(ServiceTracker.java:1008) at org.osgi.util.tracker.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:933) at org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:846) at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:704) at org.apache.felix.framework.util.EventDispatcher.fireServiceEvent(EventDispatcher.java:635) at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:3393) at org.apache.felix.framework.Felix.access$000(Felix.java:39) at org.apache.felix.framework.Felix$1.serviceChanged(Felix.java:622) at org.apache.felix.framework.ServiceRegistry.fireServiceChanged(ServiceRegistry.java:576) at org.apache.felix.framework.ServiceRegistry.registerService(ServiceRegistry.java:86) at org.apache.felix.framework.Felix.registerService(Felix.java:2527) at org.apache.felix.framework.BundleContextImpl.registerService(BundleContextImpl.java:252) at
org.apache.felix.framework.BundleContextImpl.registerService(BundleContextImpl.java:230) at org.apache.servicemix.common.osgi.EndpointExporter.afterPropertiesSet(EndpointExporter.java:96) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1369) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1335) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:68) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:343) at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:308) at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:138) at java.lang.Thread.run(Thread.java:613) {code} java.lang.IllegalStateException: Can't overwrite cause java.lang.IllegalStateException: Can't overwrite cause at java.lang.Throwable.initCause(Throwable.java:320) at org.apache.james.imap.mailbox.MailboxException.<init>(MailboxException.java:43) at org.apache.james.imap.mailbox.MailboxException.<init>(MailboxException.java:29) at org.apache.james.mailboxmanager.torque.TorqueMailbox.recent(TorqueMailbox.java:388) IO exception while executing hadoop fs -touchz fileName Stack trace while executing hadoop fs -touchz command . 
[user@xyzhostname ~]$ hadoop fs -touchz test/new0LenFile2 09/03/05 23:31:21 WARN hdfs.DFSClient: Problem renewing lease for DFSClient_-661919204 java.io.IOException: Call to xxxxx-xxx.xxx.com/xxx.xxx.xxx.xxx:xxxx failed on local exception: java.nio.channels.ClosedByInterruptException

Bad artifact type is resolved (source is used instead of jar)
I have the following dependency in my ivy.xml: <dependency name="jaxws-tools" org="com.sun.xml.ws" rev="2.1.4" /> The Ivy retrieve reports a conflict on it, and decides to use jaxws-tools-2.1.4-sources.jar instead of jaxws-tools-2.1.4.jar:
[ivy:retrieve] downloading http://repo1.maven.org/maven2/com/sun/xml/ws/jaxws-tools/2.1.4/jaxws-tools-2.1.4-sources.jar ...
[ivy:retrieve] ............... (512kB)
[ivy:retrieve] .. (0kB)
[ivy:retrieve] [SUCCESSFUL ] com.sun.xml.ws#jaxws-tools;2.1.4!jaxws-tools.jar(source) (2337ms)
[ivy:retrieve] downloading http://repo1.maven.org/maven2/com/sun/xml/ws/jaxws-tools/2.1.4/jaxws-tools-2.1.4.jar ...
[ivy:retrieve] .............................................................................................................................................................................................(498kB)
[ivy:retrieve] .. (0kB)
[ivy:retrieve] [SUCCESSFUL ] com.sun.xml.ws#jaxws-tools;2.1.4!jaxws-tools.jar (1534ms)
[ivy:retrieve] :: resolution report :: resolve 5486ms :: artifacts dl 3899ms
[ivy:retrieve] :: evicted modules:
[ivy:retrieve] org.jvnet.staxex#stax-ex;RELEASE by [org.jvnet.staxex#stax-ex;1.2] in [default]
[ivy:retrieve] javax.xml.stream#stax-api;1.0 by [javax.xml.stream#stax-api;1.0-2] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 19 | 1 | 1 | 2 || 18 | 2 |
---------------------------------------------------------------------
[ivy:retrieve] :: retrieving :: aarf-ndc#zeppelin-ext-ws
[ivy:retrieve] confs: [default]
[ivy:retrieve] conflict on /Users/lalyos/prj/zeppelin/trunk/zeppelin-ext-ws/lib/default/jaxws-tools.jar in [default]: 2.1.4 won

Add Apache License to EditLogBackupOutputStream
The Apache License header is missing in EditLogBackupOutputStream.java.

'testRunTimeStatistics(org.apache.derbyTesting.functionTests.tests.lang.OffsetFetchNextTest)junit.framework.AssertionFailedError' on Windows
See e.g.
http://dbtg.thresher.com/derby/test/Daily/jvm1.6/testing/Limited/testSummary-754693.html
http://dbtg.thresher.com/derby/test/Daily/jvm1.5/testing/Limited/testSummary-754693.html
http://dbtg.thresher.com/derby/test/Daily/jvm1.4/testing/Limited/testSummary-754693.html
There were 2 failures:
1) testRunTimeStatistics(org.apache.derbyTesting.functionTests.tests.lang.OffsetFetchNextTest)junit.framework.AssertionFailedError
at org.apache.derbyTesting.functionTests.tests.lang.OffsetFetchNextTest.testRunTimeStatistics(OffsetFetchNextTest.java:605)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:105)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:21)
at junit.extensions.TestSetup.run(TestSetup.java:25)
at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
2) testRunTimeStatistics(org.apache.derbyTesting.functionTests.tests.lang.OffsetFetchNextTest)junit.framework.AssertionFailedError
at org.apache.derbyTesting.functionTests.tests.lang.OffsetFetchNextTest.testRunTimeStatistics(OffsetFetchNextTest.java:605)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at org.apache.derbyTesting.junit.BaseTestCase.runBare(BaseTestCase.java:105)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:21)
at junit.extensions.TestSetup.run(TestSetup.java:25)
at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:21)
at junit.extensions.TestSetup.run(TestSetup.java:25)
at org.apache.derbyTesting.junit.BaseTestSetup.run(BaseTestSetup.java:57)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:21)
at junit.extensions.TestSetup.run(TestSetup.java:25)
testCliDriver_union3 broken HIVE-308 seems to have broken the nightly build, see test TestCliDriver.testCliDriver_union3. http://hudson.zones.apache.org/hudson/job/Hive-trunk-h0.17/26/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_union3/
Issuing queries with COUNT(DISTINCT) on a column that may contain null values hits an NPE When issuing a query on a column that may contain a null value, I get an NPE. E.g. if 'middle_name' potentially holds null values, select count(distinct middle_name) from people; will fail with the below exception.
Other queries that work with the same input set: select distinct middle_name from people; select count(1), middle_name from people group by middle_name;
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:169)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:318)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2198)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.exec.GroupByOperator.process(GroupByOperator.java:424)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:164)
... 2 more
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:376)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:477)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.process(GroupByOperator.java:420)
... 3 more
Putting the overview.html file in a more appropriate location The overview.html file will now go into src/overview.html, as in Hadoop.
Error with the OSGi bundle cglib version number created (2.1.0.3 instead of 2.1.3) The bundle org\apache\servicemix\bundles\org.apache.servicemix.bundles.cglib\2.1_3_2 is a wrapper of the cglib.jar version 2.1.3, but in the MANIFEST file the packages are exported with the version 2.1.0.3:
Manifest-Version: 1.0
Built-By: 002597273
Created-By: Apache Maven Bundle Plugin
Bundle-License: http://www.apache.org/licenses/LICENSE-2.0.txt
Import-Package: net.sf.cglib.asm;version="2.1.0.3",net.sf.cglib.asm.attrs;version="2.1.0.3",net.sf.cglib.asm.util;resolution:=optional,net.sf.cglib.beans;version="2.1.0.3",net.sf.cglib.core;version="2.1.0.3",net.sf.cglib.proxy;version="2.1.0.3",net.sf.cglib.reflect;version="2.1.0.3",net.sf.cglib.transform;version="2.1.0.3",net.sf.cglib.transform.hook;version="2.1.0.3",net.sf.cglib.transform.impl;version="2.1.0.3",net.sf.cglib.util;version="2.1.0.3",org.apache.tools.ant;resolution:=optional,org.apache.tools.ant.types;resolution:=optional,org.codehaus.aspectwerkz.hook;resolution:=optional
Bnd-LastModified: 1236863035811
Export-Package: net.sf.cglib.reflect;uses:="net.sf.cglib.core,net.sf.cglib.asm";version="2.1.0.3",net.sf.cglib.core;uses:="net.sf.cglib.asm,net.sf.cglib.asm.util";version="2.1.0.3",net.sf.cglib.beans;uses:="net.sf.cglib.core,net.sf.cglib.asm";version="2.1.0.3",net.sf.cglib.transform.impl;uses:="net.sf.cglib.core,net.sf.cglib.transform,net.sf.cglib.asm";version="2.1.0.3",net.sf.cglib.transform;uses:="net.sf.cglib.core,org.apache.tools.ant.types,net.sf.cglib.asm,org.apache.tools.ant,net.sf.cglib.asm.attrs";version="2.1.0.3",net.sf.cglib.asm;version="2.1.0.3",net.sf.cglib.proxy;uses:="net.sf.cglib.reflect,net.sf.cglib.core,net.sf.cglib.asm";version="2.1.0.3",net.sf.cglib.asm.attrs;uses:="net.sf.cglib.asm";version="2.1.0.3",net.sf.cglib.util;uses:="net.sf.cglib.core,net.sf.cglib.asm";version="2.1.0.3",net.sf.cglib.transform.hook;uses:="net.sf.cglib.core,net.sf.cglib.asm,net.sf.cglib.transform,org.codehaus.aspectwerkz.hook";version="2.1.0.3"
Bundle-Version: 2.1.0.3_2-SNAPSHOT
Bundle-Name: Apache ServiceMix Bundles: cglib-2.1_3
Bundle-Description: This bundle simply wraps cglib-2.1_3.jar.
Build-Jdk: 1.5.0_16 Bundle-DocURL: http://www.apache.org/ Bundle-ManifestVersion: 2 Bundle-Vendor: Apache Software Foundation Bundle-SymbolicName: org.apache.servicemix.bundles.cglib CayenneRuntimeException in modeler due to ClassNotFoundException when java type is invalid and db attribute is null. To duplicate: 1. Open an existing project with the Cayenne Modeler. 2. Select an existing attribute on an existing object entity. Delete the Java Type field. 3. Navigate to some other object entity. 4. Go back to the original one and click the "attributes" tab. You will see that the GUI doesn't get updated when you switch to that tab. It looks like the modeler is calling Class.forName( "" ). I have managed to work around this by manually editing the map.xml file and filling in a valid java classname for the attribute in question. The following stacktrace is printed to the console: Exception in thread "AWT-EventQueue-0" org.apache.cayenne.CayenneRuntimeException: [v.3.0M5 Dec 09 2008 00:19:19] Failed to load class for name '': at org.apache.cayenne.map.ObjAttribute.getJavaClass(ObjAttribute.java:79) at org.apache.cayenne.modeler.editor.ObjAttributeTableModel.getValueAt(ObjAttributeTableModel.java:174) at javax.swing.JTable.getValueAt(JTable.java:2695) at javax.swing.JTable.prepareRenderer(JTable.java:5712) at javax.swing.plaf.basic.BasicTableUI.paintCell(BasicTableUI.java:2075) at javax.swing.plaf.basic.BasicTableUI.paintCells(BasicTableUI.java:1977) at javax.swing.plaf.basic.BasicTableUI.paint(BasicTableUI.java:1773) at javax.swing.plaf.ComponentUI.update(ComponentUI.java:143) at javax.swing.JComponent.paintComponent(JComponent.java:763) at javax.swing.JComponent.paint(JComponent.java:1027) at javax.swing.JComponent.paintChildren(JComponent.java:864) at javax.swing.JComponent.paint(JComponent.java:1036) at javax.swing.JViewport.paint(JViewport.java:747) at javax.swing.JComponent.paintChildren(JComponent.java:864) at javax.swing.JComponent.paint(JComponent.java:1036) at javax.swing.JComponent.paintToOffscreen(JComponent.java:5122) at javax.swing.BufferStrategyPaintManager.paint(BufferStrategyPaintManager.java:277) at javax.swing.RepaintManager.paint(RepaintManager.java:1217) at javax.swing.JComponent._paintImmediately(JComponent.java:5070) at javax.swing.JComponent.paintImmediately(JComponent.java:4880) at javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:803) at javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:714) at javax.swing.RepaintManager.seqPaintDirtyRegions(RepaintManager.java:694) at javax.swing.SystemEventQueueUtilities$ComponentWorkRequest.run(SystemEventQueueUtilities.java:128) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) Caused by: java.lang.ClassNotFoundException: at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at org.apache.cayenne.util.Util.getJavaClass(Util.java:588) at org.apache.cayenne.map.ObjAttribute.getJavaClass(ObjAttribute.java:76) ... 
31 more
JMXAgent - If no privileges then lifecycle is null If you run Camel on OC4J, it will by default not grant access to its JMX MBeanServer. So if you use jmxAgent in the Camel Spring configuration, it is set up but Camel cannot access it, and this causes an NPE when you do an endpoint lookup later.
Small typos in ecommerce & webpos components Small typos in the ecommerce & webpos components are creating errors.
Duplicate data from decoder I got duplicate messages in messageReceived(...) in my IO handler adapter, but there is no such problem while decoding the messages (each message is decoded only once). Rolling back to M3 seems to solve the problem. Note: SslFilter is used.
org.apache.derby.impl.load.Import needs to escape single quotes The code that builds the SQL statement that invokes the Import VTI doesn't properly escape single quotes. This causes problems for users, see: http://mail-archives.apache.org/mod_mbox/db-derby-user/200901.mbox/%3c21754463.post@talk.nabble.com%3e Import.performImport() is the method that needs to be fixed.
CLONE - WSDL2Java: minOccurs and maxOccurs in <sequence>/<choice> are not respected. The following is valid WSDL code:
<xsd:element name="ClaimMultipleElementsResult" type="tns:ClaimMultipleElementsResultType"/>
<xsd:complexType name="ClaimMultipleElementsResultType">
  <xsd:sequence maxOccurs="unbounded" minOccurs="0">
    <xsd:element name="Element" type="xsd:hexBinary" maxOccurs="1" minOccurs="1"/>
    <xsd:element name="ElementId" type="xsd:int" maxOccurs="1" minOccurs="1"/>
  </xsd:sequence>
</xsd:complexType>
This means the ClaimMultipleElementsResultType can contain zero or more pairs of Element / ElementId. However, the stub class ClaimMultipleElementsResultType created by WSDL2Java provides access to only *one* pair of Element / ElementId. I'm not sure whether this is a duplicate of issue AXIS2-840.
TarArchiveEntry(File) now crashes on file system roots The TarArchiveEntry(File) constructor now crashes if the File argument is a file system root. For example, on my Windows box, I want to back up the entire contents of my F drive, so I am supplying a File argument that is constructed as new File("F:\\") That particular file causes the TarArchiveEntry(File) constructor to fail as follows: Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: -1 at java.lang.StringBuffer.charAt(StringBuffer.java:162) at org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:245) Looking at the code (I downloaded revision 743098 yesterday), it is easy to see why this occurred:
1) the if (osname != null) { logic will strip the "F:" from my path name of "F:\", leaving just the "\"
2) that "\" will then be turned into a single "/" by the fileName = fileName.replace(File.separatorChar, '/'); line
3) that single "/" will then be removed by the while (fileName.startsWith("/")) { logic, leaving the empty string ""
4) then line #245 if (this.name.charAt(this.name.length() - 1) != '/') { must crash, because it falsely assumes that fileName has content.
THIS IS A SHOW STOPPER BUG FOR ME. I am not sure when this current behavior of TarArchiveEntry was introduced; a very old codebase (from 2+ years ago) of compress that I used to use handled file system roots just fine. There are many ways to fix this. For instance, if it is, in fact, OK for the name field to be empty, then you can simply put a check on line #245 as follows: if ( (name.length() > 0) && (name.charAt(name.length() - 1) != '/') ) { (NOTE on coding style: do you really need to use "this."
in the constructor when there is no possible ambiguity? It makes your code wordier and therefore harder to read.) My guess, not knowing your full codebase well, is that it is NOT OK for name to be blank. For example, you seem to want directories to end with a '/' char, and file system roots are always directories. Therefore, you have some decisions to make:
a) is it OK for the name field to simply be "/" in the case of file system roots?
b) if a) is not good for some reason, then you must introduce an artificial root name, so that name takes on a value like "filesystemRoot/" or "filesystemRoot_F/" or whatever.
This bug, by the way, brings up another issue: there currently are no javadocs regarding field contracts. Every field's javadoc needs its constraints to be specified as a contract, for example:
/**
 * The entry's name.
 * <p>
 * Contract: is never null (and never empty?).
 * Contains (only ASCII chars? any Unicode chars?).
 * Must be (<= 100 chars? unlimited number of chars?).
 * If {@link #file} is a directory, then must end in a '/' char.
 * etc...
 */
private StringBuffer name;
JcrModifiablePropertyMap remove method doesn't remove A call to this Map's remove method does not remove the entry from the map, nor are the removals persisted on a call to its save method. I guess this is due to a mistake in the method at line 108: final Object oldValue = this.cache.get(key); Here the value is only read but not removed (a corrected sketch follows at the end of this section).
"HTML form buttons HOWTO" tutorial in Cookbook no longer works In the "HTML form buttons HOWTO" tutorial, it is said that "When a button is pressed, a parameter is set in the framework with the name and value that are specified as the name and value attributes of your HTML button. The framework converts this automatically to boolean value if an appropriate property of the Action is found." In Struts 2.0.11, this is not the case. The framework only converts to "true" if the parameter is set to the String "true". So the only way to make the tutorial's example work is to define the input tags like this: <input type="submit" name="buttonOnePressed" value="true"> <input type="submit" name="buttonTwoPressed" value="true"> This renders the example useless, as the value attribute defines the button text.
Issue while persisting sharedLib attribute in multicore solr.xml I executed a core admin command to dynamically create a new core in Solr with the persist flag set to true as mentioned here: http://wiki.apache.org/solr/CoreAdmin#head-7ca1b98a9df8b8ca0dcfbfc49940ed5ac98c4a08. The core properties like name and instanceDir were persisted properly in the solr.xml. However, the relative path specified in the sharedLib attribute of the top-level "solr" element got converted to its absolute path. This caused errors in loading the classes in the sharedLib when the server is subsequently restarted. Manually changing the sharedLib back to its relative path fixes this issue.
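As referenced in the JcrModifiablePropertyMap report above, here is a minimal sketch of what a corrected remove() could look like. The field names (cache, changedProperties) and the save-time bookkeeping are illustrative assumptions, not the actual implementation:
{code}
// Sketch: remove() must actually take the entry out of the in-memory cache
// (remove() rather than get()) and record the key so that save() can also
// delete the backing JCR property. Field names are illustrative.
public Object remove(Object key) {
    final Object oldValue = this.cache.remove(key);  // was: this.cache.get(key)
    this.changedProperties.add(key.toString());      // lets save() delete the JCR property too
    return oldValue;
}
{code}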
quickstart archetype adds invalid maven-compiler-plugin configuration the maven-compiler-plugin configuration created by the quickstart archetype contains an invalid tag: <optimise> this should be <optimize> <plugin> <artifactId>maven-compiler-plugin</artifactId> <inherited>true</inherited> <configuration> <source>1.5</source> <target>1.5</target> *<optimise>true</optimise>* <debug>true</debug> </configuration> </plugin> With Oracle, OpenJPA allows setting non-nullable field to null An entity has a field defined as follows: @Column(nullable=false) private Object nonNullableObject; Using Oracle, it is possible to set the value of this column to null. OpenJPA will not complain, but will instead store whatever is returned by oracle.sql.BLOB.empty_lob(). An exception should be thrown instead, because the field has been defined as non-nullable. Adding a new node to a cluster node that has recovered messages from disk fails. Instead you get something like: 2009-mar-16 14:28:41 error Connection exception: framing-error: Unexpected command start frame. (qpid/SessionState.cpp:57) EntityOperator.IN will crash on some databases with empty list If you use the following entity exr, new EntityExpr("orderId", EntityOperator.IN, new ArrayList()); It will crash on at least Derby. The reason is that this condition evaluates to the keyword FALSE, which apparently is not supported on Derby. The problem code is in EntityComparisonOperator.java: // if this is an IN operator and the rhs Object isEmpty, add "FALSE" instead of the normal SQL if (this.idInt == EntityOperator.ID_IN && UtilValidate.isEmpty(rhs)) { sql.append("FALSE"); return; } Perhaps this is over engineered? What happens if we just leave this out and let it generate an "IN ()" SQL? GzipCodec fails second time it is used in a process The attached code (GZt.java) raises: {noformat} java.io.IOException: incorrect header check at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method) at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221) at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:80) at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74) at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:62) at java.io.DataInputStream.readByte(DataInputStream.java:248) at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:325) at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:346) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1853) at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1876) at org.apache.hadoop.io.MapFile$Reader.readIndex(MapFile.java:319) at org.apache.hadoop.io.MapFile$Reader.seekInternal(MapFile.java:435) at org.apache.hadoop.io.MapFile$Reader.seekInternal(MapFile.java:417) at org.apache.hadoop.io.MapFile$Reader.seek(MapFile.java:404) at org.apache.hadoop.io.MapFile$Reader.get(MapFile.java:523) {noformat} Balancer throws "Not a host:port pair" unless port is specified in fs.default.name If fs.default.name is specified as only a hostname (with no port), balancer throws "Not a host:port pair" and will not run. Workaround is to add default port 8020 to fs.default.name. According to Doug: "That's the work-around, but it's a bug. One should not need to specify the default port number (8020)." According to Raghu Angadi: "Balancer should use NameNode.getAddress(conf) to get NameNode address." 
See http://www.nabble.com/Not-a-host%3Aport-pair-when-running-balancer-td22459259.html for discussion.
aggregation on empty table should still return 1 row The query "SELECT COUNT(1) FROM f_status_update fsu WHERE FALSE" should return a single row with value 0. Our code treats that query as "SELECT 1, COUNT(1) FROM f_status_update fsu WHERE FALSE GROUP BY 1", but these 2 queries are not equivalent, because the second query will return an empty result if the input is empty.
Update the "homepage" URL to refer to the new org.oasisopen URL Currently various files refer to the "homepage" for SCA as: http://www.osoa.org/display/Main/Service+Component+Architecture+Specifications This should be updated to refer to the equivalent OASIS Open SCA-J page.
NameNode should not send empty block replication request to DataNode On our production clusters, we occasionally see that the NameNode sends an empty block replication request to a DataNode on every heartbeat, thus blocking this DataNode from replicating or deleting any block. This is partly caused by the DataNode sending a wrong number of replications in progress, which will be fixed by HADOOP-5465. There is also a flaw on the NameNode side. The NameNode should not interpret the number of replications in progress as the number of targets, since replication is done through a pipeline. It also should make sure that no empty replication request is sent to a DataNode.
JsonSerializer does not handle POJOs in JSONArrays correctly JSONSerializer just toString()'s JSONArrays, which doesn't handle embedded POJOs correctly.
SchemaToolTask does not have "dropTables" argument The task SchemaToolTask does not implement the "dropTables" argument. According to the documentation [1] this task can take a "dropTables" argument, but when I run it as shown below I get this error: The <schematool> type doesn't support the "droptables" attribute. <schematool dropTables="false" action="refresh"> <fileset dir="${build.sql.dir}"> <include name="schema.xml" /> </fileset> <config propertiesFile="${prototype.src.model.base}/META-INF/persistence.xml" /> </schematool> In fact, I looked in the source code and this task has no set method for the "dropTables" argument. I use: revision.number=422266:683325 openjpa.version=1.2.0 [1] http://openjpa.apache.org/builds/1.2.0/apache-openjpa-1.2.0/docs/manual/ref_guide_schema_schematool.html Thanks,
Failed object creation may result in invalid active count in GKOP In GenericKeyedObjectPool.borrowObject() there are two instances of:
try {
    _factory.destroyObject(key,pair.value);
    synchronized (this) {
        pool.decrementActiveCount();
    }
} catch (Exception e2) {
    // swallowed
}
The decrementing of the active count should be moved to a finally block to ensure that it is always called even if destroyObject() throws an exception. I think this is what DCP-34 is getting at.
NullPointerException during AJAX form submit To reproduce this bug:
* Check out the Ars Machina Example Project from SVN: https://ars-machina.svn.sourceforge.net/svnroot/ars-machina/example/branches/1.1
* Run its Main class (default package)
* Click the login link
* Log in as manager (login) manager (password)
* Click the project listings link
* Click one of the edit links (the second icon)
* Click the submit button.
Relevant part of the stack trace: java.lang.NullPointerException at org.apache.tapestry5.dom.Element.createNamespaceURIToPrefix(Element.java:676) at org.apache.tapestry5.dom.Element.toMarkup(Element.java:333) at org.apache.tapestry5.dom.Element.writeChildMarkup(Element.java:870) at org.apache.tapestry5.dom.Element.toMarkup(Element.java:386) at org.apache.tapestry5.dom.Element.writeChildMarkup(Element.java:870) at org.apache.tapestry5.dom.Element.toMarkup(Element.java:386) at org.apache.tapestry5.dom.Element.writeChildMarkup(Element.java:870) at org.apache.tapestry5.dom.Element.getChildMarkup(Element.java:883) at org.apache.tapestry5.internal.services.PageRenderQueueImpl.renderPartial(PageRenderQueueImpl.java:163) HTML compiler infinite loop When I attempt to run the HTML documentation generator on the following .thrift file, the generator writes 48 lines of legitimate HTML and then begins to spin writing nothing but newline characters to the generated HTML file. It will continue to do this until the process is killed. Contents of .thrift file: php_namespace api namespace as3 com.amiestreet.api enum AmieApiErrorCode { API_EC_UNKNOWN = 0 API_EC_METHOD = 1 API_EC_BADFINGERPRINT = 2 API_EC_ARTIST_ALREADY_EXISTS = 3 API_EC_USER_NOT_FOUND = 4 API_EC_SONG_NOT_FOUND = 5 API_EC_ALBUM_NOT_FOUND = 6 API_EC_ARTIST_NOT_FOUND = 7 API_EC_PLAYLIST_NOT_FOUND = 8 API_EC_ALBUM_ALREADY_EXISTS = 9 API_EC_SONG_ALREADY_EXISTS = 10 API_EC_LOGIN_REQUIRED = 11 API_EC_REC_NOT_FOUND = 12 API_EC_SEARCH_ERROR = 13 API_EC_SEARCH_FIND_MISMATCH = 14 API_EC_SEARCH_FIND_ARTIST_REQUIRED = 15 } typedef string errorMessage typedef string userFingerprint /** * The id, name or dasherized name of an artist. * Examples: * - The Walkmen * - the-walkmen * - 6KNMnd6NoOUx */ typedef string artistIdentifierRef Job with output hdfs:/user/<username>/outputpath (no authority) fails with Wrong FS Using namenode with default port of 8020. When starting a job with output hdfs:/user/knoguchi/outputpath, my job fails with Wrong FS: hdfs:/user/knoguchi/outputpath, expected: hdfs://aaa.bbb.cc C++ broker crashes periodically when handling connection closure Periodically when running the .NET WCF client against the C++ broker running on windows, the broker crashes. This occurs every 10 runs or thereabouts. Logs and stacks attached. C++ Windows; IntegerTypes.h need not define size_t The qpid/cpp/src/qpid/sys/windows/IntegerTypes.h file typedefs size_t. This is not necessary for VC9, and the definition conflicts with the proper def for 64-bit builds. Removing the typedef for size_t works for both 32- and 64-bit builds. Failures in Transform don't stop the job If the program executed via a SELECT TRANSFORM() USING 'foo' exits with a non-zero exit status, Hive proceeds as if nothing bad happened. The main way that the user knows something bad has happened is if the user checks the logs (probably because he got no output). This is doubly bad if the program only fails part of the time (say, on certain inputs) since the job will still produce output and thus the problem will likely go undetected. Abstract Transport in Ruby #read method should throw NotImplementedException It's really an abstract method, so not overriding it is an error. We should throw an error when someone calls it inappropriately. The same can probably be said for #write. #open and #open? might also want to change, though that's not quite as certain. 
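The Ruby transport report above asks for abstract methods that fail loudly instead of silently doing nothing. Since the rest of this document's examples are Java, here is the same guard sketched in Java terms; the class and method signatures are illustrative, not Thrift's actual Java API:
{code}
// Illustrative base class: a subclass that forgets to override read()/write()
// gets an immediate, descriptive error instead of silent misbehavior.
public abstract class TransportBase {
    public int read(byte[] buf, int off, int len) {
        throw new UnsupportedOperationException(
            "read() is not implemented by " + getClass().getName());
    }

    public void write(byte[] buf, int off, int len) {
        throw new UnsupportedOperationException(
            "write() is not implemented by " + getClass().getName());
    }
}
{code}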
'Schema <schemaname> does not exist' when constraint used in table definition https://issues.apache.org/jira/browse/DERBY-568#action_12524420 In the response to my original comment post, which you can find via the permalink above, I was encouraged to file this as a new issue. Verified this back to 10.1.2.1 with the following ij script. connect 'jdbc:derby:wombat;create=true;user=blogs'; CREATE TABLE BLOGSCOM__BLOGS__USERS(PK INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,username VARCHAR(16) NOT NULL CONSTRAINT BLOGSCOM__BLOGS__USERS_UNIQUE_username UNIQUE CONSTRAINT BLOGSCOM__BLOGS__USERS_PASSWORD_username CHECK(LENGTH(username)>7),password VARCHAR (32672) NOT NULL , PRIMARY KEY(PK));
TarArchiveEntry uses bad size for directories in the File-arg constructor The File-arg constructor sets the size to File.length(), which may be different from 0 for directories on Unix-like systems. This leads to archives with entries that claim to have a length of, say, 4096 bytes but in fact don't contain any data at all, rendering the created archive invalid.
JMX instrumentation - will add DeadLetterChannel even if you have defined to use NoErrorHandler See Nabble: http://www.nabble.com/StreamCaching-in-Camel-1.6-td22305654s22882.html I will add a unit test that demonstrates this: org.apache.camel.processor.ChoiceNoErrorHandlerTest The route should at all times *not* contain any error handler at all, regardless of whether JMX is enabled or not.
Standard module names in 2.x modules The 2.x code base should follow the below standard for module names.
Samples: Apache Tuscany SCA Sample <modulename>
iTests: Apache Tuscany SCA iTest <modulename>
Demos: Apache Tuscany SCA Demo <modulename>
org.apache.servicemix.jdbc.JDBCAdapter: bug in Statements and addition of new functionality doLoadData There was a bug in the SQL statement of the method 'getFindAllDataStatement()' of the class Statements of the package 'org.apache.servicemix.jdbc' in the library 'servicemix-services'. I have also added a functionality to the class JDBCAdapter of the same package to retrieve all the data from the table. It is the method 'doLoadData' with only the connection as argument, which returns a map of string and byte array. You will find the patches for the three classes of the package in attachment. Can you tell me if you will include these in the next releases? Thanks in advance, Anne Noseda.
Jcr-Server: BasicCredentialsProviderTest throws NPE if defaultAuthHeader init param misses the password Issue reported by Dominique Jaeggi: a missing-auth-header init param that has the form "uid" instead of "uid:pw" or "uid:" results in an NPE upon SimpleCredentials creation.
ObjectMBean throws NullPointerException when accessing non-existing attributes Currently, when someone tries to access an attribute via JMX that doesn't exist, the ObjectMBean throws a NullPointerException. It would be better to precheck whether the attribute exists and throw an AttributeNotFoundException with a short message, like "Attribute <XYZ> doesn't exist".
Stacktrace:
java.lang.NullPointerException
at org.apache.mina.integration.jmx.ObjectMBean.getAttribute(ObjectMBean.java:168)
at MyTestClass.main(MyTestClass.java:12)
Exception in thread "main" javax.management.MBeanException
at org.apache.mina.integration.jmx.ObjectMBean.throwMBeanException(ObjectMBean.java:849)
at org.apache.mina.integration.jmx.ObjectMBean.getAttribute(ObjectMBean.java:173)
at MyTestClass.main(MyTestClass.java:12)
Caused by: java.lang.NullPointerException
at org.apache.mina.integration.jmx.ObjectMBean.getAttribute(ObjectMBean.java:168)
... 1 more
problems with relationships when using nested contexts and ROP The problem reveals itself in several ways. I'll attach a JUnit case to show one of the problems.
BYE Illegal Tag does not close connection Whenever a BYE is issued, the connection should be cut and the server recycled.
Bundle cache is not cleared when *BundlePersistenceManager is closed The close method of persistence managers is responsible for releasing all acquired resources. In the case of BundlePersistenceManager it should also free memory by clearing the bundle cache.
TaskMemoryManagerThread crashes in a corner case The TT's stdout says:
{code}
Exception in thread "org.apache.hadoop.mapred.TaskMemoryManagerThread" java.lang.NullPointerException
at org.apache.hadoop.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:126)
at org.apache.hadoop.mapred.TaskMemoryManagerThread.run(TaskMemoryManagerThread.java:200)
{code}
TaskMemoryManager crashes and no further memory management is done.
@Replicated is only recognized for the root instance but not the instances reachable from the root during the persist() operation. The root instance is replicated across multiple slices, but the closure of the root is assigned to the first slice only, even when the instances in the closure are @Replicated themselves.
Corba port 1050 is not released after stopping j2ee-corba-yoko configuration After stopping the CORBA component in the console, port 1050 is not released. This can be seen with the netstat command or by connecting with the Eclipse debugger: two threads (Yoko:Server:StartedThread) are still running, and it seems one is still blocked in the ServerSocket.accept method. As a result, I could not restart the Corba service in the admin console, due to the address already being in use. Thanks for any comment!
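The Yoko report above is the classic blocked-accept pattern: a thread sitting in ServerSocket.accept() holds the port until the socket itself is closed. A minimal sketch of the usual remedy, closing the socket from the stopping thread so accept() unblocks and the port is released (illustrative Java, not Yoko's actual server loop):
{code}
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class AcceptLoop implements Runnable {
    private final ServerSocket serverSocket;
    private volatile boolean running = true;

    public AcceptLoop(int port) throws IOException {
        this.serverSocket = new ServerSocket(port);
    }

    public void run() {
        while (running) {
            try {
                Socket client = serverSocket.accept(); // blocks until a connection or close()
                // ... hand the client off to a worker ...
            } catch (IOException e) {
                // accept() throws once stop() closes the socket; fall out of the loop
            }
        }
    }

    public void stop() throws IOException {
        running = false;
        serverSocket.close(); // unblocks accept() and releases the port immediately
    }
}
{code}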
java.lang.IllegalStateException: BeanFactory not initialized or already closed - call 'refresh' before accessing beans via the ApplicationContext Hi, When I run the following spring DSL in SMX4 <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:camel="http://camel.apache.org/schema/spring" xmlns:cxf="http://camel.apache.org/schema/cxf" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://camel.apache.org/schema/osgi http://camel.apache.org/schema/osgi/camel-osgi.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd"> <bean id="bindyDataformat" class="org.apache.camel.dataformat.bindy.csv.BindyCsvDataFormat"> <constructor-arg type="java.lang.String" value="org.apache.camel.example.reportincident.model" /> </bean> <bean id="csv" class="org.apache.camel.example.reportincident.csv.CsvBean" /> <bean id="OK" class="org.apache.camel.example.reportincident.OutputReportIncident"> <property name="code" value="0"/> </bean> <camelContext trace="true" xmlns="http://camel.apache.org/schema/osgi"> <camel:package>org.apache.camel.example.reportincident.routing</camel:package> <!-- File route --> <camel:route> <camel:from uri="file://d:/temp/data/?moveExpression=d:/temp/done/${file:name}" /> <camel:unmarshal ref="bindyDataformat" /> <camel:to uri="bean:csv" /> </camel:route> <!-- CXF route --> <camel:route> <camel:from uri="cxf://http://localhost:8080/camel-example/incident?serviceClass=org.apache.camel.example.reportincident.service.ReportIncidentEndpoint&amp;wsdlURL=wsdl/report_incident.wsdl" /> <camel:convertBodyTo type="org.apache.camel.example.reportincident.InputReportIncident" /> <camel:to uri="log:cxf" /> <camel:transform> <camel:method bean="OK" method="code"/> </camel:transform> </camel:route> </camelContext> </beans> , I receive the following error : 15:48:59,209 | ERROR | xtenderThread-15 | OsgiBundleXmlApplicationContext | gatedExecutionApplicationContext 366 | Post refresh error java.lang.IllegalStateException: BeanFactory not initialized or already closed - call 'refresh' before accessing beans via the ApplicationContext at org.springframework.context.support.AbstractRefreshableApplicationContext.getBeanFactory(AbstractRefreshableApplicationContext.java:153) at org.springframework.context.support.AbstractApplicationContext.containsBean(AbstractApplicationContext.java:892) at org.apache.cxf.configuration.spring.ConfigurerImpl.configureBean(ConfigurerImpl.java:141) at org.apache.cxf.configuration.spring.ConfigurerImpl.configureBean(ConfigurerImpl.java:111) at org.apache.cxf.transport.http.AbstractHTTPTransportFactory.configure(AbstractHTTPTransportFactory.java:229) at org.apache.cxf.transport.http.AbstractHTTPTransportFactory.configure(AbstractHTTPTransportFactory.java:224) at org.apache.cxf.transport.http_jetty.JettyHTTPTransportFactory.createDestination(JettyHTTPTransportFactory.java:121) at org.apache.cxf.transport.http_jetty.JettyHTTPTransportFactory.getDestination(JettyHTTPTransportFactory.java:103) at org.apache.cxf.endpoint.ServerImpl.initDestination(ServerImpl.java:90) at org.apache.cxf.endpoint.ServerImpl.<init>(ServerImpl.java:69) at org.apache.cxf.frontend.ServerFactoryBean.create(ServerFactoryBean.java:121) at 
org.apache.cxf.jaxws.JaxWsServerFactoryBean.create(JaxWsServerFactoryBean.java:168) at org.apache.camel.component.cxf.CxfConsumer.<init>(CxfConsumer.java:102) at org.apache.camel.component.cxf.CxfEndpoint.createConsumer(CxfEndpoint.java:95) at org.apache.camel.impl.EventDrivenConsumerRoute.addServices(EventDrivenConsumerRoute.java:62) at org.apache.camel.Route.getServicesForRoute(Route.java:74) at org.apache.camel.impl.RouteService.doStart(RouteService.java:77) at org.apache.camel.impl.ServiceSupport.start(ServiceSupport.java:50) at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:743) at org.apache.camel.spring.SpringCamelContext.maybeDoStart(SpringCamelContext.java:165) at org.apache.camel.spring.SpringCamelContext.doStart(SpringCamelContext.java:160) at org.apache.camel.impl.ServiceSupport.start(ServiceSupport.java:50) at org.apache.camel.spring.SpringCamelContext.maybeStart(SpringCamelContext.java:95) at org.apache.camel.spring.SpringCamelContext.onApplicationEvent(SpringCamelContext.java:114) at org.springframework.context.event.SimpleApplicationEventMulticaster$1.run(SimpleApplicationEventMulticaster.java:78) at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49) at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:76) at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:274) at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:736) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.finishRefresh(AbstractDelegatedExecutionApplicationContext.java:380) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:346) at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:308) at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:138) at java.lang.Thread.run(Thread.java:595) but the error is not generated when running outside of an OSGI server (SMX4) Exceptions in SoapExternalService under heavy load Doing some quick benchmarking with in-memory processes (so when we start having a lot of invocations happening in parrallel), I get quite of few NPEs and AxisFault in SoapExternalService.applySecuritySettings(). I've pasted the stacks below and attached the logs for more. 19:00:40,930 ERROR [org.apache.ode.jacob.vpu.JacobVPU] [ODEServer-3] Method "run" in class "org.apache.ode.bpel.runtime.INVOKE" threw an unexpected exception. 
java.lang.NullPointerException at org.apache.ode.axis2.SoapExternalService.applySecuritySettings(SoapExternalService.java:246) at org.apache.ode.axis2.SoapExternalService.getServiceClient(SoapExternalService.java:240) at org.apache.ode.axis2.SoapExternalService.invoke(SoapExternalService.java:130) at org.apache.ode.axis2.MessageExchangeContextImpl.invokePartner(MessageExchangeContextImpl.java:52) at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.invoke(BpelRuntimeContextImpl.java:769) at org.apache.ode.bpel.runtime.INVOKE.run(INVOKE.java:100) at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451) at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139) at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:858) at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:206) at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:220) at org.apache.ode.bpel.engine.BpelProcess.handleWorkEvent(BpelProcess.java:392) at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:396) at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:387) at org.apache.ode.scheduler.simple.SimpleScheduler$4$1.call(SimpleScheduler.java:390) at org.apache.ode.scheduler.simple.SimpleScheduler$4$1.call(SimpleScheduler.java:384) at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:208) at org.apache.ode.scheduler.simple.SimpleScheduler$4.call(SimpleScheduler.java:383) at org.apache.ode.scheduler.simple.SimpleScheduler$4.call(SimpleScheduler.java:380) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) 19:01:09,945 ERROR [org.apache.ode.axis2.SoapExternalService] [ODEServer-42] Error sending message to Axis2 for ODE mex {PartnerRoleMex#4611686018427392460 [PID {http://example.com/SyncProcess/SyncProcess}SyncProcess-4] calling null.scheduleWorkOrder(...)} org.apache.axis2.AxisFault: The anonymous_service_99d2dffa-127f-4109-8b6e-d472e120899e-7 service, which is not valid, does not belong to the anonymous_service_99d2dffa-127f-4109-8b6e-d472e120899e-7 service group. 
at org.apache.axis2.context.ServiceGroupContext.getServiceContext(ServiceGroupContext.java:138)
at org.apache.axis2.client.ServiceClient.setAxisService(ServiceClient.java:829)
at org.apache.ode.axis2.SoapExternalService.getServiceClient(SoapExternalService.java:237)
at org.apache.ode.axis2.SoapExternalService.invoke(SoapExternalService.java:130)
at org.apache.ode.axis2.MessageExchangeContextImpl.invokePartner(MessageExchangeContextImpl.java:52)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.invoke(BpelRuntimeContextImpl.java:769)
at org.apache.ode.bpel.runtime.INVOKE.run(INVOKE.java:100)
at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
Index split deadlock After doing some research on the mailing list, it appears that the index split deadlock is a known behaviour, so I will start by describing the theoretical problem first and then follow with the details of my test case. If you have concurrent select and insert transactions on the same table, the observed locking behaviour is as follows:
- the select transaction acquires an S lock on the root block of the index and then waits for an S lock on some uncommitted row of the insert transaction
- the insert transaction acquires X locks on the inserted records and, if it needs to do an index split, creates a sub-transaction that tries to acquire an X lock on the root block of the index
In summary: INDEX LOCK followed by ROW LOCK + ROW LOCK followed by INDEX LOCK = deadlock
In the case of my project this is an important issue (lack of concurrency after being forced to use table-level locking) and I would like to contribute to the project and fix this issue (if possible). I was wondering if someone who knows the code can give me a few pointers on the implications of this issue:
- Is this a limitation of the top-down algorithm used?
- Would fixing it require using a bottom-up algorithm for better concurrency (which is certainly non-trivial)?
- Trying to break the circular locking above, I would first question why the select transaction needs to acquire (and hold) a lock on the root block of the index. Would it be possible to ensure the consistency of the select without locking the index?
-----
The attached test (InsertSelectDeadlock.java) tries to simulate a typical data collection application; it consists of:
- an insert thread that inserts records in batches
- a select thread that 'processes' the records inserted by the other thread: 'select * from table where id > ?'
The Derby log provides details of the deadlock trace, and stacktraces_during_deadlock.txt shows that the insert thread is doing an index split. The test was run on 10.2.2.0 and 10.3.1.4 with identical behaviour. Thanks, Bogdan Calmac.
regression in QuorumPeerMain, tickTime from config is lost, cannot start quorum ZOOKEEPER 330/336 caused a regression in QuorumPeerMain -- cannot reliably start a cluster due to missing tickTime.
Improve calculation of refSize in ClassSize.java java/engine/org/apache/derby/iapi/services/cache/ClassSize.java has a static code block which calculates the size of a reference for the architecture. This code could be improved by adding garbage collection before measuring memory, to give a consistent reading.
Also, there have been suggestions that we use os.arch or sun.arch.data.model to make the measurement more reliable, especially on 64-bit machines.
Close private key stream after reading it MINA SSHD leaks the PEMReader's underlying FileInputStream when reading the private key. This unnecessarily consumes a file descriptor until the stream can (eventually) be GC'd by the JVM.
CJKTokenizer converts HALFWIDTH_AND_FULLWIDTH_FORMS wrongly CJKTokenizer has these lines:
if (ub == Character.UnicodeBlock.HALFWIDTH_AND_FULLWIDTH_FORMS) {
    /** convert HALFWIDTH_AND_FULLWIDTH_FORMS to BASIC_LATIN */
    int i = (int) c;
    i = i - 65248;
    c = (char) i;
}
This is wrong. Some characters in the block (e.g. U+FF68) have no BASIC_LATIN counterparts. Only 65281-65374 can be converted this way. The fix is:
int i = (int) c;
if (ub == Character.UnicodeBlock.HALFWIDTH_AND_FULLWIDTH_FORMS && i >= 65281 && i <= 65374) {
    /** convert HALFWIDTH_AND_FULLWIDTH_FORMS to BASIC_LATIN */
    c = (char) (i - 65248);
}
HiveServer can not define its port correctly HiveServer.java accepts one argument that stands for the port of this server, but I found that it cannot accept this argument. By digging into the source code, I found it may be caused by these lines of the main function:
if (args.length > 1) {
    port = Integer.getInteger(args[0]);
}
I think they should be:
if (args.length >= 1) {
    port = Integer.parseInt(args[0]);
}
The author may have had some different intention, I think.
UNION ALL does not allow different types in the same column
{code}
explain
INSERT OVERWRITE TABLE t
SELECT s.r, s.c, sum(s.v)
FROM (
  SELECT a.r AS r, a.c AS c, a.v AS v FROM t1 a
  UNION ALL
  SELECT b.r AS r, b.c AS c, 0 + b.v AS v FROM t2 b
) s
GROUP BY s.r, s.c;
{code}
Both a and b have 3 string columns: r, c, and v. It compiled successfully but failed at runtime. "Explain" shows that the plans for the 2 union-all operands have different output types that are converged to STRING, but there is no UDFToString inserted for "0 + b.v AS v", and as a result SerDe was failing because it expects a String but is passed a Double.
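For the MINA SSHD private-key leak reported above, the usual remedy is simply to close the reader (and with it the underlying FileInputStream) as soon as the key has been parsed, rather than waiting for finalization. A minimal sketch, assuming the BouncyCastle PEMReader API; the helper method is illustrative:
{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import org.bouncycastle.openssl.PEMReader;

public final class KeyLoader {
    // Reads one PEM object (e.g. a key pair) and always releases the file descriptor.
    public static Object readPrivateKey(File keyFile) throws IOException {
        PEMReader r = new PEMReader(new InputStreamReader(new FileInputStream(keyFile)));
        try {
            return r.readObject(); // parses the PEM-encoded key material
        } finally {
            r.close(); // closes the underlying stream immediately instead of waiting for GC
        }
    }
}
{code}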
Terminated exchanges are sometimes sent when handling an exception, leading to errors In the ConsumerProcessor, when the HTTP response is formed and sent back to the requesting party, the following error occurred:
{code}
SEVERE: Error processing exchange org.apache.servicemix.jbi.runtime.impl.InOutImpl@158aac4
java.lang.Exception: HTTP request has timed out
at org.apache.servicemix.http.processors.ConsumerProcessor.process(ConsumerProcessor.java:96)
at org.apache.servicemix.soap.SoapEndpoint.process(SoapEndpoint.java:368)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:621)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:592)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchangeInTx(AsyncBaseLifeCycle.java:477)
at org.apache.servicemix.common.AsyncBaseLifeCycle$2.run(AsyncBaseLifeCycle.java:347)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Mar 3, 2009 4:27:27 PM org.apache.servicemix.common.AsyncBaseLifeCycle processExchangeInTx
SEVERE: Error setting exchange status to ERROR
javax.jbi.messaging.MessagingException: Can not send a terminated exchange
at org.apache.servicemix.jbi.runtime.impl.MessageExchangeImpl.afterSend(MessageExchangeImpl.java:257)
at org.apache.servicemix.jbi.runtime.impl.DeliveryChannelImpl.send(DeliveryChannelImpl.java:176)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchangeInTx(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.AsyncBaseLifeCycle$2.run(AsyncBaseLifeCycle.java:347)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
{code}
This happens in the finally block of the process(HttpServletRequest request, HttpServletResponse response) method:
{code}
finally {
    exchange.setStatus(ExchangeStatus.DONE);
    channel.send(exchange);
}
{code}
And that causes a MessagingException in the MessageExchangeImpl:
{code}
void afterSend() throws MessagingException {
    if (previousStatus == ExchangeStatus.DONE || previousStatus == ExchangeStatus.ERROR) {
        throw new MessagingException("Can not send a terminated exchange");
    }
}
{code}
Validate the artifact fields when adding an archive to the repository Validate the group, artifact, version and type fields when adding an archive to the repository to prevent illegal characters that don't make sense in an artifact id. At the moment I'm planning to exclude ( ) < > , ; : \ / " ' and .. (a validation sketch follows at the end of this section).
The new attribute values are overwritten when restarting the DB pool connector After editing the values in the DB pool and then restarting it in the console, the new values are lost. When saving the new attribute values, such as the user name, the gbeandata in the configurationManager is not updated, so when restarting the connector, the gbinstance copies the old values from it. So the new persisted values do not take effect. Am I missing anything? Thanks for any comment!
Fail to create plugin via GEP Steps:
1. Define a G server runtime, and right-click to open it
2. Open the "plugin" page, click "convert app to plugin", choose any item you would like, and click next; all fields are empty, and the conversion fails.
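The artifact-field validation proposed above could look like the following sketch. The excluded characters are exactly those listed in the report; the class name, method, and exception choice are illustrative assumptions rather than the actual repository code:
{code}
import java.util.regex.Pattern;

public final class ArtifactFieldValidator {
    // Reject ( ) < > , ; : \ / " ' anywhere in the value, plus any ".." sequence.
    private static final Pattern ILLEGAL = Pattern.compile("[()<>,;:\\\\/\"']");

    public static void validate(String fieldName, String value) {
        if (value == null || value.length() == 0) {
            throw new IllegalArgumentException(fieldName + " must not be empty");
        }
        if (ILLEGAL.matcher(value).find() || value.contains("..")) {
            throw new IllegalArgumentException(fieldName
                    + " contains characters that are not allowed in an artifact id: " + value);
        }
    }
}
{code}
Each of the group, artifact, version, and type fields would be passed through validate() before the archive is accepted.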
Can't install plugin via GEP Steps: 1.Extract attached sample plugin file 2.In eclipse, right-click g server , choose "open" and click "convert app to plugin", choose sample plugin location as local repository and click next, but no plugin is listed to install, so this also result in summary page can't display for server plugin manger But i can install it via admin console. absolute path for sharedLib is treated as a relative path http://www.nabble.com/Re%3A-Custom-path-for-solr-lib-and-data-folder-p22475244.html the build is broken I checked the trunk tonight and it does not build. I have not built it since a year, so I did a fresh install following the docs at http://incubator.apache.org/shindig/#tab-building. P@ ---- INFO] Compiling 118 source files to /Users/chanezon/code/shin2/java/common/target/classes [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure /Users/chanezon/code/shin2/java/common/src/main/java/org/apache/shindig/protocol/conversion/BeanJsonConverter.java:[105,26] type parameters of <T>T cannot be determined; no unique maximal instance exists for type variable T with upper bounds T,java.lang.Object /Users/chanezon/code/shin2/java/common/src/main/java/org/apache/shindig/protocol/conversion/BeanJsonConverter.java:[105,26] type parameters of <T>T cannot be determined; no unique maximal instance exists for type variable T with upper bounds T,java.lang.Object [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.BuildFailureException: Compilation failure /Users/chanezon/code/shin2/java/common/src/main/java/org/apache/shindig/protocol/conversion/BeanJsonConverter.java:[105,26] type parameters of <T>T cannot be determined; no unique maximal instance exists for type variable T with upper bounds T,java.lang.Object at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:580) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:500) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:479) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:331) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:292) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129) at org.apache.maven.cli.MavenCli.main(MavenCli.java:301) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation failure 
/Users/chanezon/code/shin2/java/common/src/main/java/org/apache/shindig/protocol/conversion/BeanJsonConverter.java:[105,26] type parameters of <T>T cannot be determined; no unique maximal instance exists for type variable T with upper bounds T,java.lang.Object
at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:516)
at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114)
at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:453)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:559)
... 16 more
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14 seconds
[INFO] Finished at: Wed Mar 18 00:41:21 PDT 2009
[INFO] Final Memory: 27M/254M
[INFO] ------------------------------------------------------------------------
rand() gets precomputed in the compilation phase SELECT * FROM t WHERE rand() < 0.01; Hive will say "No need to submit job", because the condition evaluates to false. The rand() function is special in the sense that it evaluates to a different value every time. We should disallow computing the value in the compilation phase. One way to do that is to add an annotation in UDFRand and check it in the compilation phase.
[Hive] wrong order in explain plan In case of multiple aggregations, the explain plan might be wrong - the order of aggregations - since QBParseInfo maintains the information in a HashMap, which does not guarantee that the results are returned in order.
Update the rails database and file loader code from the PHP version The rails db loader code is still using 4x the scale and is missing the recent enhancements made by Akara to not run out of memory etc. Ditto for the file loader. This is a critical issue for the first release.
Fix path bug with manual installation The manual installation does not work on Linux, as a "\" is hardcoded into the path. A quick fix is to change the installation line in index.jsp to this: Install.install(request.getRealPath("WEB-INF/install") + "/", request.getParameter("rootPartition"), true); But the code should be made to work regardless of OS.
The new map/reduce api doesn't support combiners Currently, combiners are only called if they are defined using the old deprecated api.
Modal dialogs hang IE7 if open in tabs is enabled IE seems to either hang or respond slowly if I choose to open popups in a new tab instead of a new window. IE7 Settings: Tools -> Internet Options -> General (Tab) -> Tabs Settings -> select 'Always open popups in new Tab'. The problem is related to the use of the setCapture method, which captures events not only on the document body but on the entire browser UI.
A Connection close could cause an SSLIOSession to be incorrectly considered as closed in some environments This could be seen on the SunOS Hudson machine, and also if a breakpoint is placed at the line receiveEncryptedData() in SSLIOSession:isAppInputReady() to delay its execution slightly. It seems like the following lines are the cause of the problem:
int bytesRead = receiveEncryptedData();
if (bytesRead == -1) {
    this.status = CLOSED;
}
The channel not having any more bytes does not indicate a close, since there could still be unencrypted data. Just removing the lines that mark the session as closed seems to fix the issue - but this should be reviewed.
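Following the reasoning in the SSLIOSession report above, a sketch of a guarded version of that snippet. The hasBufferedInput() check is a hypothetical stand-in for "no decrypted or still-encrypted data remains to be delivered"; the actual reviewed fix may differ:
{code}
int bytesRead = receiveEncryptedData();
// A -1 read only means the underlying channel hit end-of-stream; the SSL
// engine may still hold bytes that have not been unwrapped and delivered to
// the application. Only mark the session CLOSED once nothing is pending.
if (bytesRead == -1 && !hasBufferedInput()) {
    this.status = CLOSED;
}
{code}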
refactor JBIConduitOutputStream exception handling Currently it is:
{code}
} catch (IOException e) {
    throw e;
} catch (Exception e) {
    e.printStackTrace();
    new IOException(e.toString());
}
{code}
should be:
{code}
} catch (IOException e) {
    throw e;
} catch (Exception e) {
    throw new RuntimeException(e.toString());
}
{code}
so that we can get the correct exception back on the JBI client side and remove the noisy exception stacktrace.
Negative-id arguments and negative-id exceptions use the same counter If a function has negative (implicit) id fields in both its argument list and exception list, like
{code}
i32 ret_neg(i32 arg1, i32 arg2) throws (MixedEx1 ex1, MixedEx2 ex2);
{code}
then the same counter is used to assign field ids to both the arguments and exceptions. This is bad because the addition of a new argument will renumber the exceptions, breaking wire compatibility. The soon-to-be-uploaded patch fixes this by resetting the counter at the start of any field list. This will break wire-compatibility one time (and only with functions that have both negative-id arguments and negative-id exceptions, and only in the exceptional case), but will prevent future sudden breakages, so I think it is worth it.
binary keys broken in trunk (again). Binary keys, specifically ones where the first byte of the key is nul '\0', don't work:
- Splits happen
- The logfile indicates everything is normal
But the .META. doesn't list all the regions. It only lists the 'basic' regions: 'table,,1234'. The other regions with the binary keys in the middle just don't seem to be in .META....
ZooKeeper config parsing can break HBase startup http://hudson.zones.apache.org/hudson/job/HBase-Patch/536/changes Noticed by Ryan: " [junit] 2009-03-19 01:56:17,491 ERROR [main] zookeeper.ZooKeeperWrapper(97): Failed to create ZooKeeper object: java.net.UnknownHostException: ${hbase.master.hostname}" Something is going wrong with the parsing/variable substitution of the new zoo.cfg.
issues w/ StAX processing in JMSBindingProcessor I'm looking at JMSBindingProcessor.read(), and the way the read() method relies on itself to advance the cursor past its children's END_ELEMENT seems to be a problem. The code is:
while (!endFound) {
    int fg = reader.next();
    switch (fg) {
    case START_ELEMENT:
        String elementName = reader.getName().getLocalPart();
        if ("destination".equals(elementName)) {
            parseDestination(reader, jmsBinding);
        } else if ("connectionFactory".equals(elementName)) {
            parseConnectionFactory(reader, jmsBinding);
        ....
        } else {
        ....
        }
        reader.next(); // PROBLEM!
        break;
For child element 'SubscriptionHeaders', I could write that as either: <SubscriptionHeaders.../> OR <SubscriptionHeaders...>....</SubscriptionHeaders> OR <SubscriptionHeaders..... > </SubscriptionHeaders> The first two shouldn't be a problem: I start with a START_ELEMENT, and then the next() back in read() advances me over the END_ELEMENT event. However, the third is a problem, since there is a CHARACTERS event in the middle, in which case the next() back in read() only takes me to the SubscriptionHeaders END_ELEMENT, though the code at this point can only deal with the 'binding.jms' END_ELEMENT (and since it doesn't get this, we get an error). In general, it seems that to deal with this kind of thing, helper methods that parse a child element should themselves be responsible for advancing the cursor to their own child element's END_ELEMENT.
For what it's worth, I'm not sure how parseDestination() works either; it seems like parseDestinationProperties() is going to advance the cursor too far, but maybe I don't fully understand this scenario. If I can get my build working, I'll post a test that recreates the problem.

DDMReader readBytes ArrayIndexOutOfBoundsException DDMReader.readBytes(int length) checks the length vs DssConstants.MAX_DSS_LENGTH, but ignores the fact that the buffer position "pos" might not be 0. If pos is non-zero then pos + length can be larger than the size of "buffer", causing an ArrayIndexOutOfBoundsException. For me this happened when sending a BLOB that was 32766 bytes long. The value of pos was 2 in that method.

SyncLogs thread in Child.java would update wrong file for a cleanup attempt, in some cases. This happens in the following scenario: the jvm is launched for a cleanup attempt and getTask has not returned yet, so the isCleanup value is not obtained yet. The SyncLogs thread would then do a syncLogs with the wrong isCleanup value (i.e. with the wrong index file).

The bundle dom4j has a dependency on org.xmlpull.v1 which is not covered by existing bundles/specifications org.xmlpull.v1 has been marked as optional in the Import-Package statement <servicemix.osgi.export.pkg> org.dom4j*, org.xmlpull.v1;resolution:=optional </servicemix.osgi.export.pkg> of the pom.xml

ReliabilityTest does not test lostTrackers, sometimes. ReliabilityTest does not lose trackers if the tasktracker pid could not be obtained. If the command for the TaskTracker process is large, doing 'ps' and 'grep' does not return the pid of the process, so it cannot be suspended.

Document job setup/cleanup tasks and task cleanup tasks in mapred tutorial Document the fact that job setup/cleanup and task cleanup happen as separate tasks and occupy map/reduce slots, in the OutputCommitter section.

Miscellaneous binding.jms fixes I am attaching a patch with miscellaneous JMS fixes described below.
JMSBindingProcessor - don't throw JMSBindingException for validation errors, since they are already reported via the monitor interface - remove validationMessage member variable, which is not thread-safe
JMSBindingContext - add methods to close request and response session, so that the session variable gets cleaned up
JMSResourceFactory/JMSResourceFactoryImpl - add query method to find out whether connections must be closed after use, for environments where connections cannot be held open indefinitely (e.g. connection sharing environment)
JMSBindingServiceBindingProvider - don't set a default destination if an activationSpec is used, since the activation spec provides a destination
RRBJMSBindingInvoker - defer lookup of statically-configured callback response destination until needed - close request connection after use if resource factory requires it
TransportReferenceInterceptor/TransportServiceInterceptor - use binding context to close response session - close response connection after use if resource factory requires it

Restructure osgi tests with much cleaner osgi environment Create a dependency module which contains all OSGi tests' dependencies, so the OSGi tests' dependency jars won't be included in the traditional classpath. Having OSGi dependency jars in the classpath causes many testing issues now. The purpose of this cleanup is to make OSGi tests run in a relatively pure OSGi environment. I also moved the jms tests from tandoori to apache.

Jsp support is not enabled. Is org.ops4j.pax.web.jsp bundle installed?
Hi, when I deploy a sample.war file for the first time in the deploy folder of servicemix 4 (4.0-m2 snapshot created 13/02/2009), servicemix claims that Jsp support is not enabled, and the jsp or servlet are not found (error 404) (http://localhost:8080/sample/hello.jsp & http://localhost:8080/sample/hello). Remark: I can see the page http://localhost:8080/sample/index.html
14:43:32,870 | INFO | Timer-1 | ReplaceableService | ssbox.tracker.ReplaceableService 80 | Creating replaceable service for [interface org.osgi.service.http.HttpService]
14:43:32,870 | INFO | Timer-1 | ServiceCollection | issbox.tracker.ServiceCollection 88 | Creating service collection for [interface org.osgi.service.http.HttpService]
14:43:32,870 | INFO | Timer-1 | ServiceCollection | racker.ServiceCollection$Tracker 173 | Added service with reference [[org.osgi.service.http.HttpService, org.ops4j.pax.web.service.WebContainer]]
14:43:32,870 | INFO | Timer-1 | HttpServiceFactoryImpl | .internal.HttpServiceFactoryImpl 33 | binding bundle: [file__C__Workspace_SMX4_apache-servicemix-kernel-1.1.0-SNAPSHOT_deploy_sample.war [50]] to http service
14:43:32,870 | INFO | Timer-1 | HttpServiceStarted | vice.internal.HttpServiceStarted 61 | Creating http service for: file__C__Workspace_SMX4_apache-servicemix-kernel-1.1.0-SNAPSHOT_deploy_sample.war [50]
14:43:32,885 | INFO | Timer-1 | ReplaceableService | ssbox.tracker.ReplaceableService 109 | Service changed [null] -> [org.ops4j.pax.web.service.internal.HttpServiceProxy@1bc93a7]
14:43:32,885 | INFO | Timer-1 | HttpServiceProxy | ervice.internal.HttpServiceProxy 74 | Creating adefault context
14:43:32,901 | INFO | Timer-1 | HttpServiceProxy | ervice.internal.HttpServiceProxy 156 | Setting context paramters [{webapp.context=file__C__Workspace_SMX4_apache-servicemix-kernel-1.1.0-SNAPSHOT_deploy_sample.war}]
14:43:32,917 | INFO | Timer-1 | HttpServiceProxy | ervice.internal.HttpServiceProxy 62 | Registering resource: [/] ->
14:43:33,011 | INFO | Timer-1 | JettyServerWrapper | vice.internal.JettyServerWrapper 109 | added servlet context: HttpServiceContext{httpContext=org.ops4j.pax.web.extender.war.internal.WebAppHttpContext@2b3574}
14:43:33,073 | INFO | Timer-1 | HttpServiceProxy | ervice.internal.HttpServiceProxy 166 | Registering jsps
14:43:33,073 | WARN | Timer-1 | RegisterWebAppVisitorWC | internal.RegisterWebAppVisitorWC 146 | Jsp support is not enabled. Is org.ops4j.pax.web.jsp bundle installed?
14:43:33,089 | INFO | Timer-1 | HttpServiceProxy | ervice.internal.HttpServiceProxy 96 | Registering servlet [mypackage.Hello@3c33d3]
14:43:33,105 | INFO | Timer-1 | FileMonitor | x.kernel.filemonitor.FileMonitor 544 | Started: file__C__Workspace_SMX4_apache-servicemix-kernel-1.1.0-SNAPSHOT_deploy_sample.war [50]
osgi list
[ 43] [Active ] [ ] [ 60] Apache ServiceMix Bundles: jetty-6.1.14 (6.1.14.SNAPSHOT)
[ 44] [Active ] [ ] [ 60] OPS4J Pax Web - Web Container (0.4.1)
[ 45] [Active ] [ ] [ 60] OPS4J Pax Web - Jsp Support (0.4.1)
[ 46] [Active ] [ ] [ 60] OPS4J Pax Web Extender - WAR (0.3.0)
[ 47] [Active ] [ ] [ 60] OPS4J Pax Web Extender - Whiteboard (0.3.0)
[ 48] [Active ] [ ] [ 60] OPS4J Pax Url - war:, war-i: (0.3.2)
[ 49] [Active ] [Started] [ 60] Apache ServiceMix WAR Deployer (4.0.0.m2-SNAPSHOT)
[ 50] [Active ] [ ] [ 60] file__C__Workspace_SMX4_apache-servicemix-kernel-1.1.0-SNAPSHOT_deploy_sample.war (0)

Deadlock triggered by FairScheduler scheduler's servlet due to changes from HADOOP-5214.
When creating a new instance through admin/create, the list of default features repositories is wrong

Renaming of Job history file is incorrect if Jobtracker is restarted multiple times After the 1st JT restart the job history file name was jobfilename.recover After the 2nd JT restart the job history file name was jobfilename.recover.recover After the 3rd JT restart the job history file name was jobfilename.recover.recover.recover

Email attachment names cannot be retrieved

JMS Resources - Destination stats not supported with new AMQ5 integration JMS Resource portlet stats (Consumer Count and Queue Size) no longer work after moving to AMQ5 and throw a console exception - {noformat} 2008-12-09 15:15:51,218 ERROR [AmqJMSMessageHelper] Failed to get ActiveMQ stats javax.management.RuntimeMBeanException: RuntimeException thrown in operation addQueue at com.sun.jmx.mbeanserver.StandardMetaDataImpl.wrapRuntimeException(StandardMetaDataImpl.java:994) at com.sun.jmx.mbeanserver.StandardMetaDataImpl.invoke(StandardMetaDataImpl.java:430) at com.sun.jmx.mbeanserver.MetaDataImpl.invoke(MetaDataImpl.java:220) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:815) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:784) at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:201) at $Proxy36.addQueue(Unknown Source) at org.apache.geronimo.console.jmsmanager.helper.AmqJMSMessageHelper.getDestinationStatistics(AmqJMSMessageHelper.java:105) at org.apache.geronimo.console.jmsmanager.wizard.ListScreenHandler.populateExistingList(ListScreenHandler.java:191) at org.apache.geronimo.console.jmsmanager.wizard.ListScreenHandler.renderView(ListScreenHandler.java:76) at org.apache.geronimo.console.MultiPagePortlet.doView(MultiPagePortlet.java:144) at javax.portlet.GenericPortlet.doDispatch(GenericPortlet.java:247) at javax.portlet.GenericPortlet.render(GenericPortlet.java:175) at org.apache.pluto.core.PortletServlet.dispatch(PortletServlet.java:208) at org.apache.pluto.core.PortletServlet.doGet(PortletServlet.java:139) at javax.servlet.http.HttpServlet.service(HttpServlet.java:693) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:630) at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:535) at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:472) at org.apache.pluto.core.DefaultPortletInvokerService.invoke(DefaultPortletInvokerService.java:167) at org.apache.pluto.core.DefaultPortletInvokerService.render(DefaultPortletInvokerService.java:101) at org.apache.pluto.core.PortletContainerImpl.doRender(PortletContainerImpl.java:173) at org.apache.pluto.driver.tags.PortletTag.doStartTag(PortletTag.java:152) at jsp.WEB_002dINF.themes.portlet_002dskin_jsp._jspService(portlet_002dskin_jsp.java:87) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at
org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:630) at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:535) at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:472) at org.apache.jasper.runtime.JspRuntimeLibrary.include(JspRuntimeLibrary.java:968) at jsp.WEB_002dINF.themes.default_002dtheme_jsp._jspx_meth_c_005fforEach_005f0(default_002dtheme_jsp.java:196) at jsp.WEB_002dINF.themes.default_002dtheme_jsp._jspService(default_002dtheme_jsp.java:101) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:630) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at org.apache.pluto.driver.PortalDriverServlet.doGet(PortalDriverServlet.java:151) at javax.servlet.http.HttpServlet.service(HttpServlet.java:693) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.geronimo.tomcat.valve.DefaultSubjectValve.invoke(DefaultSubjectValve.java:56) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525) at org.apache.geronimo.tomcat.GeronimoStandardContext$SystemMethodValve.invoke(GeronimoStandardContext.java:406) at org.apache.geronimo.tomcat.valve.GeronimoBeforeAfterValve.invoke(GeronimoBeforeAfterValve.java:47) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:613) Caused by: java.lang.NullPointerException at org.apache.activemq.command.ActiveMQDestination.setPhysicalName(ActiveMQDestination.java:208) at org.apache.activemq.command.ActiveMQDestination.<init>(ActiveMQDestination.java:77) at org.apache.activemq.command.ActiveMQQueue.<init>(ActiveMQQueue.java:39) at org.apache.activemq.broker.jmx.BrokerView.addQueue(BrokerView.java:231) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at 
java.lang.reflect.Method.invoke(Method.java:585) at com.sun.jmx.mbeanserver.StandardMetaDataImpl.invoke(StandardMetaDataImpl.java:414) ... 63 more {noformat}

include junit JAR in source dist We recently added the junit JAR under "lib" so that we can checkout & run tests, but we fail to include it in the source dist.

AssertFailure when selecting rows from a table with CHARACTER and VARCHAR columns When running a complex query on this table: {code} Create table DEMO.TEST ( CHR CHARACTER(26) , VCHR VARCHAR(25) ) {code} then I get this exception: 'AssertFailure: ASSERT FAILED col1.getClass() (class ...SQLChar) expected to be the same as col2.getClass() (class ....SQLVarchar)' was thrown while evaluating an expression.

iPOJO analyzes already installed bundles by holding a lock When iPOJO starts, it analyzes already installed bundles. However this is done by holding the lock on the Extender object, so processed bundles are initialized while holding the lock. This can lead to a deadlock in some cases (for example when iPOJO wants to register a service and the framework thread is waiting on the Extender to add another bundle in the iPOJO management thread).

missing default values for vmtype table Add default values for vmtype table: INSERT INTO `vmtype` (id, `name`) VALUES (1, 'vmware'), (2, 'xen'), (3, 'vmwareGSX'), (4, 'vmwwarefreeserver'), (5, 'vmwwareESX3');

some license headers and xml declarations missing after maven-release-plugin In r748808 "[maven-release-plugin] prepare release ..." the xml declaration and license header were removed.

Sporadic failures in TestRemoteSearchable.cs NUnit test TestRemoteSearchable.cs fails with SocketException. Lucene.Net.Search.TestRemoteSearchable.TestPhraseQuery: System.Net.Sockets.SocketException : Only one usage of each socket address (protocol/network address/port) is normally permitted at System.Runtime.Remoting.Channels.Http.HttpServerChannel.StartListening(Object data) at System.Runtime.Remoting.Channels.Http.HttpServerChannel.SetupChannel() at System.Runtime.Remoting.Channels.Http.HttpServerChannel..ctor(Int32 port) at System.Runtime.Remoting.Channels.Http.HttpChannel..ctor(Int32 port) at Lucene.Net.Search.TestRemoteSearchable.SetUp() in D:\svn_lucene\src\Test\Search\TestRemoteSearchable.cs:line 47 tjk :)

servicemix-camel-service-unit archetype depends on non-existent servicemix-camel jbi-component The servicemix-camel archetype uses servicemix-version for the servicemix-camel component and not the component version.

Stest code invalid ignore tag in test case ASM_0024 This item is illegally placed in test case ASM_0024. @Ignore("TUSCANY-2925")

FileSystem.CACHE should be ref-counted FileSystem.CACHE is not ref-counted, and could lead to resource leakage.

Upload attachment form in futon does not work Using the 'upload attachment' form in futon, the server returns 405 and the upload form hangs with a full progress bar.

File loader called incorrectly from rails driver. No way to specify load dir The file loader call from the driver doesn't work. Besides, the form doesn't allow the user to specify the filestore directory.

[hive] lot of mappers due to a user error while specifying the partitioning column A common scenario: the table is partitioned on the 'ds' column, which is of type 'string' with a certain format 'yyyy-mm-dd'. However, the user forgets to add quotes while specifying the query: select ... from T where ds = 2009-02-02 2009-02-02 is a valid integer expression. So, partition pruning makes all partitions unknown, since the 2009-02-02 to double conversion is null.
If all partitions are unknown, in strict mode, we should throw an error.

Error message: "impossible to get artifacts when data has not been loaded", on certain modules only I found a similar post here: http://www.mail-archive.com/ivy-user@ant.apache.org/msg01766.html Everything worked fine with 2.0.0 beta.
> > It's just a simple test and dom4j seems to cause the problem with > standard ivysettings :
> <dependencies>
> <dependency org="dom4j" name="dom4j" rev="1.6.1"/>
> <dependency org="log4j" name="log4j" rev="1.2.9"/>
> <dependency org="junit" name="junit" rev="4.5"/>
> </dependencies>
> </ivy-module>
Removing dom4j makes it work again. So I don't know, is it dom4j or ivy causing the problem?
> output:
> > init:
> deps-jar:
> ivy-retrieve:
> No ivy:settings found for the default reference 'ivy.instance'. A > default instance will be used
> no settings file found, using default...
> :: loading settings :: url = > jar:file:/home/kostja/VZG/soa-workarea-ref/tools/ant/lib/ivy-2.0.0-rc2.jar!/org/apache/ivy/core/settings/ivysettings.xml
> :: resolving dependencies :: gbv.de#test;working@myhome
> confs: [default]
> found dom4j#dom4j;1.6.1 in public
> found xml-apis#xml-apis;1.0.b2 in public
> found jaxme#jaxme-api;0.3 in public
> found jaxen#jaxen;1.1-beta-6 in public
> found jdom#jdom;1.0 in public
> found xerces#xmlParserAPIs;2.6.2 in public
> found xerces#xercesImpl;2.6.2 in public
> found xom#xom;1.0b3 in public
> found com.ibm.icu#icu4j;2.6.1 in public
> found org.ccil.cowan.tagsoup#tagsoup;0.9.7 in public
> found msv#xsdlib;20030807 in public
> found msv#relaxngDatatype;20030807 in public
> found pull-parser#pull-parser;2 in public
> found xpp3#xpp3;1.1.3.3 in public
> found stax#stax-api;1.0 in public
> found log4j#log4j;1.2.9 in public
> found junit#junit;4.5 in public
> > :: problems summary ::
> :::: ERRORS
> impossible to get artifacts when data has not been loaded.
> IvyNode = xalan#xalan;2.5.1
> :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> /home/kostja/VZG/soa-workarea-ref/projects/Test/build.xml:15: impossible > to resolve dependencies:
> java.lang.IllegalStateException: impossible to get artifacts > when data has not been loaded. IvyNode = xalan#xalan;2.5.1
> BUILD FAILED (total time: 0 seconds)

javadoc warning in JMXGet {noformat} [javadoc] .\src\hdfs\org\apache\hadoop\hdfs\tools\JMXGet.java:45: warning: sun.management.ConnectorAddressLink is Sun proprietary API and may be removed in a future release [javadoc] import sun.management.ConnectorAddressLink; [javadoc] ^ [javadoc] Standard Doclet version 1.6.0_07 [javadoc] Building tree for all the packages and classes... [javadoc] .\src\hdfs\org\apache\hadoop\hdfs\tools\JMXGet.java:139: warning - @param argument "args" is not a parameter name. {noformat}

AbstractIoSession sometimes throws java.lang.Error in toString() Sometimes, e.g. just after a session has been closed due to an IOException of some kind, toString() throws the error below due to some NIO-internal error. This should probably be caught by AbstractIoSession.toString() to avoid surfacing it to the user.
java.lang.Error: java.net.SocketException: Socket operation on nonsocket: getsockname at sun.nio.ch.Net.localAddress(Net.java:125) at sun.nio.ch.SocketChannelImpl.localAddress(SocketChannelImpl.java:430) at sun.nio.ch.SocketAdaptor.getLocalAddress(SocketAdaptor.java:147) at java.net.Socket.getLocalSocketAddress(Socket.java:697) at org.apache.mina.transport.socket.nio.NioSocketSession.getLocalAddress(NioSocketSession.java:132) at org.apache.mina.transport.socket.nio.NioSocketSession.getLocalAddress(NioSocketSession.java:47) at org.apache.mina.core.session.AbstractIoSession.toString(AbstractIoSession.java:1125)

Fix TestInfoServers TestInfoServers is broken in Hudson.

Mina xbean module requires mina-core for compile scope The mina-integration-xbean module has mina-core but the scope is set to test even though there are classes required for compilation. This happens to work with Maven 2.0.x but I think it is a coincidence. This will be broken with the soon-to-be-released Maven 2.1.

BinaryProtocolAccelerated and BinaryProtocol don't produce the same bytes when writes aren't strict The Ruby and C versions do slightly different things: 1) 'Thrift::BinaryProtocolAccelerated should write the message header without version when writes are not strict' FAILED expected: "\000\000\000\vtestMessage\001\000\000\000\021", got: "\000\000\000\vtestMessage\003\000\000\000\021" (using ==)

Some of the C# runtime library files say Copyright (C) 2007 imeem, inc. <http://www.imeem.com> All rights reserved. Some of the C# library files contain Thrift software license terms. E.g.: http://svn.apache.org/repos/asf/incubator/thrift/trunk/lib/csharp/src/TProcessor.cs Others contain a closed license copyright notice. E.g.: http://svn.apache.org/repos/asf/incubator/thrift/trunk/lib/csharp/src/Transport/TTransport.cs // Copyright (C) 2007 imeem, inc. <http://www.imeem.com> // All rights reserved. Here's the list of files that contain the closed licensing terms: lib/csharp/src/Transport/TServerSocket.cs lib/csharp/src/Transport/TServerTransport.cs lib/csharp/src/Transport/TSocket.cs lib/csharp/src/Transport/TStreamTransport.cs lib/csharp/src/Transport/TTransport.cs lib/csharp/src/Transport/TTransportException.cs lib/csharp/src/Transport/TTransportFactory.cs

The CXF NMR transport does not use the given URI to identify the NMR endpoint It always uses the QName of the service, which leads to the URI not being used at all (which is somewhat confusing). It should be changed.

HTTPCORE-193 could cause Synapse to not process an SSL response, when the connection is closed - in some environments Refer HTTPCORE-193

Make HTML (doc) generator generate links to parent services for extended services Trivial fixes to the html generator: http://github.com/vicaya/thrift/commit/7d31d4c03db52816c2521cf37f51e54831cd2b5b but it makes services that make use of service inheritance much easier to navigate.

stats.jsp XML escaping stats.jsp gave this error: Line Number 1327, Column 48: <stat name="item_attrFacet_Size_&_Shape" stat names are not XML escaped.

PKG_CHECK_MODULES() in configure.ac assumes pkg-config is installed In configure.ac, there is a use of PKG_CHECK_MODULES() to look for Mono. This assumes that pkg-config is installed, but that is not in the list of dependencies. So...
one of the following must happen:
* list pkg-config as a dependency
* rewrite configure to use $PATH/pkg-config directly, and fall back to other mechanisms to locate mono (or just flag it as "no" right away)
* rewrite configure to only use "legacy" lookup for mono
For now, I just commented the stuff out of configure.ac

IPC client drops interrupted exceptions The IPC client needlessly drops InterruptedException.

HStoreKey: Wrong comparator logic While fixing a TestCompaction JUnit failure, an error was found in the removal of row Cells. The reason was an error in the comparator logic of HStoreKey. Fixing HStoreKey also fixed row Cell removal and TestCompaction.

TestTable.testCreateTable broken Test is broken, we seem to be able to create the same table 10x over. ouch!

No pagination in Futon for reduce views Futon doesn't support pagination of reduce views at the moment, which can be confusing for new users. This is due to the difficulty of efficiently working out the total number of rows available from a reduce view. I propose displaying something like "Showing x-y rows of unknown" at the bottom, and showing a next/previous link if there are more results to be displayed. An efficient way to calculate whether there are next/previous results would be to fetch 1 + rows_per_page + 1 rows (with appropriate offset parameter etc.) I did start working on a patch - will post it here when it is done.

estimatedShipDate and estimatedDeliveryDate are not getting updated while editing order items for Purchase order estimatedShipDate and estimatedDeliveryDate are not getting updated while editing order items for a Purchase order when we don't provide them for all order items. Also the code snippet for this needs to be updated to be more specific.

[classlib][archive] Manifest file with empty line provokes IOException I cannot start some of my apps, because their jar files contain manifests with empty lines, provoking an IOException on Harmony. The situation can be reproduced by creating a manifest file with an empty line and invoking new Manifest(InputStream). Manifest files which provoke this will be attached. Test case output: $ /cygdrive/c/Harmony_to_run/trunk/working_vm/build/win_ia32_msvc_release/deploy/jdk/jre/bin/java Test MANIFEST2.MF Uncaught exception in main: java.io.IOException: Invalid attribute at java.util.jar.InitManifest.addAttribute(InitManifest.java:282) at java.util.jar.InitManifest.<init>(InitManifest.java:71) at java.util.jar.Manifest.read(Manifest.java:173) at java.util.jar.Manifest.<init>(Manifest.java:76) at Test.main(Test.java:8) $ /cygdrive/w/UBS/Builds/jdk1.6.0_win32/bin/java Test MANIFEST2.MF end Test case is: import java.io.FileInputStream; import java.util.jar.Manifest; public class Test { public static void main(String[] args) throws Exception { String fileName = args[0]; FileInputStream fs = new FileInputStream(fileName); Manifest m = new Manifest(fs); fs.close(); System.out.println("end"); } }

syntax error in benchmark.rb lib/rb/benchmark % ruby benchmark.rb benchmark/benchmark.rb:217: syntax error, unexpected ')' ...= labels_and_values.map { |(l,)| Array === l ? l.first : l } ...

$PREFIX/etc/default/couchdb not installed /usr/local/etc/default/couchdb isn't installed on a fresh checkout of trunk under Ubuntu 8.10. As a result, the default init script creates couchdb.stdout and couchdb.stderr files in the current directory when run.

MakeRequestHandler does not use REFRESH param to affect internal caching of request

POSIX shell incompatibilities > My local "checkbashisms" doesn't seem to complain at all.
The [ ... ] syntax is just a shortcut for "test" and I would prefer to avoid it if possible. Could you check for me that using "test expr" wouldn't work in its place. No, "test expr" doesn't work. After test you can only have arguments to test, not shell expressions. It is possible of course to do: if test `echo 2> /dev/null >> $PID_FILE; echo $?` -gt 0; then For proof, see the man page: http://docs.sun.com/app/docs/doc/816-5165/test-1?l=en&q=man&a=view > This seems reasonable, though checkbashisms doesn't report anything. > > I am wondering if your Solaris shell is POSIXly correct. Could you provide me with a pointer to its manual, please? It doesn't get more POSIX than Solaris :). $ /bin/sh $ echo $(echo yes) syntax error: `(' unexpected Here is the man page. Note that the "Command substitution" section doesn't mention anything about the $() syntax. http://docs.sun.com/app/docs/doc/816-5165/sh-1?l=en&q=man&a=view You may want to have a look at our collaborative notes over there: - http://wiki.joyent.com/accelerators:setup-couchdb - http://discuss.joyent.com/viewtopic.php?id=24108

TermSpans skipTo() doesn't always move forwards In TermSpans (or the anonymous Spans class returned by SpanTermQuery, depending on the version), the skipTo() method is improperly implemented if the target doc is less than or equal to the current doc: public boolean skipTo(int target) throws IOException { // are we already at the correct position? if (doc >= target) { return true; } ... This violates the correct behavior (as described in the Spans interface documentation) that skipTo() should always move forwards; in other words, the correct implementation would be: if (doc >= target) { return next(); } This bug causes particular problems if one wants to use the payloads feature - this is because if one loads a payload, then performs a skipTo() to the same document, then tries to load the "next" payload, the spans hasn't changed position and it attempts to load the same payload again (which is an error).

script error when clicking the update button in portlet console ---server---Server Logs -- Log Manager reproduce steps: 1. Use IE to log in to the console. 2. Click the update button in portlet console ---> server ---> Server Logs --> Log Manager expected result: no error actual result: there's a script error "logFile is not defined"

TestMapReduceLocal is missing a close() that is causing it to fail while running the test on NFS The readFile method in this test is not closing the file after it is done reading. This leaves some lingering .nfs* files that prevent the directory from being deleted properly, causing the second program in this test to fail.

DIH: java.lang.IndexOutOfBoundsException with useSolrAddSchema in XPathEntityProcessor Index checks are not done properly in XPathEntityProcessor when useSolrAddSchema is used {code} Caused by: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3 at java.util.ArrayList.RangeCheck(ArrayList.java:546) at java.util.ArrayList.get(ArrayList.java:321) at org.apache.solr.handler.dataimport.XPathEntityProcessor.readRow(XPathEntityProcessor.java:266) at org.apache.solr.handler.dataimport.XPathEntityProcessor.access$100(XPathEntityProcessor.java:53) at {code} mail : http://markmail.org/message/zd7whumzxvy3b2mx

Implement getOperationalInfo method The getOperationalInfo method of the inquiry API needs to be implemented.
BlockByteArrayInputStream read method is not thread safe The read method can access an invalid pointer depending on its state when the close method is called from another thread. NPE in HStoreScanner.updateReaders 2009-01-01 23:55:41,629 FATAL org.apache.hadoop.hbase.regionserver.MemcacheFlusher: Replay of hlog required. Forcing server shutdown org.apache.hadoop.hbase.DroppedSnapshotException: region: content,cff13605e2ea6ce0b221ac864687bf08,1230777531253 at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:880) at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:773) at org.apache.hadoop.hbase.regionserver.MemcacheFlusher.flushRegion(MemcacheFlusher.java:227) at org.apache.hadoop.hbase.regionserver.MemcacheFlusher.run(MemcacheFlusher.java:137) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HStoreScanner.updateReaders(HStoreScanner.java:322) at org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:737) at org.apache.hadoop.hbase.regionserver.HStore.updateReaders(HStore.java:725) at org.apache.hadoop.hbase.regionserver.HStore.internalFlushCache(HStore.java:694) at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:630) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:865) ... 3 more Cassandra silently loses data when a single row gets large When you insert a large number of columns in a single row, Cassandra silently loses some of these inserts. This does not happen until the cumulative size of the columns in a single row exceeds several megabytes. Say each value is 1MB large, insert("row", "col0", value, timestamp) insert("row", "col1", value, timestamp) insert("row", "col2", value, timestamp) ... ... insert("row", "col100", value, timestamp) Running: get_column("row", "col0") get_column("row", "col1") ... .. get_column("row", "col100") The sequence of get_columns will fail at some point before 100. This was a problem with the old code in code.google also. I will attach a small program that will help you reproduce this. 1. This only happens when the cumulative size of the row exceeds several megabytes. 2. In fact, the single row should be large enough to trigger an SSTable flush to trigger this error. 3. No OutOfMemory errors are thrown, there is nothing relevant in the logs. Add another Export to the XMLBeans bundle I use the xmlbeans bundle provided here: http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.xmlbeans/2.4.0_1/org.apache.servicemix.bundles.xmlbeans-2.4.0_1.pom Of xmlbeans, I use org.apache.xmlbeans.impl.xb.xsdschema.SchemaDocument to parse an xsd file and read a bunch of things from that. I added the imports org.apache.xmlbeans and org.apache.xmlbeans.impl.xb.xsdschema and when I start the bundle, I get the following exception: ---------- Caused by: java.lang.ExceptionInInitializerError at org.apache.xmlbeans.impl.xb.xsdschema.SchemaDocument$Factory.parse(SchemaDocument.java:778) at ... at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:100) ... 17 more Caused by: java.lang.RuntimeException: Cannot load SchemaTypeSystem. 
Unable to load class with name schemaorg_apache_xmlbeans.system.sXMLSCHEMA.TypeSystemHolder. Make sure the generated binary files are on the classpath. at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(XmlBeans.java:783) at org.apache.xmlbeans.impl.xb.xsdschema.SchemaDocument.<clinit>(SchemaDocument.java:19) ... 25 more Caused by: java.lang.ClassNotFoundException: *** Class 'schemaorg_apache_xmlbeans.system.sXMLSCHEMA.TypeSystemHolder' was not found. Bundle 66 does not import package 'schemaorg_apache_xmlbeans.system.sXMLSCHEMA', nor is the package exported by any other bundle or available from the system class loader. *** at org.apache.felix.framework.searchpolicy.R4SearchPolicyCore.findClass(R4SearchPolicyCore.java:198) at org.apache.felix.framework.searchpolicy.R4SearchPolicy.findClass(R4SearchPolicy.java:45) at org.apache.felix.framework.searchpolicy.ContentClassLoader.loadClass(ContentClassLoader.java:109) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at org.apache.xmlbeans.XmlBeans.typeSystemForClassLoader(XmlBeans.java:769) ... 26 more Caused by: java.lang.ClassNotFoundException: schemaorg_apache_xmlbeans.system.sXMLSCHEMA.TypeSystemHolder at org.apache.felix.framework.searchpolicy.R4SearchPolicyCore.findClassOrResource(R4SearchPolicyCore.java:486) at org.apache.felix.framework.searchpolicy.R4SearchPolicyCore.findClass(R4SearchPolicyCore.java:185) ... 30 more ---------- ... with bundle 66 being the xmlbeans bundle. When I look at the pom and manifest of the xmlbeans-2.4.0_1.jar, that export is not present and the exception makes sense. Thus my request: please add "schemaorg_apache_xmlbeans" to the exported packages... If I knew how to build the custom jar myself I would happily test and verify the export myself and give you guys the complete list of required exports. Thank you!

Timeout for Check Job should be equal to mex timeout In org.apache.ode.bpel.engine.BpelRuntimeContextImpl#scheduleInvokeCheck, a 3 min timeout is hardcoded. This can cause problems if the mex timeout is greater than 3 min: the check job will be executed and the mex will be failed. So the correct timeout should be equal to or slightly greater than the mex timeout.

Fix for NPE's in Spatial Lucene for searching bounding box only An NPE occurs when using DistanceQueryBuilder for a minimal bounding box search without the distance filter.

Parameter to UDF which is an alias returned in another UDF in nested foreach causes incorrect results Consider the following Pig script {code} register myudf.jar; A = load 'one.txt' using PigStorage() as ( one: int ); --use this dummy file to start execution B = foreach A { dec = myudf.URLDECODE('hello'); str1 = myudf.REPLACEALL(dec, '[\\u0000-\\u0020]', ' '); -- ERROR str2 = myudf.REPLACEALL('hello', '[\\u0000-\\u0020]', ' '); generate dec, str1, str2; }; describe B; dump B; {code} where one.txt is a file which contains the number one (1) to start execution of the Pig script!!
{code} describe B; {code} returns the following: B: {urldecode_9: chararray,replaceall_urldecode_10_11: chararray,replaceall_12: chararray} {code} dump B; {code} returns (hello,[\u0000-\u0020],hello) The result should be: (hello,hello,hello) There is a workaround for the same: {code} register myudf.jar; A = load 'one.txt' using PigStorage() as ( one: int ); B = foreach A { dec = myudf.URLDECODE('hello'); generate dec as dec, myudf.REPLACEALL(dec, '[\\u0000-\\u0020]', ' ') as str1, myudf.REPLACEALL('hello', '[\\u0000-\\u0020]', ' ') as str2; }; describe B; dump B; {code} where {code} dump B; {code} returns (hello,hello,hello)

A CXF-BC provider used with WS-RM sends the CreateSequence request without SOAP envelope This bug is a little symmetric to SMXCOMP-446 (a CXF-BC consumer with WS-RM sends CreateSequenceResponse with a void body) but the solution (adding a bareOut interceptor) does nothing.

UndeclaredThrowable exception using atom-binding with generic Collection interface When using the org.apache.tuscany.sca.data.collection.Collection interface rather than the org.apache.tuscany.sca.binding.atom.Collection interface, an UndeclaredThrowableException is thrown whenever the invoker receives a 404 error from the http client. The invoker is always throwing NotFoundException from the atom binding but should be throwing NotFoundException from the data-api module in this case.

UrlValidator property is duplicated in Application_nl.properties The UrlValidator property appears twice in Application_nl.properties, on lines 37 and 46.

Deleting a non-existing document results in HTTP status 200 - OK Instead of an HTTP error status code like 404, deletion of a non-existing document results in HTTP 200 - OK: $ curl -i -X PUT http://localhost:5984/some_database && curl -i -X DELETE http://localhost:5984/some_database/not_existing HTTP/1.1 412 Precondition Failed Server: CouchDB/0.9.0a756409 (Erlang OTP/R12B) Date: Fri, 20 Mar 2009 10:45:10 GMT Content-Type: text/plain;charset=utf-8 Content-Length: 95 Cache-Control: must-revalidate {"error":"file_exists","reason":"The database could not be created, the file already exists."} HTTP/1.1 200 OK Server: CouchDB/0.9.0a756409 (Erlang OTP/R12B) Etag: "1-754167300" Date: Fri, 20 Mar 2009 10:45:10 GMT Content-Type: text/plain;charset=utf-8 Content-Length: 52 Cache-Control: must-revalidate {"ok":true,"id":"not_existing","rev":"1-754167300"}

Versions missing in plugins: as a result many plugins cannot be added to framework server Some plugins are missing versions in their geronimo-plugin.xml dependencies. As a result, when you try to install these plugins into the framework server, geronimo can't figure out which version to use and installation fails. Jetty and console-jetty are a couple of obvious examples. Applying the patch does not AFAICT change the contents of the server nor the relationship between the plugins. It only makes sure all geronimo-plugin.xml dependencies have versions.

Generated Schemas are missing import statements In CXF 2.1.4 schemas generated by Aegis contained import statements to import other generated schemas. Even if these statements might be optional from a schema perspective, they are needed when processing the SOAP messages in an Adobe Flex SOAPDecoder.
To make everything a little clearer: Old WSDL: <xsd:schema xmlns:ns1="http://impl.common.pgm.model.pgm.tiller.upw.de" xmlns:tns="http://credentials.pgm.model.pgm.tiller.upw.de" attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://credentials.pgm.model.pgm.tiller.upw.de"> <import xmlns="http://www.w3.org/2001/XMLSchema" namespace="http://impl.common.pgm.model.pgm.tiller.upw.de"/> <xsd:complexType abstract="true" name="PgmCredential"> <xsd:complexContent> <xsd:extension base="ns1:AbstractDatabaseObject"> <xsd:sequence> <xsd:element minOccurs="0" name="name" nillable="true" type="xsd:string"/> </xsd:sequence> </xsd:extension> </xsd:complexContent> </xsd:complexType> <xsd:complexType name="ArrayOfPgmCredential"> <xsd:sequence> <xsd:element maxOccurs="unbounded" minOccurs="0" name="PgmCredential" nillable="true" type="tns:PgmCredential"/> </xsd:sequence> </xsd:complexType> <xsd:complexType name="PgmUsernamePasswordCredential"> <xsd:complexContent> <xsd:extension base="tns:PgmCredential"> <xsd:sequence> <xsd:element minOccurs="0" name="password" nillable="true" type="xsd:string"/> <xsd:element minOccurs="0" name="username" nillable="true" type="xsd:string"/> </xsd:sequence> </xsd:extension> </xsd:complexContent> </xsd:complexType> <xsd:complexType name="PgmX509CertificateCredential"> <xsd:complexContent> <xsd:extension base="tns:PgmCredential"> <xsd:sequence> <xsd:element minOccurs="0" name="certificate" nillable="true" type="xsd:string"/> </xsd:sequence> </xsd:extension> </xsd:complexContent> </xsd:complexType> </xsd:schema> New WSDL: <xsd:schema xmlns:ns0="http://impl.common.pgm.model.pgm.tiller.upw.de" xmlns:tns="http://credentials.pgm.model.pgm.tiller.upw.de" attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://credentials.pgm.model.pgm.tiller.upw.de"> <xsd:complexType abstract="true" name="PgmCredential"> <xsd:complexContent> <xsd:extension base="ns0:AbstractDatabaseObject"> <xsd:sequence> <xsd:element minOccurs="0" name="name" nillable="true" type="xsd:string"/> </xsd:sequence> </xsd:extension> </xsd:complexContent> </xsd:complexType> <xsd:complexType name="ArrayOfPgmCredential"> <xsd:sequence> <xsd:element maxOccurs="unbounded" minOccurs="0" name="PgmCredential" nillable="true" type="tns:PgmCredential"/> </xsd:sequence> </xsd:complexType> <xsd:complexType name="PgmUsernamePasswordCredential"> <xsd:complexContent> <xsd:extension base="tns:PgmCredential"> <xsd:sequence> <xsd:element minOccurs="0" name="password" nillable="true" type="xsd:string"/> <xsd:element minOccurs="0" name="username" nillable="true" type="xsd:string"/> </xsd:sequence> </xsd:extension> </xsd:complexContent> </xsd:complexType> <xsd:complexType name="PgmX509CertificateCredential"> <xsd:complexContent> <xsd:extension base="tns:PgmCredential"> <xsd:sequence> <xsd:element minOccurs="0" name="certificate" nillable="true" type="xsd:string"/> </xsd:sequence> </xsd:extension> </xsd:complexContent> </xsd:complexType> </xsd:schema> The problem is the missing: <import xmlns="http://www.w3.org/2001/XMLSchema" namespace="http://impl.common.pgm.model.pgm.tiller.upw.de"/>

Aegis makes bogus XML Schema in several cases Turning on the new validation-of-schema feature (-PvalidateSchemas), many, many Aegis unit tests fail.

SubsetConfiguration ignores local StrLookups For an AbstractConfiguration it is normally possible to register local StrLookup instances. These are simply ignored by the SubsetConfiguration.
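A minimal sketch reproducing the reported behavior (assuming Commons Configuration 1.x with commons-lang on the classpath; the property names and the "custom" prefix are made up for illustration):

{code}
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.commons.configuration.Configuration;
import org.apache.commons.lang.text.StrLookup;

// Minimal sketch, assuming Commons Configuration 1.x; names are made up.
public class SubsetLookupSketch {
    public static void main(String[] args) {
        BaseConfiguration config = new BaseConfiguration();
        config.addProperty("prefix.key", "${custom:foo}");

        // Register a local StrLookup on the parent configuration.
        config.getInterpolator().registerLookup("custom", new StrLookup() {
            public String lookup(String key) {
                return "resolved-" + key;
            }
        });

        // The parent resolves the variable through the local lookup...
        System.out.println(config.getString("prefix.key")); // resolved-foo

        // ...but the subset ignores it, so the variable stays unresolved.
        Configuration sub = config.subset("prefix");
        System.out.println(sub.getString("key")); // ${custom:foo}
    }
}
{code}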
Failed to throw EntityExistsException on duplicate persist in DB2 TestException.testThrowsEntityExistsException() failed when run against DB2 due to an incorrect exception being thrown from OpenJPA. The test is expecting EntityExistsException but instead a RolledbackException with nested PersistenceException is thrown. Albert Lee.

Compilation issues (Java 1.5) in SysPropPartition example project There are some compilation issues (Java 1.5) with the SysPropPartition example project that can be found here [1] and which is referenced on this page [2] [1] - http://svn.apache.org/repos/asf/directory/sandbox/szoerner/syspropPartition [2] - http://cwiki.apache.org/DIRxSBOX/draft-how-to-write-a-simple-custom-partition-for-apacheds.html I'll attach a patch for this

Shindig GadgetSpec returns invalid XML with UserPrefs and EnumValue The toString method of UserPref returns invalid XML with EnumValue: the elements contain the "value" attribute twice. The attached, trivial patch fixes this.

IndexReader.clone can leave files open I hit this in working on LUCENE-1516. When not using compound file format, if you clone an IndexReader, then close the original, then close the clone, the stored fields files (_X.fdt, _X.fdx) remain incorrectly open. I have a test showing it; fix is trivial. Will post patch & commit shortly.

ClassCastException while getting the Architecture of a composite with a provided service An exception occurs when I try to see the architecture of a composite which provides a service. --------------------------------------------------------------------------------------------------------------------------- metadata <composite name="RemoteServiceManagerComposite" architecture="true"> ..... <instance component="Remote Service Manager" /> <provides action="export" specification="fr.imag.adele.homega.framework.remote.compendium.DistributionProvider"/> </composite> ---------------------------------------------------------------------------------------------------------------------------- Exception -> arch -instance Remote Services Manager Unable to execute command: java.lang.ClassCastException: org.apache.felix.ipojo.composite.service.provides.SpecificationMetadata cannot be cast to org.apache.felix.ipojo.composite.service.provides.ProvidedService java.lang.ClassCastException: org.apache.felix.ipojo.composite.service.provides.SpecificationMetadata cannot be cast to org.apache.felix.ipojo.composite.service.provides.ProvidedService at org.apache.felix.ipojo.composite.service.provides.ProvidedServiceHandlerDescription.getHandlerInfo(ProvidedServiceHandlerDescription.java:68) at org.apache.felix.ipojo.architecture.InstanceDescription.getDescription(InstanceDescription.java:163) at org.apache.felix.ipojo.composite.CompositeInstanceDescription.getDescription(CompositeInstanceDescription.java:78) at org.apache.felix.ipojo.arch.ArchCommandImpl.__printInstance(ArchCommandImpl.java:172) at org.apache.felix.ipojo.arch.ArchCommandImpl.printInstance(ArchCommandImpl.java) at org.apache.felix.ipojo.arch.ArchCommandImpl.__execute(ArchCommandImpl.java:103) at org.apache.felix.ipojo.arch.ArchCommandImpl.execute(ArchCommandImpl.java) at org.apache.felix.shell.impl.Activator$ShellServiceImpl.executeCommand(Activator.java:276) at org.apache.felix.shell.tui.Activator$ShellTuiRunnable.run(Activator.java:167) at java.lang.Thread.run(Unknown Source)

Peer generation, using Peer.vm does not call super class' methods but always calls BasePeer.method(...)
When generating the om, the generated *Peer classes call BasePeer methods when delegating to the original method. This ignores the base class. On generation, it is possible to define a basePeer object, but it is currently irrelevant since it is not used within the generated static class. To fix it, I have replaced all calls to BasePeer.method(...) in the file om/Peer.vm in torque-gen.XX.jar with ${table.BasePeer}.method(...). Is there a better solution, or anything wrong with this suggestion?

Thrown exception reveals passwords If an exception occurs accessing a FileObject on a FileSystem that is addressed with a URL containing user and password, the thrown exception contains the password as part of the error message: org.apache.commons.vfs.FileSystemException: Could not connect to SFTP server at "sftp://user:password@apache.org/". In such a case the URL should be printed as "sftp://user:***@apache.org/". The same applies to log messages - at least for INFO and higher. This is a security risk, since in big companies exceptions and logs are normally collected and archived in monitoring systems and may reveal the password to persons that normally have no authorization to the target system.

Documentation for Makefault is missing end tags. The end tags for reason, node, role and detail are all missing from the Synapse Configuration Language page. http://synapse.apache.org/Synapse_Configuration_Language.html#makefault <makefault [version="soap11|soap12|pox"]> <code (value="literal" | expression="xpath")/> <reason (value="literal" | expression="xpath")> <node>? <role>? <detail>? </makefault>

Several source files include Windows EoL chars Several of the doc files include the Windows Ctrl+M chars at the end of lines when checked out to non-Windows platforms (like MacOSX and Linux), due to the committer not using the ASF suggested svn config values - http://www.apache.org/dev/svn-eol-style.txt From http://www.apache.org/dev/version-control.html - Configuring the Subversion client Committers will need to properly configure their svn client. One particular issue is OS-specific line-endings for text files. When you add a new text file, especially when applying patches from Bugzilla, first ensure that the line-endings are appropriate for your system, then do ... svn add test.txt svn propset svn:eol-style native test.txt Your svn client can be configured to do that automatically for some common file types. Add the contents of the file http://www.apache.org/dev/svn-eol-style.txt to your ~/.subversion/config file. [Note: for Windows this is normally found at C:\Documents and Settings\{username}\Application Data\Subversion\config] Some files may need additional properties to be set, for example svn:executable=* should be applied to those script files (e.g. .bat, .cgi, .cmd, .sh) that are intended to be executed. Since not all such files are necessarily intended to be executed, the executable property should not be made an automatic default. However, you should still pay attention to the messages from your svn client when you do 'svn commit'.

JarFileClassLoader allows resources to be loaded from locations outside of the directory specified in its classpath If JarFileClassLoader contains one classpath entry that is a directory, it will allow resources to be loaded from ANY directory on the file system. The JarFileClassLoader should of course only allow resources to be loaded from within the directory specified.
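Not the actual Geronimo code, but a minimal sketch of the containment check such a classloader could apply before serving a resource from a directory classpath entry (the class and method names are hypothetical):

{code}
import java.io.File;
import java.io.IOException;

// Minimal sketch of a directory-containment check; hypothetical names.
public class ResourceContainmentSketch {

    // True only if the resource resolves to a path inside the classpath directory.
    static boolean isInside(File classpathDir, String resourceName) throws IOException {
        String base = classpathDir.getCanonicalPath() + File.separator;
        String resolved = new File(classpathDir, resourceName).getCanonicalPath();
        return resolved.startsWith(base);
    }

    public static void main(String[] args) throws IOException {
        File dir = new File("classes");
        System.out.println(isInside(dir, "com/example/App.class")); // true
        System.out.println(isInside(dir, "../../etc/passwd"));      // false (escapes the directory)
    }
}
{code}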
add wss4j and xmlsec bundle in cxf feature ensure wss4j and xmlsec bundle get installed before cxf bundle, so that optional org.apache.ws.security package could be resolved Escaped ampersands in xml import need to be reencoded While trying to import {code:xml} <PostalAddress toName="To" stateProvinceGeoId="NJ" postalCode="08873" countryGeoId="USA" contactMechId="001" city="SOMERSET" attnName="Steve" address2="100 Some Ave" address1="First&amp;Broadway"/> {code} got the following exception. I think that the recent security stuff encodes the xml so it is no longer valid during the reader.parse call in org.ofbiz.webtools.WebToolsServices.parseEntityXmlFile(...) My solution is to make a call to {code} xmltext= StringUtil.replaceString(xmltext, "&", "\&amp;"); {code} before reader.parse is called {code} An error occurred saving the data, rolling back transaction (true) Exception: org.xml.sax.SAXException Message: Error storing value ---- stack trace --------------------------------------------------------------- org.ofbiz.entity.GenericEntityException: Error while inserting: [GenericEntity:PartyRelationship]... javolution.xml.sax.XMLReaderImpl.parseAll(Unknown Source) javolution.xml.sax.XMLReaderImpl.parse(Unknown Source) org.ofbiz.entity.util.EntitySaxReader.parse(EntitySaxReader.java:258) org.ofbiz.entity.util.EntitySaxReader.parse(EntitySaxReader.java:209) org.ofbiz.webtools.WebToolsServices.parseEntityXmlFile(WebToolsServices.java:459) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.ofbiz.service.engine.StandardJavaEngine.serviceInvoker(StandardJavaEngine.java:96) org.ofbiz.service.engine.StandardJavaEngine.runSync(StandardJavaEngine.java:54) org.ofbiz.service.ServiceDispatcher.runSync(ServiceDispatcher.java:384) org.ofbiz.service.ServiceDispatcher.runSync(ServiceDispatcher.java:213) org.ofbiz.service.GenericDispatcher.runSync(GenericDispatcher.java:148) org.ofbiz.webtools.WebToolsServices.entityImport(WebToolsServices.java:203) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.ofbiz.service.engine.StandardJavaEngine.serviceInvoker(StandardJavaEngine.java:96) org.ofbiz.service.engine.StandardJavaEngine.runSync(StandardJavaEngine.java:54) org.ofbiz.service.ServiceDispatcher.runSync(ServiceDispatcher.java:384) org.ofbiz.service.ServiceDispatcher.runSync(ServiceDispatcher.java:213) org.ofbiz.service.GenericDispatcher.runSync(GenericDispatcher.java:148) org.ofbiz.webapp.event.ServiceEventHandler.invoke(ServiceEventHandler.java:328) org.ofbiz.webapp.control.RequestHandler.runEvent(RequestHandler.java:530) org.ofbiz.webapp.control.RequestHandler.doRequest(RequestHandler.java:328) org.ofbiz.webapp.control.ControlServlet.doGet(ControlServlet.java:201) org.ofbiz.webapp.control.ControlServlet.doPost(ControlServlet.java:77) javax.servlet.http.HttpServlet.service(HttpServlet.java:710) javax.servlet.http.HttpServlet.service(HttpServlet.java:803) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) org.ofbiz.webapp.control.ContextFilter.doFilter(ContextFilter.java:259) 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) org.ofbiz.catalina.container.CrossSubdomainSessionValve.invoke(CrossSubdomainSessionValve.java:44) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) java.lang.Thread.run(Unknown Source) --------------------------------------------------------------- {code}

RouteBuilderRef - does not work with injected endpoints When using *routeBuilderRef* instead of *package* to configure the route builder in Spring XML, the former does not work if you have e.g. an endpoint defined as well and injected the endpoint using {{@EndpointInject}} {code} @EndpointInject(name = "data") protected Endpoint data; public void configure() throws Exception { // configure a global transacted error handler errorHandler(transactionErrorHandler(required)); from(data) ... } {code} And the Spring DSL {code:xml} <bean id="route" class="org.apache.camel.itest.tx.JmsToHttpWithRollbackRoute"/> <!-- Camel context --> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <!-- use our route --> <routeBuilder ref="route"/> <!-- define our data endpoint as the activemq queue we send a message to --> <endpoint id="data" uri="activemq:queue:data"/> </camelContext> {code}

fix groovy.xml for simple example we need to remove {code} NormalizedMessage out = exchange.createMessage(); out.setContent(new StringSource("&lt;response>" + bindings.get("answerGroovy") + "&lt;/response>")); exchange.setMessage(out, "out"); println exchange; println "Stopping JSR-223 groovy processor"; {code} from the groovy script, since the MessageExchange sent from the quartz endpoint is InOnly, and we need to add disableOutput="true" for scripting:endpoint to avoid setting a target for the scripting endpoint in the example

WebBeansPhaseListener has a static field which gets (potentially) initialized before the OWB context is initialized The WebBeansPhaseListener gets instantiated and initialized whilst the JSF servlet context listener gets executed - therefore this can (potentially) be before the OWB servlet context listener has initialized the OWB context. In that case the resolution of the static field "manager" leads to a null value (because the "rootActivity" has not been set yet) and will never be updated - so the phase listener will never work properly.

WebBeansELResolver sets the "propertyResolved" flag to true regardless of whether resolution succeeded WebBeansELResolver sets the "propertyResolved" flag to true regardless of whether resolution succeeded - therefore no other resolvers (e.g. the standard JSF EL resolver) get the chance to resolve the property.
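The intended contract can be illustrated with a small model (plain Java with hypothetical names - not the OWB code): propertyResolved is only set once resolution actually succeeds, so the next resolver in the chain keeps its chance.

{code}
import java.util.HashMap;
import java.util.Map;

// Minimal model of the EL resolver chain contract; hypothetical names.
public class ElResolverContractSketch {

    static class Context { boolean propertyResolved; }

    private final Map<String, Object> beans = new HashMap<String, Object>();

    Object getValue(Context ctx, String name) {
        Object bean = beans.get(name);
        if (bean != null) {
            // Claim the property only on success; otherwise the next
            // resolver in the chain must get a chance to resolve it.
            ctx.propertyResolved = true;
            return bean;
        }
        return null; // ctx.propertyResolved stays false
    }

    public static void main(String[] args) {
        ElResolverContractSketch resolver = new ElResolverContractSketch();
        resolver.beans.put("myBean", "hello");

        Context ctx = new Context();
        System.out.println(resolver.getValue(ctx, "unknown") + " resolved=" + ctx.propertyResolved);

        ctx = new Context();
        System.out.println(resolver.getValue(ctx, "myBean") + " resolved=" + ctx.propertyResolved);
    }
}
{code}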
GShell commands references not cleaned up when bundle uninstalled When a gshell command bundle is updated or uninstalled/installed, there is a reference to the previously installed command service somewhere. When you run the command after the reinstall or update, you get something like the following error: smx@cgcmac1:sigar> help ERROR CommandLineExecutionFailed: org.apache.geronimo.gshell.command.CommandException: org.springframework.osgi.service.ServiceUnavailableException: service with id=[72] unavailable In this case, service ID 72 refers to the original service for this command. I assume that the command registry has a stale reference to the original service. This isn't a major issue since reinstallation of command bundles will mostly be a developer issue, but documenting it here as a reminder. Running "find system" command leads to a stack overflow This was reproduced when running the command from the smx 4.0-m2-SNAPSHOT distribution. Here's the output (the same two paths repeat endlessly): {code} smx@root:/> find system /Users/gnodet/work/servicemix/smx4/features/trunk/assembly/target/apache-servicemix-4.0-m2-SNAPSHOT/system/com/google/code/scriptengines/scriptengines-groovy/1.1/scriptengines-groovy-1.1.jar /Users/gnodet/work/servicemix/smx4/features/trunk/assembly/target/apache-servicemix-4.0-m2-SNAPSHOT/system/com/google/code/scriptengines/scriptengines-groovy/1.1 /Users/gnodet/work/servicemix/smx4/features/trunk/assembly/target/apache-servicemix-4.0-m2-SNAPSHOT/system/com/google/code/scriptengines/scriptengines-groovy/1.1/scriptengines-groovy-1.1.jar /Users/gnodet/work/servicemix/smx4/features/trunk/assembly/target/apache-servicemix-4.0-m2-SNAPSHOT/system/com/google/code/scriptengines/scriptengines-groovy/1.1 ... ERROR CommandLineExecutionFailed: org.apache.geronimo.gshell.command.CommandException: java.lang.StackOverflowError {code} And here's the stack trace: {code} org.apache.geronimo.gshell.commandline.CommandLineExecutionFailed: org.apache.geronimo.gshell.command.CommandException: java.lang.StackOverflowError at org.apache.geronimo.gshell.parser.visitor.ExecutingVisitor.executePiped(ExecutingVisitor.java:246) at org.apache.geronimo.gshell.parser.visitor.ExecutingVisitor.visit(ExecutingVisitor.java:107) at org.apache.geronimo.gshell.parser.ASTExpression.jjtAccept(ASTExpression.java:17) at org.apache.geronimo.gshell.parser.SimpleNode.childrenAccept(SimpleNode.java:61) at org.apache.geronimo.gshell.parser.visitor.ExecutingVisitor.visit(ExecutingVisitor.java:90) at org.apache.geronimo.gshell.parser.ASTCommandLine.jjtAccept(ASTCommandLine.java:17) at org.apache.geronimo.gshell.wisdom.shell.CommandLineBuilderImpl$1.execute(CommandLineBuilderImpl.java:96) at org.apache.geronimo.gshell.wisdom.shell.CommandLineExecutorImpl.execute(CommandLineExecutorImpl.java:71) at org.apache.geronimo.gshell.wisdom.shell.ShellImpl.execute(ShellImpl.java:172) at org.apache.geronimo.gshell.wisdom.shell.ShellImpl$2.execute(ShellImpl.java:208) at org.apache.geronimo.gshell.console.Console.work(Console.java:187) at org.apache.geronimo.gshell.console.Console.run(Console.java:128) at org.apache.geronimo.gshell.wisdom.shell.ShellImpl.run(ShellImpl.java:252) at org.apache.servicemix.kernel.gshell.core.ShellWrapper.run(ShellWrapper.java:81) at org.apache.servicemix.kernel.gshell.core.LocalConsole.run(LocalConsole.java:125) at java.lang.Thread.run(Thread.java:637) Caused by: org.apache.geronimo.gshell.command.CommandException: java.lang.StackOverflowError at org.apache.geronimo.gshell.wisdom.shell.CommandLineExecutorImpl.doExecute(CommandLineExecutorImpl.java:148) at org.apache.geronimo.gshell.wisdom.shell.CommandLineExecutorImpl.execute(CommandLineExecutorImpl.java:106) at org.apache.geronimo.gshell.parser.visitor.ExecutingVisitor$1.run(ExecutingVisitor.java:208) at org.apache.geronimo.gshell.parser.visitor.ExecutingVisitor.executePiped(ExecutingVisitor.java:231) ... 
15 more Caused by: java.lang.StackOverflowError at org.apache.commons.vfs.cache.SoftRefFilesCache.getFile(SoftRefFilesCache.java:167) at org.apache.commons.vfs.provider.AbstractFileSystem.getFileFromCache(AbstractFileSystem.java:190) at org.apache.commons.vfs.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:283) at org.apache.commons.vfs.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:267) at org.apache.commons.vfs.provider.AbstractFileObject.resolveFile(AbstractFileObject.java:625) at org.apache.commons.vfs.provider.AbstractFileObject.resolveFiles(AbstractFileObject.java:617) at org.apache.commons.vfs.provider.AbstractFileObject.getChildren(AbstractFileObject.java:533) at org.apache.commons.vfs.provider.AbstractFileObject.traverse(AbstractFileObject.java:1502) at org.apache.commons.vfs.provider.AbstractFileObject.findFiles(AbstractFileObject.java:1473) at org.apache.commons.vfs.provider.AbstractFileObject.findFiles(AbstractFileObject.java:1030) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:93) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) at org.apache.geronimo.gshell.commands.shell.FindAction.find(FindAction.java:100) {code} ReplicationMonitor should schedule both replication and deletion work in one iteration The fix to HADOOP-5034 should make ReplicationMonitor schedule both replication and deletion work in one iteration. The change was in the first submitted patch but got lost in the committed patch. bin/couchdb does not honour -o and -e when running in foreground / without -d (matters on macosx) When couchdb is running in the foreground, the option flags -o and -e are not honoured. This is probably quite OK on most systems because the foreground there equals roughly to development mode. On the Mac this is different, because fork and daemonize calls are deprecated, services are managed using launchd, and launchd watches these processes directly. It depends on these processes not (double-)forking and not intentionally dying. So in the current state of things the provided plist starts couchdb and, quite consequentially, does not use -o/-e. That works, but has the effect that couchdb writes to stdout, which launchd apparently forwards to syslog. So the syslog ends up being hogged by access-log level messages. And it is not possible to avoid that by simply editing the quasi config file .plist. I will attach a patch which changes bin/couchdb, .plist, and etc/Makefile.am to make this work more like I reckon it should. MultiFileHierarchicalConfiguration does not create url correctly and should be able to ignore missing files. 1. MultiFileHierarchicalConfiguration does not incorporate the basepath when creating the file url. 2. If the file pattern results in a non-existent file an exception is thrown. When used in conjunction with another file in a DynamicCombinedConfiguration, however, an exception should not be thrown. Instead an empty configuration should be used. 
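For point 2 of the MultiFileHierarchicalConfiguration report, a hedged sketch of the requested behaviour - the helper class below is hypothetical, not Commons Configuration API, and the project's actual fix may look different. The idea is simply to hand back an empty configuration when the resolved pattern points at a missing file: {code}
import java.io.File;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.HierarchicalConfiguration;
import org.apache.commons.configuration.XMLConfiguration;

// Hypothetical helper: fall back to an empty configuration when the file
// resolved from the pattern is missing, so a DynamicCombinedConfiguration
// keeps working instead of propagating an exception.
public final class LenientConfigLoader {
    public static HierarchicalConfiguration load(File resolved) throws ConfigurationException {
        if (resolved == null || !resolved.exists()) {
            return new XMLConfiguration(); // empty configuration instead of an exception
        }
        return new XMLConfiguration(resolved);
    }
    private LenientConfigLoader() { }
}
{code}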
Outline view does not respect the settings of the Entry Editor on displaying or not the operational attributes Outline view does not respect the settings of the Entry Editor on displaying or not the operational attributes. Here's how to reproduce the bug: - Make sure the "Show Operational Attributes" action is checked in the Entry Editor's menu - Display any entry - All the attributes (including operational attributes) are loaded and displayed in the Entry Editor and Outline view - Un-check the "Show Operational Attributes" action in the Entry Editor's menu - The operational attributes disappear from the Entry Editor but not from the Outline view TestMergeTable is broken in Hudson http://hudson.zones.apache.org/hudson/job/HBase-Patch/539/testReport/org.apache.hadoop.hbase/TestMergeTable/testMergeTable/ java.io.IOException: Files have same sequenceid at org.apache.hadoop.hbase.regionserver.HRegion.merge(HRegion.java:2500) at org.apache.hadoop.hbase.regionserver.HRegion.mergeAdjacent(HRegion.java:2412) at org.apache.hadoop.hbase.HMerge$Merger.merge(HMerge.java:167) at org.apache.hadoop.hbase.HMerge$Merger.process(HMerge.java:126) at org.apache.hadoop.hbase.HMerge.merge(HMerge.java:91) at org.apache.hadoop.hbase.TestMergeTable.testMergeTable(TestMergeTable.java:35) viewing requests from timetable not using continuations When clicking on a request from the timetable, you get redirected back to the main page because viewRequestInfo is not an entry mode and the link is not using a continuation. Can't start synapse using synapse war When trying to start synapse using synapse.war, a fatal message is logged saying synapse home is not set. locale retrieval from PortletRequestImpl throws NoSuchElementException When a portlet is trying to retrieve the locales, compare them with the preferredLocale and/or add them, the loop over the Enumeration checks hasMoreElements() once but then retrieves nextElement() twice. I'm attaching a patch for the simple fix: Index: pluto-container/src/main/java/org/apache/pluto/container/impl/PortletRequestImpl.java =================================================================== --- pluto-container/src/main/java/org/apache/pluto/container/impl/PortletRequestImpl.java (revision 757432) +++ pluto-container/src/main/java/org/apache/pluto/container/impl/PortletRequestImpl.java (working copy) @@ -349,7 +349,7 @@ Locale locale = (Locale)e.nextElement(); if (!locale.equals(preferredLocale)) { - locales.add((Locale)e.nextElement()); + locales.add(locale); } } return Collections.enumeration(locales); @@ -518,7 +518,7 @@ Locale locale = (Locale)e.nextElement(); if (!locale.equals(preferredLocale)) { - locales.add(e.nextElement().toString()); + locales.add(locale.toString()); } } return Collections.enumeration(locales); BinaryProtocol missing method implementations write_struct_begin and read_struct_begin aren't implemented in BinaryProtocol, which they should be since the Protocol class no longer provides default implementations for them. Ruby unit tests fail due to change in BinaryProtocolAccelerated interface It looks like the unit tests still exercise BinaryProtocolAccelerated in the old way, using encode_binary and decode_binary, rather than the standard interface methods, post THRIFT-248. DataNodeCluster should create blocks with the same generation stamp as the blocks created in CreateEditsLog HADOOP-5384 makes DataNodeCluster create blocks with generation stamp Block#GRANDFATHER_GENERATION_STAMP (0) so simulated datanodes do not crash the NameNode any more. 
But there is still a problem. CreateEditsLog creates blocks with generation stamp GenerationStamp#FIRST_VALID_STAMP (1000). Because of the generation stamp mismatch, all injected blocks are marked as invalid when the NameNode processes block reports. Erlang atoms must always start with lower-case character Field names in erlang records (structs) need to start with a lower-case character to be valid Erlang. If the .thrift file uses an upper-case character, the compiler should lower-case it. 64-bit integer and double types incorrectly serialized on 32-bit platforms The type currently being used for the 64-bit integer type is Int. On 32-bit platforms, this type only contains 32 bits and so 64-bit integers are being truncated before they're sent over the wire. In addition, serialization for the double type is being offloaded to the 64-bit integer type, and is therefore being truncated as well. thrift.el doesn't syntax highlight single line comments correctly in xemacs single line (// style) comments end up syntax highlighting like /* in my version of xemacs. The following patch fixes the issue for me: {code} diff --git a/thrift.el b/thrift.el index 736b720..3b9026b 100644 --- a/thrift.el +++ b/thrift.el @@ -87,7 +87,7 @@ (defvar thrift-mode-syntax-table (let ((thrift-mode-syntax-table (make-syntax-table))) (modify-syntax-entry ?_ "w" thrift-mode-syntax-table) - (modify-syntax-entry ?/ ". 124b" thrift-mode-syntax-table) + (modify-syntax-entry ?/ ". 1456" thrift-mode-syntax-table) (modify-syntax-entry ?* ". 23" thrift-mode-syntax-table) (modify-syntax-entry ?\n "> b" thrift-mode-syntax-table) thrift-mode-syntax-table) {code} Invalid code when Enum is in another package Thrift file 1 {code} namespace java net.recaptcha.foo.bar struct ChunkInfoRequest { 1: chunkheader.CaptchaImageType imageType } {code} Thrift file 2 {code} namespace java net.recaptcha.foo.blah enum CaptchaImageType { FOO = 1 } {code} When the java code is generated, the ChunkInfoRequest.validate method doesn't fully qualify the type name, leading to a compile error TestRecoveryManager fails with FileAlreadyExistsException TestRecoveryManager always fails when I run core tests on a linux redhat machine. It does not fail on a Mac machine. Testcase: testRecoveryManager took 55.842 sec Caused an ERROR Output directory file:XX/build/test/data/test-recovery-manager/output1 already exists org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:XX/build/test/data/test-recovery-manager/output1 already exists at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:111) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:772) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730) at org.apache.hadoop.mapred.TestRecoveryManager.testRecoveryManager(TestRecoveryManager.java:196) java.lang.ArrayIndexOutOfBoundsException may occur when a relation field is annotated as a primary key and a foreign key <openjpa-1.2.0-SNAPSHOT-rexported nonfatal general error> org.apache.openjpa.persistence.PersistenceException: 0 at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:196) at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:142) at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:192) at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:145) .... 
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 at org.apache.openjpa.jdbc.sql.DBDictionary.getForeignKeyConstraintSQL(DBDictionary.java:3373) at org.apache.openjpa.jdbc.sql.DBDictionary.getAddForeignKeySQL(DBDictionary.java:3252) at org.apache.openjpa.jdbc.schema.SchemaTool.addForeignKey(SchemaTool.java:1066) at org.apache.openjpa.jdbc.schema.SchemaTool.add(SchemaTool.java:604) at org.apache.openjpa.jdbc.schema.SchemaTool.add(SchemaTool.java:344) at org.apache.openjpa.jdbc.schema.SchemaTool.run(SchemaTool.java:321) at org.apache.openjpa.jdbc.meta.MappingTool.record(MappingTool.java:501) at org.apache.openjpa.jdbc.meta.MappingTool.record(MappingTool.java:453) at org.apache.openjpa.jdbc.kernel.JDBCBrokerFactory.synchronizeMappings(JDBCBrokerFactory.java:159) at org.apache.openjpa.jdbc.kernel.JDBCBrokerFactory.newBrokerImpl(JDBCBrokerFactory.java:119) at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:189) Two minor problems in TestOverReplicatedBlocks - There is no Apache license header. - It uses a deprecated API, FSNamesystem.getFSNamesystem(). Memory leak in fastbinary When a struct has a map member, fastbinary.binary_encode leaks memory. Use the testmem.tar.gz app to recreate the problem. {code} # tar xzf testmem.tar.gz # cd testmem # ./testmem fast {code} As this is running, use top or pidstat to view the memory consumed by the process. Hit CTRL-C to stop. On my system it steadily increases. To compare the results using the 'default' encoding technique (using pure Python TBinaryProtocol), run testmem as follows: {code} # ./testmem default {code} Note (again, using top, pidstat, etc.) that the memory used is constant. It appears that the 'map' member triggers this. E.g. string and int members do not cause the leak. Tuscany Widget JavaScript functions should be in their own namespace The current implementation of Widget defines new functions and data structures in the default namespace, which can collide with functions declared by the customer application. We should define these functions and data structures in their own namespace (e.g. tuscany.sca.Reference) Axis2FlexibleMEPClient removal of addressing headers should be configurable I've tried using synapse in proxy mode as well as non-proxy mode but here is the scenario and why removing the headers is wrong for what we're doing: 1) A WS-Security message comes in to synapse (with wsa:MessageID signed & referenced in the digital signature) 2) Synapse Axis2FlexibleMEPClient removes the wsa:MessageID in the original message (Axis2FlexibleMEPClient.removeAddressingHeaders) 3) The endpoint gets the 'forwarded' request and it fails ws-security validation because the wsa:MessageID which is referenced in the digital signature has been removed. The removal of addressing headers needs to be configurable since some implementations might rely on the original wsa:MessageID being there. In our case it is part of a digital signature. Issue when different security policies are applied to a proxy service and an associated endpoint Scenario: Client -- secure (policy A) --> proxy service -- secure (policy B) --> real service Problem: the wrong policy is picked on the return path. low load: ERROR ClientHandler Unexpected HTTP protocol error: Request already submitted I'm seeing HTTP connection related exceptions when under _very_ low load. What do I do: I proxy a SOAP interface of a backend, using wso2. I issue a SOAP command using www.soapui.org. After 1, 2, 3 commands (issued manually) the bus throws internal exceptions. 
What do I get: from the ESB logging: 2008-09-16 15:33:29,157 [127.0.0.1-vloeki_v01] [HttpServerWorker-9] ERROR ClientHandler Unexpected HTTP protocol error: Request already submitted org.apache.http.HttpException: Request already submitted at org.apache.http.impl.nio.DefaultNHttpClientConnection.submitRequest(DefaultNHttpClientConnection.java:203) reference: more details & logs at: http://wso2.org/mailarchive/esb-java-user/2008-September/000812.html workaround: When not running the ESB and the backend on the same machine (or better said, when routing the traffic between the ESB and the backend through another machine), the problem went away; see http://wso2.org/mailarchive/esb-java-user/2008-September/000840.html. This obviously isn't a real fix. This issue might be related to: https://issues.apache.org/jira/browse/SYNAPSE-344 https://issues.apache.org/jira/browse/SYNAPSE-341 https://issues.apache.org/jira/browse/HTTPCORE-170 however, I'm seeing this under very low loads, and it is very reproducible. (I can never issue more than 3 commands in 4 seconds without getting this error) JMS - Sending a message with jmsTemplate102=true fails See nabble: http://www.nabble.com/Apache-Camel-2.0-M1-java.lang.ClassCastException%3A-org.apache.camel.component.jms.JmsConfiguration%24CamelJmsTeemplate102-td22665483s22882.html The ClassCastException occurs because the old JMS API is used. Camel should test whether the template is 1.1 or 1.0.2 and cast to the correct type. FlateFilter: endless loop because of missing length check (for encrypted pdfs) If mayRead is set to zero, then the following statement is executed endlessly: while ((amountRead = decompressor.read(buffer, 0, Math.min(mayRead,BUFFER_SIZE))) != -1) { result.write(buffer, 0, amountRead); } We just have to check that mayRead > 0. A request route with a topic node incurs a 20 second wait and refers to the wrong MEP. If a route contains a node that publishes to a topic, the route is incorrectly suspended for a default 20 seconds at the topic node. Further, JmsProducer.java checks the MEP of the original request Exchange and not the endpoint of the topic. For example, say I have a route built like this: {code} from("activemq:queue:request"). to("generate_news"). to("activemq:topic:news"). to("do_something_else"); {code} The original request is expecting a reply. However, after the "news" is pumped into the news topic, there is a default 20 second wait (requestTimeout). This wait always results in the exception: "The OUT message was not received within: 20000 millis on the exchange..." After reading the JmsProducer code, I changed the route to the following: {code} from("activemq:queue:request"). to("generate_news"). to("activemq:topic:news?exchangePattern=InOnly"). to("do_something_else"); {code} This reveals the root of the bug, which is in the first few lines of method org.apache.camel.component.jms.JmsProducer.process(Exchange): {code} public void process(final Exchange exchange) { final org.apache.camel.Message in = exchange.getIn(); if (exchange.getPattern().isOutCapable()) { {code} The above if statement checks the MEP of the original request's Exchange and not the new endpoint of the news topic. This makes the above "?exchangePattern=InOnly" configuration useless, because the original request MEP is InOut. The result is that after that 20 second time-out, the temporary queue for the original request has expired, so the whole request failed. Note that the next node "do_something_else" is never reached due to the time-out exception. 
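A minimal sketch of the MEP check the topic-node report argues for - the helper below is hypothetical and not Camel's actual fix; it only illustrates preferring the pattern configured on the target endpoint (e.g. via ?exchangePattern=InOnly) over the incoming exchange's pattern: {code}
import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;

// Hypothetical helper: decide whether a reply is expected from the pattern
// configured on the target endpoint, falling back to the incoming exchange's
// pattern only when none is configured.
public final class MepSelection {
    public static boolean expectsReply(Exchange exchange, ExchangePattern endpointPattern) {
        ExchangePattern effective =
            endpointPattern != null ? endpointPattern : exchange.getPattern();
        return effective.isOutCapable();
    }
    private MepSelection() { }
}
{code}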
memory issue in ExcelExtractor The excel extractor consumes lots and lots of memory when given an excel file containing a lot of numeric cells. I tested using a simple sheet containing 254 columns and 5511 rows, resulting in an 8MB file; this blew up with an OOME when given 512MB. The memory issue is caused by the java NumberFormat that is instantiated for every numeric cell. A solution would be to cache the NumberFormat instance in the TikaHSSFListener class. Since NumberFormat is not thread-safe, it might be necessary to pool it. [i18n] Single solution not working properly and not optimized html content directly under body node not parsed correctly The html parser does not correctly parse content that is directly under the body node: when passing html like this: <html><body>This is my content</body></html>, an empty string is returned when calling BodyContentHandler#toString() Special characters in HTML file are not parsed correctly Words containing special characters are not parsed correctly if present in an HTML document. Please refer to the discussion: http://markmail.org/message/jgwzbw63o67amqu3 clustered qpidd segfaults when java client connects Fix set and number issues in generated constant code The compiler generates incorrect code for set constants. First, the function "mkSet" (which does not exist) is being called on a list of constants to make the set. "Set.fromList" should be used in place of "mkSet". Second, no commas appear in the generated list, so all the values are run together, which is likely to cause syntax errors in the generated code if the constants are anything but numbers. Third, no type annotations were being generated for the constants, which means that number literals were being assigned a type of Integer rather than Int or Int64. SOAPHandlerInterceptorTest.java inconsistency In my current environment, this test puts an XMLStreamReader on a message for content, but the interceptor goes looking for a DOM node, and fails to find one. This is probably my mistake in some horrifically indirect fashion, but I'm looking for help. [inlineadministration] Incorrect URL generated for admin popups Instead of /administration/etc/etc/etc the current element context url is used, like for example /content/etc/etc. This is a consequence of the refactoring of the URL system in the context, but it's strange because it should go through root. Currently there are test cases where using root().handleThis().doThat() returns the correct URL, while inline administration uses RootWebHandler.getInstance().handleThis(); in any case it should be the call to handleThis() that, being executed on RootWebHandler, resets the url system. Negative number of maps in cluster summary I observed a negative number of maps in the cluster summary when running the MRReliability test (a job with a large number of failures). wsdl2java fails with NPE with void Async Methods If a method whose name ends in Async returns void, and therefore produces an operation with a name that ends in Async and no output, then running wsdl2java will result in an NPE. 
[exec] wsdl2java -compile -d /home/bkearney/workspace/testclient/./src/main/java -p com.redhat.vdc.client.generated -verbose -classdir /home/bkearney/workspace/testclient/./target/classes http://localhost:8080/backends/wcf?wsdl [exec] wsdl2java - Apache CXF 2.1.4 [exec] [exec] Failed to invoke WSDLToJava [exec] org.apache.cxf.tools.common.ToolException: java.lang.NullPointerException [exec] at org.apache.cxf.tools.wsdlto.WSDLToJavaContainer.execute(WSDLToJavaContainer.java:240) [exec] at org.apache.cxf.tools.common.toolspec.ToolRunner.runTool(ToolRunner.java:83) [exec] at org.apache.cxf.tools.wsdlto.WSDLToJava.run(WSDLToJava.java:103) [exec] at org.jboss.wsf.stack.cxf.tools.CXFConsumerImpl.consume(CXFConsumerImpl.java:224) [exec] at org.jboss.wsf.spi.tools.cmd.WSConsume.importServices(WSConsume.java:222) [exec] at org.jboss.wsf.spi.tools.cmd.WSConsume.main(WSConsume.java:80) [exec] Caused by: java.lang.NullPointerException [exec] at org.apache.cxf.tools.wsdlto.frontend.jaxws.processor.internal.OperationProcessor.isAsyncMethod(OperationProcessor.java:182) [exec] at org.apache.cxf.tools.wsdlto.frontend.jaxws.processor.internal.OperationProcessor.processMethod(OperationProcessor.java:76) [exec] at org.apache.cxf.tools.wsdlto.frontend.jaxws.processor.internal.OperationProcessor.process(OperationProcessor.java:63) [exec] at org.apache.cxf.tools.wsdlto.frontend.jaxws.processor.internal.PortTypeProcessor.process(PortTypeProcessor.java:143) [exec] at org.apache.cxf.tools.wsdlto.frontend.jaxws.processor.WSDLToJavaProcessor.wsdlDefinitionToJavaModel(WSDLToJavaProcessor.java:88) [exec] at org.apache.cxf.tools.wsdlto.frontend.jaxws.processor.WSDLToJavaProcessor.process(WSDLToJavaProcessor.java:60) [exec] at org.apache.cxf.tools.wsdlto.WSDLToJavaContainer.execute(WSDLToJavaContainer.java:197) [exec] at org.apache.cxf.tools.wsdlto.WSDLToJavaContainer.execute(WSDLToJavaContainer.java:232) [exec] ... 5 more DynamicClientFactory.createClient API would throw an exception with source path instead of classes path when classes.mkdir() call fails https://svn.apache.org/repos/asf/cxf/trunk/rt/databinding/jaxb/src/main/java/org/apache/cxf/endpoint/dynamic/DynamicClientFactory.java The following code is not using the classes path in the exception message when classes.mkdir fails. File classes = new File(tmpdir, stem + "-classes"); if (!classes.mkdir()) { throw new IllegalStateException("Unable to create working directory " + src.getPath());// change this to classes.getPath() } Let me know if I need to submit a patch. Hope this is a simple fix to make. Ensure getPathInfo returns correct (expected) value According to the servlet specification, a servlet registered with path "/" in its web application is considered a default servlet. For such a default servlet the HttpServletRequest.getServletPath() and HttpServletRequest.getPathInfo() methods return different results as would be expected: getServletPath() returns the request URI path (minus the servlet context path) and getPathInfo() always returns null. When Sling is deployed using the PAX Web 0.5.1 this situation happens since the SlingMainServlet is registered with path "/" with the HttpService. To fix this situation for Sling and ensuring the expected getServletPath() (of minor use in Sling) and getPathInfo() (very important for all authentication as well as resource resolution), the Sling engine is modified as follows: * The SlingMainServlet is always registered with the HttpService with the servlet path "/". This cannot be configurable. 
* The SlingHttpServletRequestImpl, which is instantiated by the SlingMainServlet to provide the SlingHttpServletRequest interface, is modified to overwrite the getServletPath() and getPathInfo() methods as follows (see also Section SRV.4.4 in Servlet API 2.4 spec) : * getServletPath() always returns "" * getPathInfo() always returns the getServletPath()+getPathInfo() called on the servlet container (or HttpService provided) HttpServletRequest object More information on this issue is available in the Mail Thread http://markmail.org/message/34cithemwzadolw3 SimpleScriptContext.java cannot be compiled with Java 1.4 While attempting to create the BSF3.0 beta2 distribution from ant elder's sources with Java 1.4, the building stops with an error stating: ------------- cut here ---------- F:\download\Apache\bsf\bsf3\bsf-3.0-beta2-src\bsf-api\src\main\java\javax\script\SimpleScriptContext.java:46: cannot resolve symbol symbol : method valueOf (int) location: class java.lang.Integer private static final List SCOPES = Arrays.asList(new Integer[] { Integer.valueOf(ENGINE_SCOPE), Integer.valueOf(GLOBAL_SCOPE) }); ^ F:\download\Apache\bsf\bsf3\bsf-3.0-beta2-src\bsf-api\src\main\java\javax\script\SimpleScriptContext.java:46: cannot resolve symbol symbol : method valueOf (int) location: class java.lang.Integer private static final List SCOPES = Arrays.asList(new Integer[] { Integer.valueOf(ENGINE_SCOPE), Integer.valueOf(GLOBAL_SCOPE) }); ^ 2 errors ------------- cut here ---------- Indeed, "Integer.valueOf(int)" got introduced with Java 1.5. As BSF3 should be deployable with Java 1.4, this would need to be fixed. Here is a unified diff which corrects the problem: ------------- cut here ---------- --- bkp\SimpleScriptContext.java 2007-11-01 16:48:10.000000000 +0100 +++ SimpleScriptContext.java 2007-11-01 20:35:07.906250000 +0100 @@ -43,7 +43,7 @@ private Writer errorWriter; - private static final List SCOPES = Arrays.asList(new Integer[] { Integer.valueOf(ENGINE_SCOPE), Integer.valueOf(GLOBAL_SCOPE) }); + private static final List SCOPES = Arrays.asList(new Integer[] { Integer.valueOf(""+ENGINE_SCOPE), Integer.valueOf(""+GLOBAL_SCOPE) }); public SimpleScriptContext() { reader = new InputStreamReader(System.in); ------------- cut here ---------- Regards, ---rony Adding multiple server with the same host in monitoring portlet results in a sql error Adding multiple server with same host in monitoring portlet results in a sql error. It seems the IP attribute in Servers table is defined to be unique. Kerberos auth support for the java client Currently the 0-8 java client only supports PLAIN and cram-MD5 as authentication mechanisms. The 0-10 java client only uses PLAIN. It would be good to add Keberos as an authentication mechanism to the java client. A JavaScript error for non-required fields will force Ajax form submits to be handled as a full-page request instead Here a simple test case, in the template: <div t:type="form" t:zone="valueZone" t:clientValidation="true"> <span t:type="textfield" t:value="value"/> <div t:type="submit" value="submit" class="button"/> <div t:type="zone" t:id="valueZone"> <t:outputraw t:id="outputComponent" value="value"/> </div> </div> the class: @Property private Long value; @Component private OutputRaw outputComponent; Object onSuccess() { return outputComponent; } when you submit the form you'll see that a full page refresh is done when it shouldn't because the zone form parameter is set. 
If you change the value field in the class from Long to String, the form submit will then perform an ajax request as expected. Also, if I turn off client side validation on the form, it works with numeric values. dblookup gives com.mysql.jdbc.CommunicationsException after idling a night I have a wso2 sequence with dblookup/dbreport mediators. Basically every soap call is stored in the database. Every morning, the wso2esb bus is in a state where it doesn't process any messages anymore due to database access problems. 2008-09-28 20:14:33,480 [127.0.0.1-vloeki_v01] [HttpServerWorker-3] ERROR DBLookupMediator Error executing statement : select * from customer c where token = '00000000-00'; against DataSource :jdbc:mysql://localhost:3306/esb com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception: ** BEGIN NESTED EXCEPTION ** java.net.SocketException MESSAGE: Broken pipe During the night the bus is typically idle (zero messages). The database is _not_ down; mysql query browser is working just fine. Full report & error at http://wso2.org/mailarchive/esb-java-user/2008-September/000897.html hadoop command uses large JVM heap size The command used to determine JAVA_PLATFORM in bin/hadoop does not set the heap size, so it uses the default 1GB heap. Tasks invoking the hadoop command end up using a large heap in streaming jobs. If the maximum memory that can be used by a task is restricted, this could result in map/reduce job failures. LdapDN.startsWith(javax.naming.Name) fails for javax.naming.ldap.LdapName assertTrue("Starting DN fails with ADS LdapDN", new LdapDN("ou=foo,dc=apache,dc=org").startsWith(new LdapDN("dc=apache,dc=org"))); assertTrue("Starting DN fails with Java LdapName", new LdapDN("ou=foo,dc=apache,dc=org").startsWith(new LdapName("dc=apache,dc=org"))); ==> the second assertion fails; internally it calculates a wrong offset for the RDNs. MemoryBuffer > 4096 bytes will truncate remaining bytes There is a bug in the thrift_native implementation of MemoryBuffer. When the read index is >= 4096, we're supposed to chop that 4096 bytes off the buffer to save memory. Instead, it incorrectly nils out the entire buffer, effectively truncating the remaining buffer. fastbinary fails if field has another type If the field type read from the wire does not match the field type in the spec, fastbinary throws an error instead of skipping the field. TBinaryProtocol skips such fields. TServerSocket close() method - inheritance problem I was unable to close the transport layer (TServerSocket). I have examined the library and found a small problem (in my opinion) in TServerSocket's inheritance. TServerSocket inherits from TServerTransportBase and TSocketBase. The close() method is inherited from TServerTransportBase, and this method is just a dummy method. The real method (which closes the socket) in TSocketBase is hidden. Throw more significant exception when a ListAttribute is not found in AddAttributeModel If the AddAttributeModel class does not find a ListAttribute, it throws an NPE with a not-so-meaningful message. A more meaningful exception should be thrown. The ls shell command documentation is out-dated Current ls output is {noformat} bash-3.2$ ./bin/hadoop fs -ls Found 1 items -rw-r--r-- 3 tsz supergroup 1366 2008-11-24 16:58 /user/tsz/r.txt {noformat} but the doc says "dirname <dir> modification_time modification_time permissions userid groupid". See http://hadoop.apache.org/core/docs/r0.18.2/hdfs_shell.html#ls TBinaryProtocol blows up the data size across the map-reduce boundary TBinaryProtocol is very space-inefficient. 
We've seen the data blow up several times across the map-reduce boundary because of it. We should change it to a simple delimited format (backed by LazySimpleSerDe). LazySimpleSerDe should support multi-level nested array, map, struct types Once we do that, we can completely deprecate DynamicSerDe/TCTLSeparatedProtocol, and close any bugs that DynamicSerDe/TCTLSeparatedProtocol has. Add JSON support for literals (often, inline function definitions) that are used to configure some client-side objects (even if they aren't truly JSON) The JSONObject config in the Autocomplete mixin surrounds parameters with double quotes, so Ajax.Autocompleter doesn't know how to interpret callbacks like "afterUpdateElement" and throws a "function not found" exception. Changing row selection in the table from multiple to single does not update the row selection set When the table is changed from multiple to single selection, the selection set still holds the rows selected in multiple mode, and the single selection will not be applied. method AnnotationHandlerChainBuilder.patternMatches() causes CXF portability issues with other JAX-WS stacks If I have config like this: <handler-chains xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns1="http://org.jboss.ws/jaxws/samples/logicalhandler" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee javaee_web_services_1_2.xsd"> <handler-chain> <port-name-pattern>ns1:SOAPEndpoint*</port-name-pattern> <handler> <handler-name> PortClientHandler </handler-name> <handler-class> org.jboss.test.ws.jaxws.samples.logicalhandler.PortHandler </handler-class> </handler> </handler-chain> </handler-chains> the port-name-pattern matches e.g. SOAPEndpointDocPort on JBoss, Glassfish, Websphere but not on Geronimo. The relevant piece of code is: private boolean patternMatches(Element el, QName comp) { ... if (localPart.contains("*")) { //wildcard pattern matching return Pattern.matches(localPart, comp.getLocalPart()); // PROBLEMATIC PART HERE } else if (!localPart.equals(comp.getLocalPart())) { return false; } return true; } The problem is that Pattern.matches("SOAPEndpoint*", "SOAPEndpointDocPort") returns false; CXF expects SOAPEndpoint.* as the wildcard, which is not portable across different app servers. Please consider a fix that will match it. handler chain wildcard matching does not quite work In AnnotationHandlerChainBuilder the following wildcard check is done: if (localPart.contains("*")) { //wildcard pattern matching return Pattern.matches(localPart, comp.getLocalPart()); ... So, for example, if localPart is "foo*", this check will only return true if comp.getLocalPart() returns "foo" followed by any number of o's, but will return false on anything else even if the string starts with "foo". According to the spec, "foo*" should match any string starting with "foo", e.g. "fooBar", "fooCXF", etc. Looks like the "*" in the localPart needs to be first converted into an appropriate regex - e.g. localPart = localPart.replace("*", ".*"); Type alternatives implementation seems to be broken I am running the latest Xerces-J development code from the SVN repository. I tried the following sample XML document, and the XML Schema document. 
XML document: <PERSON xsi:noNamespaceSchemaLocation="person.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <FNAME>Mukul</FNAME> <LNAME>Gandhi</LNAME> <DOB>1999-06-02</DOB> <ADDRESS type="short">my address</ADDRESS> </PERSON> Schema: <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="PERSON" type="PersonType" /> <xs:complexType name="PersonType"> <xs:sequence> <xs:element name="FNAME" type="xs:string" /> <xs:element name="LNAME" type="xs:string" /> <xs:element name="DOB" type="xs:date" /> <xs:element name="ADDRESS" type="LongAddress"> <xs:alternative test="@type='short'" type="ShortAddress"/> <xs:alternative test="@type='long'" type="LongAddress"/> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="ShortAddress"> <xs:simpleContent> <xs:extension base="shortString"> <xs:attribute name="type" type="xs:string"/> </xs:extension> </xs:simpleContent> </xs:complexType> <xs:simpleType name="shortString"> <xs:restriction base="xs:string"> <xs:minLength value="1" /> <xs:maxLength value="50" /> </xs:restriction> </xs:simpleType> <xs:complexType name="LongAddress"> <xs:sequence> <xs:element name="street1" type="xs:string" /> <xs:element name="street2" type="xs:string" minOccurs="0" /> <xs:element name="city" type="xs:string" /> <xs:element name="state" type="xs:string" /> <xs:element name="pin" type="xs:string" /> <xs:element name="country" type="xs:string" /> </xs:sequence> <xs:attribute name="type" type="xs:string"/> </xs:complexType> </xs:schema> When I apply validation using the above examples, with Xerces, I get following errors: [Error] person.xsd:11:69: s4s-elt-must-match.1: The content of 'ADDRESS' must match (annotation?, (simpleType | complexType)?, alternative*, (unique | key | keyref)*)). A problem was found starting at: alternative. [Error] person.xml:5:45: cvc-complex-type.2.3: Element 'ADDRESS' cannot have character [children], because the type's content type is element-only. [Error] person.xml:5:45: cvc-complex-type.2.4.b: The content of element 'ADDRESS' is not complete. One of '{st reet1}' is expected. It seems to me, that type alternative support in Xerces is not working as expected. Some comments about the Java class, org.apache.xerces.impl.xpath.XPath20 (which I think is used by type alternatives) The method, public boolean evaluateNodeTest(QName element, XMLAttributes attributes) doesn't use the parameter, "element" anywhere in the method body. Please correct me, on any of the points I have mentioned. HiveHistory: TestCLiDriver fails if there are test cases with no tasks TestCLIDriver Fails for some test cases. InOnly exchange does not rollback jms message on exchange error This is related to SMXCOMP-474. It seems that in-out MEPs are now rolling back to the JMS queue on errors, but in-only MEPs do not. Need to restore this to the same functionality as prior to the regression mentioned in SMXCOMP-474. [hive] extra rows for count distinct select count(distinct a) from T returns dummy rows from all reducers if number of reducers are more than 1 NPE in JobTracker.getTasksToSave() method Reduce Task Progress shows > 100% when the total size of map outputs (for a single reducer) is high When the total map outputs size (reduce input size) is high, the reported progress is greater than 100%. 
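For the reduce-progress report above, a hedged sketch of the usual guard - purely illustrative, not the committed MapReduce patch: when the estimated total input size undershoots the bytes actually processed, the computed ratio should be clamped before being reported: {code}
// Purely illustrative: clamp a possibly-overshooting progress ratio into
// [0, 1] before reporting it.
public final class ProgressMath {
    public static float progress(long processedBytes, long estimatedTotalBytes) {
        if (estimatedTotalBytes <= 0) {
            return 0.0f;
        }
        float ratio = (float) processedBytes / (float) estimatedTotalBytes;
        return Math.min(1.0f, Math.max(0.0f, ratio));
    }
    private ProgressMath() { }
}
{code}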
[classlib][archive] org.apache.harmony.archive.tests.java.util.jar.ManifestTest fails java.lang.NullPointerException at java.io.InputStream.read(InputStream.java:148) at java.io.InputStream.read(InputStream.java:121) at org.apache.harmony.luni.util.InputStreamExposer.expose(InputStreamExposer.java:118) at java.util.jar.Manifest.read(Manifest.java:208) at org.apache.harmony.archive.tests.java.util.jar.ManifestTest.testRead(ManifestTest.java:420) at java.lang.reflect.VMReflection.invokeMethod(VMReflection.java) XPath selector fails to select messages correctly when the expression evaluates to 'true' Currently the XalanXPathEvaluator class will return 'true' or 'false' based on the existence of a node in the XML document that matches the XPath expression. However, XPath expressions themselves can return values of true or false based on a comparison criterion. For example, for an input message: {code} <root> <a key='first' num='1'/> <b key='second' num='2'>b</b> </root> {code} A consumer using an XPath selector with an expression such as: {code} XPATH '/root/b="b"' {code} should successfully select and consume the message above. The evaluator today would attempt to retrieve an XML node with that expression, but it would fail and return false since the returned value of the expression is a Boolean. The XPath Selector should be able to handle Boolean expressions. Import PSML failed using Internet Explorer 6 Importing PSML and zip files fails when using Internet Explorer 6. get-property function is not recognized in xquery variable expression attribute I'm transforming the response of the remote service in the outSequence of a proxy, using the xquery mediator. I want to change the format of the payload of the response, and I also want to insert the payload of the request as the first element of the response. To do that, I'm storing the request A payload in a variable in the inSequence, and in the outSequence I want to use it in the XQUERY mediator while transforming the response. 1. proxy: gets request A, 2. inSequence: stores the payload A in a property named "body" (it will be used in the outSequence), 3. inSequence: transforms request A to request B, 4. endpoint: sends request B to the remote service, 5. endpoint: receives response B, 6. outSequence: transforms response B to response A with request A included. Here are fragments of the synapse.xml: Storing the payload contents in the "body" property: <property name="body" expression="$body/child::*[position()=1]" /> Transforming the response to the other format and passing the request payload as an argument: <xquery key="response.xq" trace="enable"> <variable name="payload" type="ELEMENT"/> <variable name="request" expression="synapse:get-property('body')" type="ELEMENT"/> </xquery> At runtime I got the following exception: 2009-02-13 20:42:07,359 [192.168.168.6-5MBTB3J] [HttpClientWorker-2] ERROR XQueryMediator Unable to execute the query org.apache.synapse.SynapseException: Error evaluating XPath synapse:get-property('body') on message<?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/03/addressing" xmlns:kom="http://www.eurobank.pl/serwisy/paybylink/1.0/komunikaty"><soapenv:Header> [...header contents...]</soapenv:Header>[... payload contents...] 
</soapenv:Body></soapenv:Envelope> at org.apache.synapse.mediators.xquery.MediatorCustomVariable.handleException(MediatorCustomVariable.java:165) at org.apache.synapse.mediators.xquery.MediatorCustomVariable.evaluate(MediatorCustomVariable.java:153) at org.apache.synapse.mediators.xquery.MediatorCustomVariable.evaluateValue(MediatorCustomVariable.java:93) at org.apache.synapse.mediators.xquery.XQueryMediator.performQuery(XQueryMediator.java:312) at org.apache.synapse.mediators.xquery.XQueryMediator.mediate(XQueryMediator.java:160) at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:61) at org.apache.synapse.config.xml.AnonymousListMediator.mediate(AnonymousListMediator.java:30) at org.apache.synapse.config.xml.SwitchCase.mediate(SwitchCase.java:65) at org.apache.synapse.mediators.filters.SwitchMediator.mediate(SwitchMediator.java:111) at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:61) at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:111) at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:140) at org.apache.synapse.core.axis2.SynapseCallbackReceiver.handleMessage(SynapseCallbackReceiver.java:312) at org.apache.synapse.core.axis2.SynapseCallbackReceiver.receive(SynapseCallbackReceiver.java:133) at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:173) at org.apache.synapse.transport.nhttp.ClientWorker.run(ClientWorker.java:210) at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:58) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) 2009-02-13 20:42:07,359 [192.168.168.6-5MBTB3J] [HttpClientWorker-2] ERROR SynapseCallbackReceiver Synapse encountered an exception, No error handlers found - [Message Dropped] Unable to execute the query After looking a bit at the XQueryMediator code, it seems it uses the AXIOMXPath class instead of the SynapseXPath class. This seems to be the reason for the exception. I also tried to use the XSLT mediator with similar parameters (see below) and the property is set correctly (XSLTMediator uses the SynapseXPath class for evaluating values of properties). <xslt key="response.xsl"> <property name="requestBody" expression="get-property('body')"/> </xslt> drop table drops all disabled tables To reproduce in the shell: create 'A' create 'B' disable 'A' disable 'B' drop 'B' enable 'A' -> exception table 'A' not found [java6][classlib][luni] BitSetTest failed because of the wrong constructor. Method test_getI() failed at line 426: assertEquals("Test1: Wrong size,", 0, bs.size()). The reason is the difference between new BitSet() and new BitSet(0): 1. new BitSet() will create a long array with length 1 2. new BitSet(0) will create a long array with length 0 So the size() method will return 64 if we use new BitSet(), or 0 if we use new BitSet(0). I have tested the test case against the RI. The RI also returns 64 if we use new BitSet(). I found that our java5 trunk uses new BitSet(0) in the test case. I have no idea why we changed this in the java6 branch. Better control memory usage in contrib/index The combiner was originally designed to work only on the map side. When used on the reduce side, it may use too much memory. 
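A self-contained demonstration of the BitSet constructor difference described in the BitSetTest report above; it only restates the report's observation in runnable form: {code}
import java.util.BitSet;

public class BitSetSizeDemo {
    public static void main(String[] args) {
        // new BitSet() allocates a one-element long[] internally...
        System.out.println(new BitSet().size());  // prints 64 (matches the RI)
        // ...while new BitSet(0) allocates a zero-length long[].
        System.out.println(new BitSet(0).size()); // prints 0
    }
}
{code}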
Support to fully plugin host-jms module In order for host-jms module to support clean plugin for different JMS listeners, these changes are needed in current Tuscany code 1) JMSBindingServiceBindingProvider.start() function should remove below line, MessageListener listener = new RRBJMSBindingListener(jmsBinding, jmsResourceFactory, service, targetBinding, messageFactory); RRBJMSBindingListener is Tuscany specific listener and this line will not allow user to plugin a different JMS listener. Instead, code should instantiate in JMSServiceListener implementation's constructor which is constructor of ASFListener. Code in JMSBindingServiceBindingProvider.start() method should be, public void start() { try { this.serviceListener = serviceListenerFactory.createJMSServiceListener(this); //pass current instance of JMSBindingServiceBindingProvider serviceListener.start(); } catch (Exception e) { throw new JMSBindingException("Error starting JMSServiceBinding", e); } } } 2) Tuscany should change JMSServiceListenerFactory.createJMSServiceListener() method declaration to below method, which just passes JMSBindingServiceBindingProvider as parameter. public JMSServiceListener createJMSServiceListener(JMSBindingServiceBindingProvider service) ; The reason for this is, current code passes serviceName, isCallbackService, jmsBinding & listener as params for JMSServiceListenerFactory which are very specific for RRBJMSBindingListener, but not useful for different JMS listener frameworks. If Tuscany passes instance of JMSBindingServiceBindingProvider it gives full flexibility for the listener frameworks to extract what they need from this class. Once above signature is modified, Tuscany can pass the JMSBindingServiceBindingProvider to JMSListener constructor and create the RRBJMSBindingListener in JMSListener contsructor as below, public ASFListener(JMSBindingServiceBindingProvider service) { this.service = service; //pass JMSBindingServiceBindingProvider instance all the way here so that every listener implementation will have full flexibility this.listener = new RRBJMSBindingListener(service.getJMSBinding(), service.getJMSResourceFactory(), service.getService(), service.getTargetBinding(), service.getMessageFactory()); ... //do whatever else needed for specific listener frameworks } 3) Add these getter methods to JMSBindingServiceBindingProvider class, so that listener frameworks can extract what they need. public JMSBinding getBinding(){ return jmsBinding; } public Binding getTargetBinding(){ return targetBinding; } public RuntimeComponentService getService(){ return this.service; } public RuntimeComponent getComponent(){ return this.component; } public MessageFactory getMessageFactory(){ return this.messageFactory; } In the osgi/list output, the spring state is only valid when the bundle is started and should not be displayed otherwise Validate Web Admin Console input - address admin console security vulnerabilities This JIRA addresses the following security vulnerabilities in the web admin console: CVE-2008-5518: Apache Geronimo web administration console directory traversal vulnerabilities. A vulnerability was found in several portlets including Services/Repository, Embedded DB/DB Manager, and Security/Keystores when running the Apache Geronimo server on Windows. This issue may allow a remote attacker to upload any file in any directory. This affects all full JavaEE Geronimo assemblies or other distributions which include the administration web console up to and including Apache Geronimo 2.1.3. 
An alternative workaround (if you choose to not upgrade to Apache Geronimo 2.1.4) would be to stop or undeploy the administration web console application in the server. Credit: The Apache Geronimo project would like to thank Digital Security Research Group (dsecrg.com) for responsibly reporting this issue and assisting us with validating our fixes. CVE-2009-0038: Apache Geronimo web administration console XSS vulnerabilities Various linked and stored cross-site scripting (XSS) vulnerabilities were found in the Apache Geronimo administrative console and related utilities. Using this vulnerability an attacker can steal an administrator's cookie and then authenticate as administrator or perform certain administrative actions. For example, a user can inject XSS in some URLs or in several input fields in various portlets. This affects all full JavaEE Geronimo assemblies or other distributions which include the administration web console up to and including Apache Geronimo 2.1.3. An alternative workaround (if you choose to not upgrade to Apache Geronimo 2.1.4) would be to stop or undeploy the administration web console application in the server. Credit: The Apache Geronimo project would like to thank Digital Security Research Group (dsecrg.com) and Marc Schoenefeld (Red Hat Security Response Team) for responsibly reporting this issue and assisting us with validating our fixes. CVE-2009-0039: Apache Geronimo web administration console XSRF vulnerabilities Various cross-site request forgery (XSRF or CSRF) vulnerabilities were identified in the Apache Geronimo web administration console. Exploiting these issues may allow a remote attacker to perform certain administrative actions, e.g. change web administration password, upload applications, etc... using predictable URL requests once the user has authenticated and obtained a valid session with the server. This affects all full JavaEE Geronimo assemblies or other distributions which include the administration web console up to and including Apache Geronimo 2.1.3. An alternative workaround (if you choose to not upgrade to Apache Geronimo 2.1.4) would be to stop or undeploy the administration web console application in the server. Credit: The Apache Geronimo project would like to thank Digital Security Research Group (dsecrg.com) for responsibly reporting this issue and assisting us with validating our fixes. It corrects the issues with the addition of directory checks and a servlet filter to check for XSS and XSRF vulnerabilities JMS: No useful exception thrown when message is sent to full queue In JMS, when a message is sent when a queue is already at its maximum size, nothing happens for 60 seconds. The program just hangs there. 
Then, suddenly, these two exceptions are thrown:
org.apache.qpid.transport.SessionException: timed out waiting for session to become open (state=DETACHED)
at org.apache.qpid.transport.Session.invoke(Session.java:442)
at org.apache.qpid.transport.SessionInvoker.messageTransfer(SessionInvoker.java:96)
at org.apache.qpid.client.BasicMessageProducer_0_10.sendMessage(BasicMessageProducer_0_10.java:160)
at org.apache.qpid.client.BasicMessageProducer.sendImpl(BasicMessageProducer.java:465)
at org.apache.qpid.client.BasicMessageProducer.sendImpl(BasicMessageProducer.java:420)
at org.apache.qpid.client.BasicMessageProducer.send(BasicMessageProducer.java:289)
at Producer.runTest(Producer.java:135)
at Producer.main(Producer.java:64)
Producer: Caught an Exception: javax.jms.JMSException: Exception when sending message
javax.jms.JMSException: Exception when sending message
at org.apache.qpid.client.BasicMessageProducer_0_10.sendMessage(BasicMessageProducer_0_10.java:173)
at org.apache.qpid.client.BasicMessageProducer.sendImpl(BasicMessageProducer.java:465)
at org.apache.qpid.client.BasicMessageProducer.sendImpl(BasicMessageProducer.java:420)
at org.apache.qpid.client.BasicMessageProducer.send(BasicMessageProducer.java:289)
at Producer.runTest(Producer.java:135)
at Producer.main(Producer.java:64)
This is in contrast to, for example, a Python qpid producer, which will throw an exception as soon as the queue is full, and it will be very specific about the problem, telling you the queue is too full, which message overfilled it, etc. I tested this using a hacked-up version of the direct producer/consumer JMS example. Just run the producer (without the consumer running) long enough with large enough messages so that it will reach the maximum queue size, and you should be able to see it.
@Oneway doesn't work with simple/bare element types
If the service interface is specified to have a parameter style of bare/simple by using the @SOAPBinding annotation, a method that has been annotated with @Oneway does not behave as expected. The method still acts as a two-way method in that the client is not freed up immediately. The client waits for the method to finish processing.
Interface:
@WebService
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE)
public interface HelloWorld {
    @WebMethod
    @Oneway
    void sayHiOneWay(String text);
}
Impl:
@WebService(endpointInterface = "demo.hw.server.HelloWorld", serviceName = "HelloWorld")
public class HelloWorldImpl implements HelloWorld {
    @Oneway
    public void sayHiOneWay(String text) {
        System.out.println("sayHiOneWay called");
        System.out.println("sleeping for 10 secs");
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("woke up after 10 secs");
        System.out.println("accepted: " + text);
    }
}
Client:
public final class Client {
    private static final QName SERVICE_NAME = new QName("http://server.hw.demo/", "HelloWorld");
    private static final QName PORT_NAME = new QName("http://server.hw.demo/", "HelloWorldPort");

    private Client() {
    }

    public static void main(String args[]) throws Exception {
        Service service = Service.create(SERVICE_NAME);
        // Endpoint Address
        String endpointAddress = "http://localhost:9090/Hello";
        // Add a port to the Service
        service.addPort(PORT_NAME, SOAPBinding.SOAP11HTTP_BINDING, endpointAddress);
        HelloWorld hw = service.getPort(HelloWorld.class);
        System.out.println("Invoke sayHiOneWay()....");
        long startTime = System.currentTimeMillis();
        hw.sayHiOneWay(System.getProperty("user.name"));
        System.out.println("Time taken to call sayHiOneWay(): " + (System.currentTimeMillis() - startTime) + " ms");
    }
}
Hard-coded messages inside tapestry.js are not localized
Hard-coded messages should be extracted and tapestry.js split into two files: one that has the logic, one that has the messages.
chukwa build got broken
The move to being a subproject has broken the Chukwa build process.
rspec bdd is broken
There are a multitude of bugs in the running of rspec in buildr in the current github repository. These patches fix the issues with:
* mkdir_p being called without being called on FileUtils
* a call being made to Error.dump_yaml with only one argument instead of two
* Kernel.gem being a private method; this patch changes the calls to it into Kernel#send calls, which gets around it
* for yucks, I made the backtrace that comes back on errors nicer by turning it into a pretty string. (We should probably not have this stack trace and the usual rspec --trace one being returned at the same time like it does now; it's pretty confusing.)
Vertical breadcrumbs illegally mix in-line and block DOM elements
1) Run the breadcrumbs demo page.
2) Change the orientation to vertical and set the inline style to: background-color:red
3) Click on update.
Note that in the FF and Safari browsers the style change does not take effect. The problem is the resulting DOM:
<span style="background-color: red;" class="x4v"><div>...
As you can see, a DIV (block) got rendered in a SPAN (in-line). This is an illegal DOM hierarchy, as the spec clearly states that in-line elements must only ever include other in-line elements.
The job instrumentation API needs to have a method for finalizeJob
This method should be called in JobTracker's finalizeJob. Currently the jobComplete method on the job instrumentation class is called only for jobs that succeed (an illustrative sketch of the proposed hook follows after the next entry).
hdfsproxy includes duplicate jars in tarball, source in binary tarball
The binary tarball should not include hdfsproxy source. Similarly, hdfsproxy should not include its own copy of jars already in the distribution, particularly the hadoop-* jars.
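For the job-instrumentation entry above, a hedged sketch of the proposed hook (the class and method shapes here are illustrative, not the actual Hadoop API):
{code}
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;

// Illustrative only: a finalize hook next to the existing completion hook,
// so instrumentation sees every finalized job, not just the successful ones.
public class JobTrackerInstrumentationSketch {
    public void completeJob(JobConf conf, JobID id) {
        // existing behaviour: invoked only when a job succeeds
    }

    public void finalizeJob(JobConf conf, JobID id) {
        // proposed hook: JobTracker.finalizeJob() would call this for every job,
        // whether it succeeded, failed, or was killed
    }
}
{code}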
After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
fs.default.name=hdfs://p520aix61.mydomain.com:9000
The hostname for the box is p520aix and the IP is 10.120.16.68. If you use the following URL, "hdfs://10.120.16.68", to connect to the namenode, the exception below occurs. You can only connect successfully if "hdfs://p520aix61.mydomain.com:9000" is used.
Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
M2 Plugins: Need a way to define a new converter or validator tag with the existing Id
If a converter or a validator references the Id already associated with an existing tag, then the base Faces plugin finds the ValidatorBean or ConverterBean for the existing tag instead of creating a new bean instance. This means that we cannot create validator/converter tags in ADF Faces that reference converter/validator IDs defined in Trinidad. The proposed fix is to add new extension properties in the metadata: mfp:root-converter-id and mfp:root-validator-id. If these are defined, Facelet tag generation will write out the 'root' (real) Id instead of the ID specified as validator-id or converter-id. This will allow us to use new (fake) IDs as validator-id/converter-id to distinguish the new tags, while still writing out the correct Id in the .taglib.xml.
TestMTQueries is broken
It has been broken for quite some time but the build is not failing.
Exception when exposing remote service using DOSGi
When exposing a service using CXF-DOSGi, sometimes the following exception appears.
WARNING: Initial attempt to crate application context was unsuccessful.
org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [META-INF/cxf/cxf.xml]; nested exception is java.io.FileNotFoundException: class path resource [META-INF/cxf/cxf.xml] cannot be opened because it does not exist
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:349)
at org.apache.cxf.bus.spring.ControlledValidationXmlBeanDefinitionReader.loadBeanDefinitions(ControlledValidationXmlBeanDefinitionReader.java:122)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:109)
at org.apache.cxf.bus.spring.BusApplicationContext.loadBeanDefinitions(BusApplicationContext.java:263)
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123)
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:423)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:353)
at org.apache.cxf.bus.spring.BusApplicationContext.<init>(BusApplicationContext.java:91)
at org.apache.cxf.bus.spring.SpringBusFactory.createApplicationContext(SpringBusFactory.java:102)
at org.apache.cxf.bus.spring.SpringBusFactory.createBus(SpringBusFactory.java:93)
at org.apache.cxf.bus.spring.SpringBusFactory.createBus(SpringBusFactory.java:86)
at org.apache.cxf.bus.spring.SpringBusFactory.createBus(SpringBusFactory.java:64)
at org.apache.cxf.bus.spring.SpringBusFactory.createBus(SpringBusFactory.java:53)
at org.apache.cxf.BusFactory.getDefaultBus(BusFactory.java:69)
at org.apache.cxf.BusFactory.getThreadDefaultBus(BusFactory.java:106)
at org.apache.cxf.BusFactory.getThreadDefaultBus(BusFactory.java:97)
at org.apache.cxf.endpoint.AbstractEndpointFactory.getBus(AbstractEndpointFactory.java:73)
at org.apache.cxf.frontend.AbstractWSDLBasedEndpointFactory.initializeServiceFactory(AbstractWSDLBasedEndpointFactory.java:228)
at org.apache.cxf.frontend.ServerFactoryBean.initializeServiceFactory(ServerFactoryBean.java:157)
at org.apache.cxf.frontend.AbstractWSDLBasedEndpointFactory.createEndpoint(AbstractWSDLBasedEndpointFactory.java:99)
at org.apache.cxf.frontend.ServerFactoryBean.create(ServerFactoryBean.java:117)
at org.apache.cxf.dosgi.dsw.handlers.PojoConfigurationTypeHandler.createServer(PojoConfigurationTypeHandler.java:107)
at org.apache.cxf.dosgi.dsw.hooks.ServiceHookUtils.createServer(ServiceHookUtils.java:89)
at org.apache.cxf.dosgi.dsw.hooks.CxfPublishHook.createServer(CxfPublishHook.java:106)
at org.apache.cxf.dosgi.dsw.hooks.CxfPublishHook.publishEndpoint(CxfPublishHook.java:80)
at org.apache.cxf.dosgi.dsw.Activator$1.run(Activator.java:143)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.FileNotFoundException: class path resource [META-INF/cxf/cxf.xml] cannot be opened because it does not exist
at org.springframework.core.io.ClassPathResource.getInputStream(ClassPathResource.java:142)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336)
... 30 more
Highlighter throws StringIndexOutOfBoundsException
Using the canonical Solr example (ant run-example) I added this document (using exampledocs/post.sh):
<add><doc>
<field name="id">Test for Highlighting StringIndexOutOfBoundsExcdption</field>
<field name="name">Some Name</field>
<field name="manu">Acme, Inc.</field>
<field name="features">Description of the features, mentioning various things</field>
<field name="features">Features also is multivalued</field>
<field name="popularity">6</field>
<field name="inStock">true</field>
</doc></add>
and then the URL http://localhost:8983/solr/select/?q=features&hl=true&hl.fl=features caused the exception. I have a patch. I don't know if it is completely correct, but it avoids this exception.
Javadoc-dev ant target runs out of heap space
The default configuration for the ant task javadoc-dev does not specify a maxmemory and, after churning for a while, fails with an OOM exception:
{noformat}
[javadoc] Constructing Javadoc information...
[javadoc] Standard Doclet version 1.6.0_07
[javadoc] Building tree for all the packages and classes...
[javadoc] java.lang.OutOfMemoryError: Java heap space
[javadoc] at java.util.LinkedHashMap.createEntry(LinkedHashMap.java:424)
[javadoc] at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:406)
[javadoc] at java.util.HashMap.put(HashMap.java:385)
[javadoc] at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:118)
[javadoc] at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:97)
[javadoc] at sun.util.resources.OpenListResourceBundle.handleGetObject(OpenListResourceBundle.java:58)
[javadoc] at sun.util.resources.TimeZoneNamesBundle.handleGetObject(TimeZoneNamesBundle.java:59)
[javadoc] at java.util.ResourceBundle.getObject(ResourceBundle.java:378)
[javadoc] at java.util.ResourceBundle.getObject(ResourceBundle.java:381)
[javadoc] at java.util.ResourceBundle.getStringArray(ResourceBundle.java:361)
[javadoc] at sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:100)
[javadoc] at sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:81)
[javadoc] at java.util.TimeZone.getDisplayNames(TimeZone.java:399)
[javadoc] at java.util.TimeZone.getDisplayName(TimeZone.java:350)
[javadoc] at java.util.Date.toString(Date.java:1025)
[javadoc] at com.sun.tools.doclets.formats.html.markup.HtmlDocWriter.today(HtmlDocWriter.java:337)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.printHtmlHeader(HtmlDocletWriter.java:281)
[javadoc] at com.sun.tools.doclets.formats.html.ClassWriterImpl.writeHeader(ClassWriterImpl.java:122)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildClassHeader(ClassBuilder.java:164)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildClassDoc(ClassBuilder.java:124)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
{noformat}
table.jsp either 500s out or doesn't list the regions
The table.jsp page either 500 errors out if you are viewing a .META. or -ROOT- table, or for user tables it doesn't list the regions.
The new Blackbird console makes Safari JavaScript completely non-functional
Not clear why, but all JavaScript in Safari, including client validation and Ajax, appears to be broken.
Camel -> JBI endpoint fails to find converters when deployed into SMX 4
The SA which works in ServiceMix 3.x doesn't work in ServiceMix 4.x, since the servicemix-camel component complains that it can't find the TypeConverter.
org.apache.camel.NoTypeConversionAvailableException: No type converter available to convert from type: class org.apache.camel.component.file.FileMessage to the required type: javax.xml.transform.Source with value FileMessage: C:\test\di10\camelin\pvsw-engine-test-sa-1.0.pom
Missing attribute for @JSFValidator and @JSFConverter on myfaces-builder-annotations
The attribute serialuidtag is available on the myfaces-builder-plugin doclets but not on the annotations for @JSFValidator and @JSFConverter. Ideally the plugin would generate it, but right now the plugin has no way to decide whether to generate a given file or not, so it just generates all files each time it runs. Also, the attribute clazz is not available on @JSFValidator, so we need to define it to make the goal make-validators work using annotations.
On the wire headers are dropped inside camel route between two CxfEndpoints
Currently, if there is a Camel route that involves two or more cxf endpoints, then the on-the-wire message headers such as SOAP headers are dropped. This fix enables one to relay these headers along the route or preserve the old behaviour and drop the headers. Headers relay/drop is bidirectional. Both out-of-band (*not* defined in the WSDL contract) and in-band (defined in the WSDL contract) headers are supported. Relaying headers can be further customized by implementing additional logic inside the MessageHeadersRelay interface. The default behaviour is to relay headers, provided that an instance of MessageHeadersRelay bound to the message binding namespace allows a header to be relayed. Please see .../components/camel-cxf/src/test/java/org/apache/camel/component/cxf/soap/headers/CxfMessageHeadersRelayTest.java for details on how this is done. Attached is the patch that provides this functionality. Thanks, Marat
Ruby C extension doesn't build on 1.8.5
The C extension fails like so:
gcc -I. -I. -I/usr/local/lib/ruby/1.8/i686-darwin8.9.1 -I/Users/kev/code/thrift/lib/rb/ext -DHAVE_STRLCPY -fno-common -g -O2 -Wall -Werror -c binary_protocol_accelerated.c
cc1: warnings being treated as errors
binary_protocol_accelerated.c: In function 'write_string_direct':
binary_protocol_accelerated.c:79: warning: implicit declaration of function 'RSTRING_LEN'
binary_protocol_accelerated.c: In function 'read_byte_direct':
binary_protocol_accelerated.c:227: warning: implicit declaration of function 'RSTRING_PTR'
binary_protocol_accelerated.c:227: error: subscripted value is neither array nor pointer
binary_protocol_accelerated.c: In function 'read_i16_direct':
binary_protocol_accelerated.c:232: error: subscripted value is neither array nor pointer
binary_protocol_accelerated.c:232: error: subscripted value is neither array nor pointer
binary_protocol_accelerated.c: In function 'read_i32_direct':
binary_protocol_accelerated.c:237: error: subscripted value is neither array nor pointer
binary_protocol_accelerated.c:238: error: subscripted value is neither array nor pointer
binary_protocol_accelerated.c:239: error: subscripted value is neither array nor pointer
binary_protocol_accelerated.c:240: error: subscripted value is neither array nor pointer
This is because RSTRING_LEN and RSTRING_PTR (and also RARRAY_LEN) aren't defined by Ruby (yet). Those macros were added to make it easier for alternate implementations to support C extensions. Anyway. We can fix them in an ifdef (one for each). RSTRING_LEN is equivalent to (if we want to cast..) {code}RSTRING(rb_String(some_macro_var))->len{code}. RSTRING_PTR is equivalent to {code}RSTRING(rb_String(some_macro_var))->ptr{code}. RARRAY_LEN is equivalent to {code}RARRAY(rb_Array(some_macro_var))->len{code}. These should probably be put in macros.h and included in struct.c, memory_buffer.c, and binary_protocol_accelerated.c
[Hive] union all queries broken - all kinds of problems
1. Map-only job : same input
Hangs because the mapper tries to do the same open twice, and the hadoop filesystem complains. Fix: Only initialize once - keep state at the Operator level for the same. Should do the same for Close.
2. Map-only job : different inputs
Loss of data due to rename. Fix: change rename to move files to the directory.
3. Map-only job in subquery + RedSink: works currently
4. 2 variables, so 4 sub-cases: number of sub-queries having map-reduce jobs (1/2), and the operator after the Union (RS/FS)
a. Number of sub-queries having map-reduce jobs: 1. Operator after Union: RS.
Can be done in 2MR - really difficult with current infrastructure. Should do with 3 MR jobs now - break on top of UNION. Future optimization: move operators between Union and RS before Union.
b. Number of sub-queries having map-reduce jobs: 2. Operator after Union: RS.
Needs 3MR - Should do with 3 MR jobs - break on top of UNION. Future optimization: move operators between Union and RS before Union.
c. Number of sub-queries having map-reduce jobs: 1. Operator after Union: FS.
Can be done in 1MR - really difficult with current infrastructure. Can be easily done with 2 MR by removing UNION and cloning operators between Union and FS. Should do with 3 MR jobs now - break on top of UNION. Followup optimization: 2MR should be able to handle this.
d. Number of sub-queries having map-reduce jobs: 2. Operator after Union: FS.
Can be easily done with 2 MR by removing UNION and cloning operators between Union and FS. Should do with 3 MR jobs now - break on top of UNION.
Followup optimization: 2MR should be able to handle this.
Error in marketingPermissionService
Error when marketingPermissionService is called. This is because of the wrong location of genericBasePermissionCheck in the marketing services.xml:
======================
Error running simple method [genericBasePermissionCheck] in XML file [component://marketing/script/org/ofbiz/common/permission/CommonPermissionServices.xml]: (Could not find SimpleMethod XML document in resource: component://marketing/script/org/ofbiz/common/permission/CommonPermissionServices.xml)
======================
CRC errors not detected reading intermediate output into memory with problematic length
It's possible that the expected, uncompressed length of the segment is less than the available/decompressed data. This can happen in some worst cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. CRC errors will remain undetected because IFileInputStream does not validate the checksum until the end of the stream, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:
{code}
int n = input.read(shuffleData, 0, shuffleData.length);
while (n > 0) {
  bytesRead += n;
  n = input.read(shuffleData, bytesRead, (shuffleData.length-bytesRead));
}
{code}
will read only up to the expected length. Without reading the whole segment, the checksum is not validated. Even if IFileInputStream instances are closed, they should always validate checksums.
HistoryViewer throws IndexOutOfBoundsException when there are files or directories not conforming to log file name convention
When running the history viewer in local mode (specifying file:///<path/to/hodlogs> as the path to logs), it throws IndexOutOfBoundsException due to the following code:
{code}
String[] jobDetails = JobInfo.decodeJobHistoryFileName(jobFiles[0].getName()).split("_");
trackerHostName = jobDetails[0];
trackerStartTime = jobDetails[1];
{code}
The reason is that there are some directories under the log directories that do not conform to the log file naming convention, so the length of the jobDetails array is 1. The history viewer should be more defensive and ignore (possibly with a warning) files or directories that it does not recognize.
CAMEL-1424 BeanInfo.overridesExistingMethod() doesn't handle overloaded methods correctly.
Camel can fail to determine the appropriate method to call on a bean that has overloaded (vs. overridden) methods. It will always call the first overloaded method, even if the parameter is not the same type as that of the message being processed. The bug is in BeanInfo.overridesExistingMethod. Here's the offending code:
for (int i = 0; i < info.getMethod().getParameterTypes().length; i++) {
    Class type1 = info.getMethod().getParameterTypes()[i];
    Class type2 = methodInfo.getMethod().getParameterTypes()[i];
    if (!type1.equals(type2)) {
        continue;
    }
}
// same name, same parameters, then its overrides an existing class
return info;
If the parameter types don't match, the continue statement is not going to do what you'd want. The author obviously intended the "continue" to continue with the next methodInfo. Instead, it checks the next parameter and will always return the current methodInfo if it reaches this point.
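A minimal sketch of the comparison the author presumably intended (illustrative only, not the actual Camel patch): factor the parameter check into a helper, so that any mismatch rules the candidate out and the caller moves on to the next methodInfo instead of falling through to the return.
{code}
import java.lang.reflect.Method;

final class OverrideCheck {
    /** Sketch: true only when both methods take exactly the same parameter types. */
    static boolean sameParameterTypes(Method a, Method b) {
        Class<?>[] pa = a.getParameterTypes();
        Class<?>[] pb = b.getParameterTypes();
        if (pa.length != pb.length) {
            return false;
        }
        for (int i = 0; i < pa.length; i++) {
            if (!pa[i].equals(pb[i])) {
                return false; // a single mismatch means overload, not override
            }
        }
        return true;
    }
}
{code}
With such a helper, overridesExistingMethod would only return info when the name and the complete parameter list really match.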
Here's a unit test that exemplifies the issue:
----------------------------------
package biz.firethorn.hostinterface.camel;

import java.lang.reflect.Method;

import junit.framework.Assert;
import junit.framework.TestCase;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Message;
import org.apache.camel.RuntimeCamelException;
import org.apache.camel.component.bean.AmbiguousMethodCallException;
import org.apache.camel.component.bean.BeanInfo;
import org.apache.camel.component.bean.MethodInvocation;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.DefaultExchange;
import org.apache.camel.impl.DefaultMessage;

public class BeanInfoTest extends TestCase {

    public void test() throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        BeanInfo beanInfo = new BeanInfo(camelContext, Bean.class);

        Message message = new DefaultMessage();
        message.setBody(new RequestB());
        Exchange exchange = new DefaultExchange(camelContext);
        exchange.setIn(message);

        MethodInvocation methodInvocation = beanInfo.createInvocation(new Bean(), exchange);
        Method method = methodInvocation.getMethod();
        Assert.assertEquals("doSomething", method.getName());
        Assert.assertEquals(RequestB.class, method.getGenericParameterTypes()[0]);
    }
}

class Bean {
    public void doSomething(RequestA request) { }
    public void doSomething(RequestB request) { }
}

class RequestA {
    public int i;
}

class RequestB {
    public String s;
}
TestCLI fails
DiskErrorException in TaskTracker when running a job
In particular, this can be reproduced on Windows by running a hadoop example such as PiEstimator.
{noformat}
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/pids/attempt_200902021632_0001_m_000002_0 in any of the configured local directories
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:381)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
at org.apache.hadoop.mapred.TaskTracker.getPidFilePath(TaskTracker.java:430)
at org.apache.hadoop.mapred.TaskTracker.removePidFile(TaskTracker.java:440)
at org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.runChild(JvmManager.java:370)
at org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.run(JvmManager.java:338)
{noformat}
(I have changed TaskTracker.java to print out the trace.) This patch disables the usage of setsid and pid files on Windows.
Support for constructor arg implicit reference with auto index
For the below-mentioned scenario of implicit references in constructor args,
<constructor-arg><ref bean="savingsAccountService"/></constructor-arg>
<constructor-arg><ref bean="stockAccountService"/></constructor-arg>
whose type attribute and index attribute are absent, it should be possible to support the same with an auto-index mechanism.
dblook script fails when URL contains special characters
This problem was discovered when testing 10.5.1.0-RC1. The dblook shell script fails if the database URL contains characters that have a special meaning to the shell, even if those characters are properly escaped/quoted on the command line.
Example: $ ./bin/dblook -d 'jdbc:derby:jar:(demo/databases/toursdb.jar)toursdb' ./bin/dblook[29]: eval: syntax error at line 1: `(' unexpected Connection attributes, like create=true, will be ignored because the semi-colon makes the script execute the setting of the connection attribute as a separate shell command: $ ./bin/dblook -d 'jdbc:derby:NewDatabase;create=true' -- Timestamp: 2009-03-25 09:54:56.169 -- Source database is: NewDatabase -- Connection URL is: jdbc:derby:NewDatabase -- appendLogs: false -- Note: At least one unexpected error/warning message was -- encountered during DDL generation. See dblook.log -- to review the message(s). In dblook.log: java.sql.SQLException: Database 'NewDatabase' not found. at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source) ... Sling JSP Include tag does not take resource super types into account for synthetic resources Consider a repository structure like this /apps/type/base/GET.jsp /apps/type/extended/sling:resourceSuperType = type/base Now including a resource "/content/missing" like <sling:include path="/content/missing" resourceType="type/extended" /> is expected to call the GET.jsp script for a GET request. This does not work, since the SyntheticResource created by the include tag implementation always returns null for the resource super type. JMSBindingProcessor.write() needs to call out extensions (for WireFormats, etc.) We have the WireFormatXXXXProcessor.write()'s implemented, but we don't call them. Maven build fails on Windows The buildnumber plugin in the Droids build fails with the following error on Windows (cygwin): {noformat} $ mvn clean install [INFO] Scanning for projects... [INFO] Reactor build order: [INFO] Droids [INFO] Droids Norobots [INFO] Droids Core [INFO] Droids Spring [INFO] Droids Solr [INFO] Droids Wicket Components [INFO] Droids Tika [INFO] ------------------------------------------------------------------------ [INFO] Building Droids [INFO] task-segment: [clean, install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean] [INFO] [buildnumber:create {execution: default}] [INFO] Checking for local modifications: skipped. [INFO] Executing: cmd.exe /X /C "svn --non-interactive update c:\src\droids-trunk" [INFO] Working directory: c:\src\droids-trunk [INFO] Svn command failed due to some locks in working copy. We try to run a 'svn cleanup'. [INFO] Executing: cmd.exe /X /C "svn" [INFO] Working directory: c:\src\droids-trunk Provider message: The svn command failed. Command output: svn: Working copy 'c:\src\droids-trunk' locked svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details) Type 'svn help' for usage. [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Couldn't update project. Embedded error: Error! [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 7 seconds [INFO] Finished at: Thu Mar 26 14:20:21 CET 2009 [INFO] Final Memory: 10M/18M [INFO] ------------------------------------------------------------------------ {noformat} My checkout is not locked. 
The OBR ResolverImpl shouldn't try to start fragment bundles
If a fragment bundle is deployed via the OBR ResolverImpl, the resolver tries to start it. This results in an exception:
--- 8< ---
ERROR: Resolver: Start error - XXX
org.osgi.framework.BundleException: Fragment bundles can not be started.
at org.apache.felix.framework.Felix._startBundle(Felix.java:1614)
at org.apache.felix.framework.Felix.startBundle(Felix.java:1588)
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:382)
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:363)
at org.apache.felix.bundlerepository.ResolverImpl.deploy(ResolverImpl.java:560)
at de.brz.stratos.goa.OSGiUtil.startBundles(OSGiUtil.java:61)
at de.brz.stratos.goa.GoaFelixLoader.run(GoaFelixLoader.java:53)
at de.brz.stratos.goa.GoaFelixLoader.main(GoaFelixLoader.java:121)
--- 8< ---
Kristian
AMQ-2241 Memory leak in ConnectionStateTracker (cannot remove connections, sessions or consumers)
After inspecting the heap dump of our application, we saw that the ConcurrentHashMap connectionStates in org.apache.activemq.state.ConnectionStateTracker in org.apache.activemq.transport.failover.FailoverTransport consumes very much heap space. The ConnectionStateTracker is an implementation of a CommandVisitor, but the function to remove connections from the connectionStates ConcurrentHashMap in processRemoveConnection() is never called, as it does not correspond to the CommandVisitor interface. The same applies to the processRemoveConsumer() and processRemoveSession() functions. I attached a patch to fix those bugs and remove the memory leak in ConnectionStateTracker (a simplified illustration of this kind of signature mismatch appears below).
Config Admin throwing NPE
BJ is working on a test case for DS that uses CM, which is resulting in the following exception:
java.lang.NullPointerException
at org.apache.felix.cm.impl.ConfigurationManager.getCachedConfiguration(ConfigurationManager.java:281)
at org.apache.felix.cm.impl.ConfigurationManager.bundleChanged(ConfigurationManager.java:509)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:941)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:220)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:330)
He says, "It seems CM is trying to clean up unbound configurations and barfing." Let me know if more information is needed.
iPOJO Static binding policy is not compliant with the Declarative Service static binding policy.
iPOJO's static policy does not stop and start the instance when a static service dependency is broken. The DS specification mentions that the component has to be deactivated, so component instances are lost.
Ruby lib doesn't rescue properly from lack of native_thrift extension
Files that try to load thrift_native need to rescue from LoadError. LoadError isn't a StandardError, so it isn't caught by a bare rescue. From current HEAD (757825).
Clio:rb kev$ g g thrift_native
Manifest:ext/thrift_native.c
ext/extconf.rb:create_makefile 'thrift_native'
ext/thrift_native.c:void Init_thrift_native() {
lib/thrift/protocol/binaryprotocolaccelerated.rb:require 'thrift_native'
spec/protocol_spec.rb:require "thrift_native"
spec/spec_helper.rb:require "thrift_native"
We may want to put the require for the extension in a ruby file, and just rescue there. That way load path stuff is taken care of properly, and we get our rescue.
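To make the ConnectionStateTracker mismatch above concrete, here is a hedged, simplified illustration (the types and signatures are invented for illustration, not the actual ActiveMQ API): a visitor method whose parameter list no longer matches the interface becomes a dead overload that the dispatcher never invokes.
{code}
// Simplified illustration of a dead overload; not the real ActiveMQ types.
interface CommandVisitor {
    Object processRemoveConnection(String connectionId, long lastDeliveredSequenceId);
}

class Tracker implements CommandVisitor {
    // Same name, different parameters: this is never called through the
    // CommandVisitor interface, so tracked state is never removed.
    public Object processRemoveConnection(String connectionId) {
        return null;
    }

    // Declaring the interface's exact signature (ideally with @Override, so
    // any future drift fails to compile) is what makes the cleanup actually run.
    @Override
    public Object processRemoveConnection(String connectionId, long lastDeliveredSequenceId) {
        return processRemoveConnection(connectionId);
    }
}
{code}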
duplicated code in (Default)SolrHighlighter and HighlightingUtils
A large quantity of code is duplicated between the deprecated HighlightingUtils class and the newer SolrHighlighter and DefaultSolrHighlighter (which have been getting bug fixes and enhancements). The Utils class is no longer used anywhere in Solr, but people writing plugins may be taking advantage of it, so it should be cleaned up.
ResponseBuilder implementation sets date headers with Date.toString() method
The {{expires()}} and {{lastModified()}} methods of {{ResponseBuilderImpl}} set response headers with the {{Date.toString()}} method, while they should obey the time format specified by RFC 1123. This means that the date should be formatted like
{code}
new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.ENGLISH).format(expires);
{code}
ResponseBuilder implementation returns nulls from various builder methods
The {{ResponseBuilderImpl}} class returns {{null}} instead of {{this}} from various builder methods like {{expires()}} and {{language()}}, so the proper invocation throws an NPE.
[hive] problem in group by in case of empty input files
Include py packages for thrift service
Include php packages for thrift service
CalendarButton opens to wrong month after setSelectedDate() has been called
What steps will reproduce the problem?
1. Create a CalendarButton
2. Call setSelectedDate(new CalendarDate(2008, 0, 0))
3. Press the button
What is the expected output? What do you see instead?
You expect the calendar to be opened to January 2008, but instead it's opened to the current month. The general expectation is that when setSelectedDate() is called, the calendar is set to display that month.
Task Tracker burns a lot of cpu in calling getLocalCache
I noticed that many times a task tracker maxes out up to 6 CPUs. During that time, iostat shows the majority of that was system cpu. That situation can last for quite long. During that time, I saw a number of threads in the following state:
java.lang.Thread.State: RUNNABLE
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:228)
at java.io.File.exists(File.java:733)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:399)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:407)
at org.apache.hadoop.filecache.DistributedCache.getLocalCache(DistributedCache.java:176)
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:140)
I suspect that getLocalCache is too expensive, and calling it for every task initialization seems too wasteful.
Extensionmanager openConnection(URL) method should be public
The openConnection(URL) method should be public because otherwise the URLStreamHandlerProxy gets a "no such method" exception when it is trying to use it via reflection.
This prevents extension bundles from working in case the extension URL has been toString'ed and is then recreated.
Invalid code generated for string constants containing single quotes (')
If I declare a constant in a Thrift IDL like this:
const string HAMMER_TIME = "Can't touch this";
the generated Python sources contain a constant like this:
HAMMER_TIME = 'Can't touch this';
This doesn't work because the apostrophe terminates the string constant. Since Python supports string constants using either ' or ", an easy fix is just to use " around Python constants instead:
Index: compiler/cpp/src/generate/t_py_generator.cc
===================================================================
--- compiler/cpp/src/generate/t_py_generator.cc (revision 701711)
+++ compiler/cpp/src/generate/t_py_generator.cc (working copy)
@@ -363,7 +363,7 @@
 t_base_type::t_base tbase = ((t_base_type*)type)->get_base();
 switch (tbase) {
 case t_base_type::TYPE_STRING:
- out << "'" << value->get_string() << "'";
+ out << '"' << value->get_string() << '"';
 break;
 case t_base_type::TYPE_BOOL:
 out << (value->get_integer() > 0 ? "True" : "False");
comments in JobInProgress related to TaskCommitThread are not valid
There are some comments in JobInProgress referring to TaskCommitThread. Since TaskCommitThread is no longer present, the comments should be deleted/modified.
Requests for jobs/schools fields result in errors
Using the js api to request javascript fields for schools or jobs results in errors on the server. These field names need to be translated to the proper backend field name -- organizations.
cxf bc provider loses faultstring and faultcode for incoming soap fault message from external service
[hive] 1 reducer should be used if no grouping key is present in all scenarios
Component.getVisibleArea() breaks when encountering a Viewport
There is a logic error in Component.getVisibleArea()
If we do not take start code as a part of region server recovery, we could inadvertently try to reassign regions assigned to a restarted server with a different start code
TupleWritable can return incorrect results if it contains more than 32 values
When attempting to do an outer join on 45 files with the CompositeInputFormat, I've been encountering unexpected results in the TupleWritable returned by the record reader. On closer inspection, it seems to be because TupleWritable.setWritten(int) is incorrectly setting some tuple positions as written, i.e. when you call setWritten(42), it also sets position 10. The following JUnit test demonstrates the problem:
{code}
public void testWideTuple() throws Exception {
  Text emptyText = new Text("Should be empty");
  Writable[] values = new Writable[64];
  Arrays.fill(values,emptyText);
  values[42] = new Text("Number 42");
  TupleWritable tuple = new TupleWritable(values);
  tuple.setWritten(42);

  for (int pos=0; pos<tuple.size();pos++) {
    boolean has = tuple.has(pos);
    if (pos == 42) {
      assertTrue(has);
    } else {
      assertFalse("Tuple position is incorrectly labelled as set: " + pos, has);
    }
  }
}
{code}
Similarly, TupleWritable.setWritten(9) also causes TupleWritable.has(41) to incorrectly return true.
Strange list of imported packages returned by the package admin
When asking for imported packages, the package admin returns a huge list containing all the packages exported by the system bundle. This issue has appeared in the currently developed version. I get the issue inside the web console, but it just relies on the package admin.
So, to reproduce it just look at packages imported by any bundles (maybe except the system bundle?). Here is the list I get for a bundle importing nothing: javax.accessibility,version=1.5.0 from org.apache.felix.framework (0) javax.activity,version=1.5.0 from org.apache.felix.framework (0) javax.crypto,version=1.5.0 from org.apache.felix.framework (0) javax.crypto.interfaces,version=1.5.0 from org.apache.felix.framework (0) javax.crypto.spec,version=1.5.0 from org.apache.felix.framework (0) javax.imageio,version=1.5.0 from org.apache.felix.framework (0) javax.imageio.event,version=1.5.0 from org.apache.felix.framework (0) javax.imageio.metadata,version=1.5.0 from org.apache.felix.framework (0) javax.imageio.plugins.bmp,version=1.5.0 from org.apache.felix.framework (0) javax.imageio.plugins.jpeg,version=1.5.0 from org.apache.felix.framework (0) javax.imageio.spi,version=1.5.0 from org.apache.felix.framework (0) javax.imageio.stream,version=1.5.0 from org.apache.felix.framework (0) javax.management,version=1.5.0 from org.apache.felix.framework (0) javax.management.loading,version=1.5.0 from org.apache.felix.framework (0) javax.management.modelmbean,version=1.5.0 from org.apache.felix.framework (0) javax.management.monitor,version=1.5.0 from org.apache.felix.framework (0) javax.management.openmbean,version=1.5.0 from org.apache.felix.framework (0) javax.management.relation,version=1.5.0 from org.apache.felix.framework (0) javax.management.remote,version=1.5.0 from org.apache.felix.framework (0) javax.management.remote.rmi,version=1.5.0 from org.apache.felix.framework (0) javax.management.timer,version=1.5.0 from org.apache.felix.framework (0) javax.naming,version=1.5.0 from org.apache.felix.framework (0) javax.naming.directory,version=1.5.0 from org.apache.felix.framework (0) javax.naming.event,version=1.5.0 from org.apache.felix.framework (0) javax.naming.ldap,version=1.5.0 from org.apache.felix.framework (0) javax.naming.spi,version=1.5.0 from org.apache.felix.framework (0) javax.net,version=1.5.0 from org.apache.felix.framework (0) javax.net.ssl,version=1.5.0 from org.apache.felix.framework (0) javax.print,version=1.5.0 from org.apache.felix.framework (0) javax.print.attribute,version=1.5.0 from org.apache.felix.framework (0) javax.print.attribute.standard,version=1.5.0 from org.apache.felix.framework (0) javax.print.event,version=1.5.0 from org.apache.felix.framework (0) javax.rmi,version=1.5.0 from org.apache.felix.framework (0) javax.rmi.CORBA,version=1.5.0 from org.apache.felix.framework (0) javax.rmi.ssl,version=1.5.0 from org.apache.felix.framework (0) javax.security.auth,version=1.5.0 from org.apache.felix.framework (0) javax.security.auth.callback,version=1.5.0 from org.apache.felix.framework (0) javax.security.auth.kerberos,version=1.5.0 from org.apache.felix.framework (0) javax.security.auth.login,version=1.5.0 from org.apache.felix.framework (0) javax.security.auth.spi,version=1.5.0 from org.apache.felix.framework (0) javax.security.auth.x500,version=1.5.0 from org.apache.felix.framework (0) javax.security.cert,version=1.5.0 from org.apache.felix.framework (0) javax.security.sasl,version=1.5.0 from org.apache.felix.framework (0) javax.sound.midi,version=1.5.0 from org.apache.felix.framework (0) javax.sound.midi.spi,version=1.5.0 from org.apache.felix.framework (0) javax.sound.sampled,version=1.5.0 from org.apache.felix.framework (0) javax.sound.sampled.spi,version=1.5.0 from org.apache.felix.framework (0) javax.sql,version=1.5.0 from org.apache.felix.framework (0) 
javax.sql.rowset,version=1.5.0 from org.apache.felix.framework (0) javax.sql.rowset.serial,version=1.5.0 from org.apache.felix.framework (0) javax.sql.rowset.spi,version=1.5.0 from org.apache.felix.framework (0) javax.swing,version=1.5.0 from org.apache.felix.framework (0) javax.swing.border,version=1.5.0 from org.apache.felix.framework (0) javax.swing.colorchooser,version=1.5.0 from org.apache.felix.framework (0) javax.swing.event,version=1.5.0 from org.apache.felix.framework (0) javax.swing.filechooser,version=1.5.0 from org.apache.felix.framework (0) javax.swing.plaf,version=1.5.0 from org.apache.felix.framework (0) javax.swing.plaf.basic,version=1.5.0 from org.apache.felix.framework (0) javax.swing.plaf.metal,version=1.5.0 from org.apache.felix.framework (0) javax.swing.plaf.multi,version=1.5.0 from org.apache.felix.framework (0) javax.swing.plaf.synth,version=1.5.0 from org.apache.felix.framework (0) javax.swing.table,version=1.5.0 from org.apache.felix.framework (0) javax.swing.text,version=1.5.0 from org.apache.felix.framework (0) javax.swing.text.html,version=1.5.0 from org.apache.felix.framework (0) javax.swing.text.html.parser,version=1.5.0 from org.apache.felix.framework (0) javax.swing.text.rtf,version=1.5.0 from org.apache.felix.framework (0) javax.swing.tree,version=1.5.0 from org.apache.felix.framework (0) javax.swing.undo,version=1.5.0 from org.apache.felix.framework (0) javax.transaction,version=1.5.0 from org.apache.felix.framework (0) javax.transaction.xa,version=1.5.0 from org.apache.felix.framework (0) javax.xml,version=1.5.0 from org.apache.felix.framework (0) javax.xml.datatype,version=1.5.0 from org.apache.felix.framework (0) javax.xml.namespace,version=1.5.0 from org.apache.felix.framework (0) javax.xml.parsers,version=1.5.0 from org.apache.felix.framework (0) javax.xml.transform,version=1.5.0 from org.apache.felix.framework (0) javax.xml.transform.dom,version=1.5.0 from org.apache.felix.framework (0) javax.xml.transform.sax,version=1.5.0 from org.apache.felix.framework (0) javax.xml.transform.stream,version=1.5.0 from org.apache.felix.framework (0) javax.xml.validation,version=1.5.0 from org.apache.felix.framework (0) javax.xml.xpath,version=1.5.0 from org.apache.felix.framework (0) org.apache.felix.fileinstall,version=0.0.0 from org.apache.felix.fileinstall (3) org.ietf.jgss,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA.DynAnyPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA.ORBPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA.TypeCodePackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA.portable,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA_2_3,version=1.5.0 from org.apache.felix.framework (0) org.omg.CORBA_2_3.portable,version=1.5.0 from org.apache.felix.framework (0) org.omg.CosNaming,version=1.5.0 from org.apache.felix.framework (0) org.omg.CosNaming.NamingContextExtPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.CosNaming.NamingContextPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.Dynamic,version=1.5.0 from org.apache.felix.framework (0) org.omg.DynamicAny,version=1.5.0 from org.apache.felix.framework (0) org.omg.DynamicAny.DynAnyFactoryPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.DynamicAny.DynAnyPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.IOP,version=1.5.0 from org.apache.felix.framework (0) 
org.omg.IOP.CodecFactoryPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.IOP.CodecPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.Messaging,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableInterceptor,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableInterceptor.ORBInitInfoPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableServer,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableServer.CurrentPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableServer.POAManagerPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableServer.POAPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableServer.ServantLocatorPackage,version=1.5.0 from org.apache.felix.framework (0) org.omg.PortableServer.portable,version=1.5.0 from org.apache.felix.framework (0) org.omg.SendingContext,version=1.5.0 from org.apache.felix.framework (0) org.omg.stub.java.rmi,version=1.5.0 from org.apache.felix.framework (0) org.omg.stub.javax.management.remote.rmi,version=1.5.0 from org.apache.felix.framework (0) org.osgi.framework,version=1.4.0 from org.apache.felix.framework (0) org.osgi.framework.hooks.service,version=1.4.0 from org.apache.felix.framework (0) org.osgi.service.cm,version=1.2.0 from org.apache.felix.fileinstall (3) org.osgi.service.log,version=1.3.0 from org.apache.felix.fileinstall (3) org.osgi.service.packageadmin,version=1.2.0 from org.apache.felix.framework (0) org.osgi.service.startlevel,version=1.1.0 from org.apache.felix.framework (0) org.osgi.service.url,version=1.0.0 from org.apache.felix.framework (0) org.osgi.util.tracker,version=1.3.3 from org.apache.felix.framework (0) org.w3c.dom,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.bootstrap,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.css,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.events,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.html,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.ls,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.ranges,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.stylesheets,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.traversal,version=1.5.0 from org.apache.felix.framework (0) org.w3c.dom.views,version=1.5.0 from org.apache.felix.framework (0) org.xml.sax,version=1.5.0 from org.apache.felix.framework (0) org.xml.sax.ext,version=1.5.0 from org.apache.felix.framework (0) org.xml.sax.helpers,version=1.5.0 from org.apache.felix.framework (0)
Security update (Link to hidden form change) for Visual Theme selection
Please find attached a patch to fix this. Thank you, -Bruno
Extension points for implementation.widget
When looking at the current implementation of implementation.widget, I noticed that there is code like the following that hardcodes the generated JS code for different protocols. I would like to see the code done through extension points, so it can be extended. e.g., here is the code in WidgetImplementatonInvoker.generateJavaScriptReferenceFunction:
if (proxyClient.equals("JSONRpcClient")) {
    pw.println("referenceMap." + referenceName + " = new " + proxyClient + "(\"" + targetURI + "\").Service;");
} else {
    pw.println("referenceMap." + referenceName + " = new " + proxyClient + "(\"" + targetURI + "\");");
}
If there are other places in implementation.widget doing this kind of switch, I would also like to see them done through extension points.
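One possible shape for such an extension point, as a hedged sketch (the interface and method names below are invented for illustration, not existing Tuscany API): each proxy client type contributes its own generator, and generateJavaScriptReferenceFunction looks the generator up instead of switching on the class name.
{code}
// Hypothetical SPI sketch; names are invented for illustration.
public interface JavaScriptProxyGenerator {
    /** The client proxy type this generator handles, e.g. "JSONRpcClient". */
    String getProxyClient();

    /** Emit the JS statement that binds referenceName to targetURI. */
    String generateReference(String referenceName, String targetURI);
}

// A contribution reproducing today's hardcoded JSONRpcClient branch:
public class JsonRpcProxyGenerator implements JavaScriptProxyGenerator {
    public String getProxyClient() {
        return "JSONRpcClient";
    }

    public String generateReference(String referenceName, String targetURI) {
        return "referenceMap." + referenceName
            + " = new JSONRpcClient(\"" + targetURI + "\").Service;";
    }
}
{code}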
Thank you.
Allow parameter append for jms bindings with no endpoint
See test failures in https://issues.apache.org/activemq/browse/AMQ-2182
Camel module test failures
Just as a reminder ... there are test failures in the activemq-camel module after upgrading to 2.0:
target/surefire-reports/org.apache.activemq.camel.component.ActiveMQJmsHeaderRouteTest.txt:Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.538 sec <<< FAILURE!
target/surefire-reports/org.apache.activemq.camel.component.InvokeRequestReplyUsingJmsReplyToHeaderTest.txt:Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 22.466 sec <<< FAILURE!
target/surefire-reports/org.apache.activemq.camel.converter.InvokeMessageListenerTest.txt:Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 11.559 sec <<< FAILURE!
target/surefire-reports/org.apache.activemq.camel.converter.InvokeMessageListenerTest.txt:testSendTextMessage(org.apache.activemq.camel.converter.InvokeMessageListenerTest) Time elapsed: 11.543 sec <<< FAILURE!
SMTP transport can receive more than one message at same time
hi all, recently I started some RM tests with the commons mail transport. The mail transport listener runs in a timer task:
TimerTask timerTask = new TimerTask() {
    @Override
    public void run() {
        workerPool.execute(new Runnable() {
            public void run() {
                if (state == BaseConstants.PAUSED) {
                    if (log.isDebugEnabled()) {
                        log.debug("Transport " + getTransportName() + " poll trigger : Transport is currently paused..");
                    }
                } else {
                    poll(entry);
                }
                synchronized (entry) {
                    if (!entry.canceled) {
                        schedulePoll(entry, pollInterval);
                    }
                }
            }
        });
    }
};
entry.timerTask = timerTask;
timer.schedule(timerTask, pollInterval);
As I saw, the timer task only re-activates after the earlier invocation finishes, i.e. after completing the message. In the RM in-order delivery case, let's say we receive message number 2 before 1; then the initial thread does not return and it cannot receive message number 1. I tested this with the following sample:
TimerTask timerTask = new TimerTask() {
    public void run() {
        System.out.println("In the timer task");
        try {
            Thread.sleep(60000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Going out of timer task");
    }
};
Timer timer = new Timer("Testtimer");
timer.schedule(timerTask, 0, 1000);
And I saw the timer task does not run again until it finishes the first task. Can we start a new thread for each new message? That is how the earlier SMTP transport did it.
When send to JBI fails from an async Camel route, the Camel route will keep on waiting on the AsyncCallback
When a Camel route that uses async Camel messaging sends a MessageExchange through JBI, it will use the same, asynchronous messaging style. If the JBI send() method fails (e.g. because the target endpoint is unknown), there will be no callback on the AsyncCallback, effectively locking up the thread indefinitely.
When send() fails, the failed MessageExchange is left on the Component's list of known exchanges
When sending a MessageExchange fails asynchronously (e.g. because the target endpoint could not be found), the MessageExchange is kept on the list of known exchanges. This is a slight memory leak, but the main problem is that it can prevent stopping the endpoint correctly because it will be waiting for the exchange to finish.
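A hedged sketch of the cleanup this last report implies (the class, field, and registration names here are invented for illustration; this is not the actual servicemix-common code): whatever bookkeeping registered the exchange has to be undone when send() fails, so shutdown no longer waits on a dead exchange.
{code}
import java.util.Map;

import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.MessageExchange;
import javax.jbi.messaging.MessagingException;

// Illustrative only: undo the "known exchange" registration on a failed send.
public class ExchangeSender {
    private final DeliveryChannel channel;
    private final Map<String, MessageExchange> knownExchanges; // hypothetical bookkeeping

    public ExchangeSender(DeliveryChannel channel, Map<String, MessageExchange> knownExchanges) {
        this.channel = channel;
        this.knownExchanges = knownExchanges;
    }

    public void send(MessageExchange exchange) throws MessagingException {
        knownExchanges.put(exchange.getExchangeId(), exchange);
        try {
            channel.send(exchange);
        } catch (MessagingException e) {
            // The send never happened, so forget the exchange; otherwise the
            // component would wait forever for it to finish during shutdown.
            knownExchanges.remove(exchange.getExchangeId());
            throw e;
        }
    }
}
{code}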
Update the Chinese password incorrect tip from "" to ""
The current Chinese password-incorrect tip, "", is incorrect and should be updated to "".
Ensure that binding.atom.abdera returns ETags with speech marks round them
In 1.x the binding.atom.abdera test is failing as it is returning an ETag that isn't enclosed in speech marks. Update the binding to put ETags in speech marks.
PanelTab style and styleClass attributes are ignored
AtomAggregator test case failures
The atom aggregator sample has some test cases that are currently disabled because the test file name (FeedAggregatorTest) is not being picked up by surefire (the name should be FeedAggregatorTestCase). Trying to enable the test or running it manually produces a couple of errors (testFeedBasics and testUnmodifiedGetIfModified) related to invalid tags.
org.apache.abdera.parser.ParseException: java.lang.IllegalArgumentException: Invalid Entity Tag
at org.apache.abdera.protocol.client.AbstractClientResponse.getDocument(AbstractClientResponse.java:132)
at org.apache.abdera.protocol.client.AbstractClientResponse.getDocument(AbstractClientResponse.java:96)
at org.apache.abdera.protocol.client.AbstractClientResponse.getDocument(AbstractClientResponse.java:74)
at feed.FeedAggregatorTest.testFeedBasics(FeedAggregatorTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
Caused by: java.lang.IllegalArgumentException: Invalid Entity Tag
at org.apache.abdera.util.EntityTag.parse(EntityTag.java:56)
at org.apache.abdera.protocol.util.AbstractResponse.getEntityTag(AbstractResponse.java:58)
at org.apache.abdera.protocol.client.AbstractClientResponse.getDocument(AbstractClientResponse.java:120)
... 27 more
27 more C++ unit tests won't run on Windows The C++ unit_tests test suite won't run correctly on Windows. The main issue is that the SocketProxy class, used as a relay between client and broker that can be programmed to drop data or close a connection at an inopportune time, uses the Poller class in a way that's incompatible with the Windows IocpPoller. The Poller class is used to react to events on the sockets being used, but the SocketProxy class then expects to perform the needed send/recv operations directly on the Socket classes. However, the Windows Poller class reacts to I/O completions, not possibilities, so it's not compatible with the approach taken by SocketProxy. I tried replacing this with AsynchIO use... too messy and leaky. I have an approach working that uses select() instead of the Poller. It's portable, even if a bit trickier to use correctly than Poller. Patch forthcoming. MetricDataLoader should close JDBC connection MetricDataLoader is not closing the JDBC connection clean code to fix some compiler warnings Some trivial code cleanup to fix compiler warns. Brazilian Analyzer doesn't remove stopwords when uppercase is given The order of filters matter here, just need to apply lowercase token filter before removing stopwords result = new StopFilter( result, stoptable ); result = new BrazilianStemFilter( result, excltable ); // Convert to lowercase after stemming! result = new LowerCaseFilter( result ); Lowercase must come before BrazilianStemFilter At the end of day I'll attach a patch, it's straightforward In Postgrql "BLOB" type not exist but is possible to create a "bytea" column Mail from frank Lupo: In Postgrql "BLOB" type not exist but is possible to create a "bytea" column. in DBDatabaseDriverPostgreSQL.java case BLOB: sql.append("BLOB"); if (c.getSize() > 0) sql.append(" (" + String.valueOf((long) c.getSize()) + ") "); break; change in case BLOB: sql.append("bytea"); break; http://www.postgresql.org/docs/8.3/static/datatype.html http://jdbc.postgresql.org/documentation/80/binary-data.html second mail: Hi, in DBDatabaseDriverPostgreSQL add the reserved keyword see http://www.postgresql.org/docs/current/static/sql-keywords-appendix.html /** * Constructor for the PostgreSQL database driver.<br> */ public DBDatabaseDriverPostgreSQL() { // Default Constructor // list of reserved keywords // http://www.postgresql.org/docs/current/static/sql-keywords-appendix.html reservedSQLKeywords.add("ALL".toLowerCase()); reservedSQLKeywords.add("ANALYSE".toLowerCase()); reservedSQLKeywords.add("ANALYZE".toLowerCase()); reservedSQLKeywords.add("AND".toLowerCase()); reservedSQLKeywords.add("ANY".toLowerCase()); reservedSQLKeywords.add("ARRAY".toLowerCase()); reservedSQLKeywords.add("AS".toLowerCase()); reservedSQLKeywords.add("ASC".toLowerCase()); reservedSQLKeywords.add("ASYMMETRIC".toLowerCase()); reservedSQLKeywords.add("AUTHORIZATION".toLowerCase()); reservedSQLKeywords.add("BETWEEN".toLowerCase()); reservedSQLKeywords.add("BINARY".toLowerCase()); reservedSQLKeywords.add("BOTH".toLowerCase()); reservedSQLKeywords.add("CASE".toLowerCase()); reservedSQLKeywords.add("CAST".toLowerCase()); reservedSQLKeywords.add("CHECK".toLowerCase()); reservedSQLKeywords.add("COLLATE".toLowerCase()); reservedSQLKeywords.add("COLUMN".toLowerCase()); reservedSQLKeywords.add("CONSTRAINT".toLowerCase()); reservedSQLKeywords.add("CREATE".toLowerCase()); reservedSQLKeywords.add("CROSS".toLowerCase()); reservedSQLKeywords.add("CURRENT_DATE".toLowerCase()); 
reservedSQLKeywords.add("CURRENT_ROLE".toLowerCase()); reservedSQLKeywords.add("CURRENT_TIME".toLowerCase()); reservedSQLKeywords.add("CURRENT_TIMESTAMP".toLowerCase()); reservedSQLKeywords.add("CURRENT_USER".toLowerCase()); reservedSQLKeywords.add("DEFAULT".toLowerCase()); reservedSQLKeywords.add("DEFERRABLE".toLowerCase()); reservedSQLKeywords.add("DESC".toLowerCase()); reservedSQLKeywords.add("DISTINCT".toLowerCase()); reservedSQLKeywords.add("DO".toLowerCase()); reservedSQLKeywords.add("ELSE".toLowerCase()); reservedSQLKeywords.add("END".toLowerCase()); reservedSQLKeywords.add("EXCEPT".toLowerCase()); reservedSQLKeywords.add("FALSE".toLowerCase()); reservedSQLKeywords.add("FOR".toLowerCase()); reservedSQLKeywords.add("FOREIGN".toLowerCase()); reservedSQLKeywords.add("FREEZE".toLowerCase()); reservedSQLKeywords.add("FROM".toLowerCase()); reservedSQLKeywords.add("FULL".toLowerCase()); reservedSQLKeywords.add("GRANT".toLowerCase()); reservedSQLKeywords.add("GROUP".toLowerCase()); reservedSQLKeywords.add("HAVING".toLowerCase()); reservedSQLKeywords.add("ILIKE".toLowerCase()); reservedSQLKeywords.add("IN".toLowerCase()); reservedSQLKeywords.add("INITIALLY".toLowerCase()); reservedSQLKeywords.add("INNER".toLowerCase()); reservedSQLKeywords.add("INTERSECT".toLowerCase()); reservedSQLKeywords.add("INTO".toLowerCase()); reservedSQLKeywords.add("IS".toLowerCase()); reservedSQLKeywords.add("ISNULL".toLowerCase()); reservedSQLKeywords.add("JOIN".toLowerCase()); reservedSQLKeywords.add("LEADING".toLowerCase()); reservedSQLKeywords.add("LEFT".toLowerCase()); reservedSQLKeywords.add("LIKE".toLowerCase()); reservedSQLKeywords.add("LIMIT".toLowerCase()); reservedSQLKeywords.add("LOCALTIME".toLowerCase()); reservedSQLKeywords.add("LOCALTIMESTAMP".toLowerCase()); reservedSQLKeywords.add("NATURAL".toLowerCase()); reservedSQLKeywords.add("NEW".toLowerCase()); reservedSQLKeywords.add("NOT".toLowerCase()); reservedSQLKeywords.add("NOTNULL".toLowerCase()); reservedSQLKeywords.add("NULL".toLowerCase()); reservedSQLKeywords.add("OFF".toLowerCase()); reservedSQLKeywords.add("OFFSET".toLowerCase()); reservedSQLKeywords.add("OLD".toLowerCase()); reservedSQLKeywords.add("ON".toLowerCase()); reservedSQLKeywords.add("ONLY".toLowerCase()); reservedSQLKeywords.add("OR".toLowerCase()); reservedSQLKeywords.add("ORDER".toLowerCase()); reservedSQLKeywords.add("OUTER".toLowerCase()); reservedSQLKeywords.add("OVERLAPS".toLowerCase()); reservedSQLKeywords.add("PLACING".toLowerCase()); reservedSQLKeywords.add("PRIMARY".toLowerCase()); reservedSQLKeywords.add("REFERENCES".toLowerCase()); reservedSQLKeywords.add("RETURNING".toLowerCase()); reservedSQLKeywords.add("RIGHT".toLowerCase()); reservedSQLKeywords.add("SELECT".toLowerCase()); reservedSQLKeywords.add("SESSION_USER".toLowerCase()); reservedSQLKeywords.add("SIMILAR".toLowerCase()); reservedSQLKeywords.add("SOME".toLowerCase()); reservedSQLKeywords.add("SYMMETRIC".toLowerCase()); reservedSQLKeywords.add("TABLE".toLowerCase()); reservedSQLKeywords.add("THEN".toLowerCase()); reservedSQLKeywords.add("TO".toLowerCase()); reservedSQLKeywords.add("TRAILING".toLowerCase()); reservedSQLKeywords.add("TRUE".toLowerCase()); reservedSQLKeywords.add("UNION".toLowerCase()); reservedSQLKeywords.add("UNIQUE".toLowerCase()); reservedSQLKeywords.add("USER".toLowerCase()); reservedSQLKeywords.add("USING".toLowerCase()); reservedSQLKeywords.add("VERBOSE".toLowerCase()); reservedSQLKeywords.add("WHEN".toLowerCase()); reservedSQLKeywords.add("WHERE".toLowerCase()); 
reservedSQLKeywords.add("WITH".toLowerCase()); } hadoop commands seem extremely slow in 0.20 branch hadoop dfs get, rm, -mkdir- ,cp, mv, ls, etc mydir/fileA mydir/fileB mydir/fileC ... seem to be very slow in 0.20 branch. DataImportHandler should load the data-config.xml using UTF-8 encoding Wrongly encoded data may be indexed if the data-config.xml contains unicode characters and the default encoding is not UTF-8. Spin-off from http://www.lucidimagination.com/search/document/85b187a544fdc333/encoding_problem Method invocation timeout causes odd stack trace I'm making a method call into libvirt-qpid and get an error: Task action processing failed: RuntimeError: Type Object has no attribute 'seq' /usr/lib/ruby/site_ruby/1.8/qpid/qmf.rb:1052:in `method_missing' /usr/lib/ruby/site_ruby/1.8/qpid/qmf.rb:1094:in `invoke' /usr/lib/ruby/site_ruby/1.8/qpid/qmf.rb:1035:in `method_missing' ./task_storage.rb:154:in `connect' ./taskomatic.rb:590:in `task_refresh_pool' ./taskomatic.rb:835:in `mainloop' ./taskomatic.rb:809:in `each' ./taskomatic.rb:809:in `mainloop' ./taskomatic.rb:785:in `loop' ./taskomatic.rb:785:in `mainloop' ./taskomatic.rb:874 I think this is happening because it's timed out, but the error path has a bug in it. ant clean target should really clean The ant clean target leaves build/cassandra.jar in place, and shouldn't. Technically, it should delete the entire build/ directory. When there are no decorator defaults in root folder, causes stack trace in browser During testing and usage, I suddenly started getting errors like: java.lang.NullPointerException org.apache.jetspeed.util.Path.splitPath(Path.java:288) org.apache.jetspeed.util.Path.<init>(Path.java:100) org.apache.jetspeed.util.Path.addSegment(Path.java:339) org.apache.jetspeed.decoration.DecorationFactoryImpl.getLayoutDecorationBasePath(DecorationFactoryImpl.java:469) I discovered that the PSML root folder.metadata had no default decorators. I am not sure if I somehow I deleted this XML element with the customizer (I don't remember doing so), or if its a bug in the system. I have never seen this bug in the 2.1.x branch, so I will keep an eye out for it happening again. This work provided to fix this issue will: * make the decorator factory defaults more robust, so that it can be configured to fallback to a system-wide default if it fails to find defaults anywhere else (like in the desktop) * configure a fatal exception handler in the Jetpeed Servlet so our end users do not have to view a stack trace File Portlet cleanup The File Portlet was long overdue for a rehaul. Now that full Portlet API 2.0 support is in place, we can make use of 2.0 features in the implementation. Slimmed down the impl to support the most important cases from users over time such as language support, configurable storing of content in webapp or file system, and file server pipeline. The file server pipeline has been decoupled from jetspeed so that a standard portlet request attribute passed on from the portal can be used to locate a file. This is useful for file server pipelines (like in Jetspeed) but is not limited to Jetspeed portal or coupled to Jetspeed in any way. hfile doesn't recycle decompressors The Compression codec stuff from hadoop has the concept of recycling compressors and decompressors - this is because a compression codec uses "direct buffers" which reside outside the JVM regular heap space. There is a risk that under heavy concurrent load we could run out of that 'direct buffer' heap space in the JVM. 
HFile does not call algorithm.returnDecompressor and returnCompressor. We should fix that. I found this bug via OOM crashes under jdk 1.7 - it appears to be partially due to the size of my cluster (200 GB, 800 regions, 19 servers) and partially due to weaknesses in JVM 1.7.

Multi-threaded Singleton Lazy Instantiation Issue
Old code foolishly used an "if (instance == null) instance = createInstance()" which is not thread safe -- my bad :). Current code now uses Atomics and Futures to guarantee only one thread will win the right to lazily construct the singleton.

avoid UploadRequestWrapper setCharacterEncoding WARNING when current encoding is same as the "new" one
In the upload case we use a wrapper whose setCharacterEncoding() throws a warning when the request parameter map has already been requested before.
Mar 24, 2009 2:29:18 AM org.apache.myfaces.trinidadinternal.config.upload.UploadRequestWrapper setCharacterEncoding
WARNING: Unable to set request character encoding to UTF-8, because request parameters have already been read. caller stack:
-> org.apache.myfaces.trinidadinternal.config.upload.UploadRequestWrapper.setCharacterEncoding(UploadRequestWrapper.java:89)
-> org.apache.myfaces.trinidadinternal.context.external.ServletExternalContext._initHttpServletRequest(ServletExternalContext.java:665)
-> org.apache.myfaces.trinidadinternal.context.external.ServletExternalContext.setRequest(ServletExternalContext.java:514)
-> org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:244)
-> org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:157)
-> org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
The behavior is correct, since it is (according to the servlet spec) illegal to set the character encoding after parameters have been retrieved, even though this is a pretty annoying restriction. However, in almost all cases the encoding has already been set to the desired one, and when the current encoding is equal to the new one we could suppress the WARNING.

Assertion oversight in TestLibService
There is a small but quite nasty oversight in the TestLibService class:
assertNotNull("could not find the reference from " + bName + "'s volunteer status back to " + bName, volunteer.getBorrower() == borrower);
I suspect the intention was to have assertTrue(...). The above compiles because of autoboxing creating a Boolean, but the assertion is always true, even if volunteer.getBorrower() != borrower.

IndexWriter does not do the right thing when a Thread is interrupt()'d
Spinoff from here: http://www.nabble.com/Deadlock-with-concurrent-merges-and-IndexWriter--Lucene-2.4--to22714290.html
When a Thread is interrupt()'d while inside Lucene, there is a risk currently that it will cause a spinloop and starve BG merges from completing. Instead, when possible, we should allow interruption. But unfortunately for back-compat, we will need to wrap the exception in an unchecked version. In 3.0 we can change that to simply throw InterruptedException.

FreeMarkerUtil.setAttribute bug when scope="application"
FreeMarkerUtil.setAttribute does not work when scope="application", because of a wrong string comparison.

CamelContext.getEndpoint - returns null if scheme not defined properly
If you, for instance, forget to add the colon after the scheme name, then Camel cannot find a component and will return null.
This typically happens when you mistype a URI: e.g. {{activemq.queue.foo}} instead of the correct {{activemq:queue.foo}}.

iBATIS-SqlMaps-2_en.pdf in svn trunk is outdated
The PDF version of the developer's guide in Subversion is outdated. Please export the OpenOffice file as PDF and upload it.

Bread crumbs and renamed and deleted pages
Pages that are renamed or deleted remain in the list of pages visited (the bread crumbs). If a user clicks on such a page, he receives a message that the page does not exist and is asked to create it. It would be more user-friendly if renamed and deleted pages were removed from the list of visited pages.

Unbalanced # in SQL causes unclear/misleading error message
Below is the stack trace for an exception caused by an unbalanced # in an insert statement. The "Error parsing XPath 'blah'" message does not give nearly as much information as the parser has available at the time and does not even indicate the source of the problem. I have traced the issue back to the use of StringTokenizer in InlineParameterMapParser. If this class were changed slightly to catch NoSuchElementException and rethrow SqlMapException with a message like "Failed to parse statement [xpath] with id [statement id], check inline parameters" or even "Failed to parse SQL in [statement id]", finding the problem would be almost effortless. All it needs is a big try-catch block around the whole method body. This code appears to be a failed effort to do this; StringTokenizer picks up the syntax error before it is reached:

if (!PARAMETER_TOKEN.equals(token)) {
    throw new SqlMapException("Unterminated inline parameter in mapped statement (" + "statement.getId()" + ").");
}

I just lost a good hour trying to find the source of one of these problems: after 20 minutes of reading my XML/SQL, I traced the iBatis code to figure out where a NoSuchElementException _could_ be thrown and what would be wrong. Calling the bug major might seem a bit melodramatic, but I think it is important that this gets out in a maint. release soon.
--
Error occurred. Cause: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMapConfig/sqlMap'. Cause: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMapConfig/sqlMap'. Cause: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'.
Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.xml.NodeletException: Error parsing XML. Cause: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
Caused by: com.ibatis.common.exception.NestedRuntimeException: Error parsing XPath '/sqlMap/insert'. Cause: java.util.NoSuchElementException
Caused by: java.util.NoSuchElementException
org.apache.struts.action.RequestProcessor.processException(RequestProcessor.java:535)
org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:433)

Generics for AjaxFallbackDefaultDataTable
Please make AjaxFallbackDefaultDataTable generic.

Fraction.compareTo returns 0 for some different fractions
If two different fractions evaluate to the same double due to limited precision, the compareTo method returns 0 as if they were identical.
{code}
// value is roughly PI - 3.07e-18
Fraction pi1 = new Fraction(1068966896, 340262731);
// value is roughly PI + 1.936e-17
Fraction pi2 = new Fraction( 411557987, 131002976);
System.out.println(pi1.doubleValue() - pi2.doubleValue()); // exactly 0.0 due to limited IEEE754 precision
System.out.println(pi1.compareTo(pi2)); // displays 0 instead of a negative value
{code}

zookeeper client won't reconnect if there is a problem
my regionserver got wedged:
2009-03-02 15:43:30,938 WARN org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper: Failed to create /hbase: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:87)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:35)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:482)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureExists(ZooKeeperWrapper.java:219)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.ensureParentExists(ZooKeeperWrapper.java:240)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper.checkOutOfSafeMode(ZooKeeperWrapper.java:328)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:783)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:468)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:443)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:518)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:477)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:450)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionLocation(HConnectionManager.java:295)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionLocationForRowWithRetries(HConnectionManager.java:919)
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:950)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1370)
at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1314)
at
org.apache.hadoop.hbase.client.HTable.commit(HTable.java:1294)
at org.apache.hadoop.hbase.RegionHistorian.add(RegionHistorian.java:237)
at org.apache.hadoop.hbase.RegionHistorian.add(RegionHistorian.java:216)
at org.apache.hadoop.hbase.RegionHistorian.addRegionSplit(RegionHistorian.java:174)
at org.apache.hadoop.hbase.regionserver.HRegion.splitRegion(HRegion.java:607)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.split(CompactSplitThread.java:174)
at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:107)
This message repeats over and over. Looking at the code in question:

private boolean ensureExists(final String znode) {
    try {
        zooKeeper.create(znode, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        LOG.debug("Created ZNode " + znode);
        return true;
    } catch (KeeperException.NodeExistsException e) {
        return true; // ok, move on.
    } catch (KeeperException.NoNodeException e) {
        return ensureParentExists(znode) && ensureExists(znode);
    } catch (KeeperException e) {
        LOG.warn("Failed to create " + znode + ":", e);
    } catch (InterruptedException e) {
        LOG.warn("Failed to create " + znode + ":", e);
    }
    return false;
}

We need to catch the SessionExpiredException specifically and reopen the ZK connection.

ChukwaLog4jAppender does not escape \n for exceptions

Incorrect error message when importing a bad XML schema file
Incorrect error message when importing a bad XML schema file. The message says "An error has occured when saving the schema ...". Instead of "saving" it should be "loading" or "importing".

Typo in a warning of the New ObjectClass wizard ('attribute type' instead of 'object class')
Typo in a warning of the New ObjectClass wizard ('attribute type' instead of 'object class'). This occurs when the OC does not have any name. Here's the warning message: "The attribute type does not have any name. It is recommanded to add at least one name."

PortletDefinition Language needs to indicate if its locale is a supported-locale as defined by or for the Portlet descriptor
The Portlet API 2.0 TCK has a check for no supported/specified locales in the portlet descriptor. This means that PortletConfig.getSupportedLocales() should in that case return an empty Enumeration. As Jetspeed maps and stores the supported locales and the (possibly) provided predefined portlet "PortletInfo" elements from a resource bundle in its Language OM, we need to keep track of whether a Language is created for such a supported-locale definition by or for the Portlet descriptor. Most specifically, the "default" (English) Language is always created on the fly by Jetspeed (and should not be allowed to be removed!) using either the inline PortletInfo definition in the portlet descriptor or, if a ResourceBundle is provided, taking overrides from its English ResourceBundle. If however there is no <supported-locale>en</supported-locale> defined, this "default" Language may not be used to represent the formally supported locales. This will be implemented by adding a supportedLocale boolean on Language and only setting it to true for those locales specified by <supported-locale/> definitions in the portlet descriptor.

WebApplication is not thread-safe
An instance of class org.apache.wicket.protocol.http.WebApplication is not thread-safe when shared among several sessions. Concurrent access to it leads to errors because of the following:
1.
The bufferedResponses field is initialized with a simple HashMap, which is not thread-safe and can be corrupted when different threads call the addBufferedResponse, popBufferedResponse or sessionDestroyed methods concurrently. Here is the stack trace:
[27.03.09 20:55:26:669 MSK] 0000009c RequestCycle E org.apache.wicket.RequestCycle logRuntimeException <Null Message>
java.util.ConcurrentModificationException
at java.util.HashMap$AbstractMapIterator.checkConcurrentMod(Unknown Source)
at java.util.HashMap$AbstractMapIterator.makeNext(Unknown Source)
at java.util.HashMap$KeyIterator.next(Unknown Source)
at java.util.HashMap.analyzeMap(Unknown Source)
at java.util.HashMap.rehash(Unknown Source)
at java.util.HashMap.rehash(Unknown Source)
at java.util.HashMap.putImpl(Unknown Source)
at java.util.HashMap.put(Unknown Source)
at org.apache.wicket.protocol.http.WebApplication.addBufferedResponse(WebApplication.java:639)
at org.apache.wicket.protocol.http.WebRequestCycle.redirectTo(WebRequestCycle.java:201)
at org.apache.wicket.request.target.component.PageRequestTarget.respond(PageRequestTarget.java:58)
at org.apache.wicket.request.AbstractRequestCycleProcessor.respond(AbstractRequestCycleProcessor.java:104)
at org.apache.wicket.RequestCycle.processEventsAndRespond(RequestCycle.java:1181)
at org.apache.wicket.RequestCycle.step(RequestCycle.java:1252)
at org.apache.wicket.RequestCycle.steps(RequestCycle.java:1353)
at org.apache.wicket.RequestCycle.request(RequestCycle.java:493)
at org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:355)
at org.apache.wicket.protocol.http.WicketServlet.doPost(WicketServlet.java:145)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:738)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1443)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1384)
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:131)
2. Class org.apache.wicket.util.collections.MostRecentlyUsedMap is not thread-safe and can be corrupted when different threads call addBufferedResponse and popBufferedResponse concurrently.

camel-mail - Better reconnect in case some servers throw an exception in the isConnected test
Some mail servers throw an exception for the {{isConnected}} test. We should cater for that with a try .. catch and reconnect in case of an exception.

Race condition in command-line kill for a task
The race condition occurs in the following sequence of events:
1. User issues a command-line kill for a RUNNING map-task. JT stores the task in the tasksToKill mapping.
2. TT reports the task status as SUCCEEDED.
3. JT creates a TaskCompletionEvent as SUCCEEDED. Also sends a killTaskAction.
4. Reducers fail fetching the map output.
5. Finally, the task fails with fetch failures. After HADOOP-4759, the task is left as a FAILED_UNCLEAN task, since the task is present in the tasksToKill mapping.

[classlib][luni] check for invalid socket before I/O operations
The new socket I/O code does not check for invalid sockets before read/write operations (it used to). This may cause a crash if we are trying to write to a closed/reset socket.
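A minimal sketch of the kind of guard the old code performed, assuming the socket implementation keeps a FileDescriptor for the native handle (the class and method names here are hypothetical, not Harmony's actual ones):

{code}
import java.io.FileDescriptor;
import java.net.SocketException;

// Hypothetical guard (names invented): run before each read/write so a
// closed/reset socket fails with a catchable exception instead of crashing
// in the native I/O code.
final class SocketGuard {
    static void checkOpenAndValid(FileDescriptor fd) throws SocketException {
        if (fd == null || !fd.valid()) {
            throw new SocketException("Socket is closed or invalid");
        }
    }
}
{code}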
Maven Bundle Plugin throws NPE when a dependent non-OSGi jar has no manifest
Here's the stack trace:
[INFO] Trace
org.apache.maven.lifecycle.LifecycleExecutionException
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:584)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:500)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:479)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:331)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:292)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
Caused by: org.apache.maven.plugin.MojoExecutionException
at org.apache.tuscany.maven.bundle.plugin.ModuleBundlesBuildMojo.execute(ModuleBundlesBuildMojo.java:538)
at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:453)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:559)
... 16 more
Caused by: java.lang.NullPointerException
at org.apache.tuscany.maven.bundle.plugin.BundleUtil.getBundleSymbolicName(BundleUtil.java:76)
at org.apache.tuscany.maven.bundle.plugin.ModuleBundlesBuildMojo.execute(ModuleBundlesBuildMojo.java:397)
... 18 more

Text based apache license at the bottom of every console page
There are text based apache licenses at the bottom of every console page, like this:
-----------------------------------
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-----------------------------------------------
It's not a big problem, but it is an annoying one from the user's perspective.

Bundle resolving runs extremely long
Hi, I encountered problems while resolving dependencies via the bundle repository. Here is the scenario: I have a simple obr file with a resource definition which has an unresolved dependency.
In this file the resource with the name "org.springframework.core" has a requirement for "org.apache.commons.logging". When I start felix with the obr repository location pointing to that file and type 'obr start com.kkoehler.osgi.repo-test' I'm getting the following:
--- 8< ---
Unsatisfied requirement(s):
---------------------------
(&(package=org.springframework.context)(version>=2.5.0)) Unnamed - com.kkoehler.osgi:repo-test:bundle:1.0-SNAPSHOT
(&(package=org.apache.commons.logging)(version>=1.0.4)(!(version>=2.0.0))) Spring Context Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Beans Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core Spring Core
--- 8< ---
It seems to me that felix tries to resolve the bundle "Spring Core" more than once ;-) The wrong unsatisfied dependency information can easily be fixed by checking for existing information in the current list before adding it (org.apache.felix.bundlerepository.ResolverImpl). But I think this is only a workaround for the problem of 'double resolving' (I also tried with a larger project and the resolving seems to run 'endless'). In the ResolverImpl I found a statement which 'causes' my problem, but there is also a comment for the code.
--- 8< ---
// If the resource did not resolve, then remove it from
// the resolve set, since to keep it consistent for iterative
// resolving, such as what happens when determining the best
// available candidate.
if (!result) {
    m_resolveSet.remove(resource);
}
--- 8< ---
Removing the line solved my problem but I'm not sure if I'm running into new ones... Thanks Kristian

get_columns_in fails when routed to a node that isn't the home for the key
get_columns_in fails when the request cannot be satisfied locally. What steps will reproduce the problem?
1. Insert multiple columns in some row R in a Cassandra cluster that contains more than 1 node.
2. Submit a get_columns_in query to a bunch of nodes. Using the python thrift interface, this is something like:
./Cassandra-remote -h node0:9160 get_columns_in 'Mailbox' 'rowid123' 'HeaderList' "['col1','col2']"
./Cassandra-remote -h node1:9160 get_columns_in 'Mailbox' 'rowid123' 'HeaderList' "['col1','col2']"
I've traced the error to a bug in how ReadMessage.java gets de-serialized. See the attached unit test to reproduce this. I'm also attaching a patch that fixes this.
Ar doesn't delete correctly
When working on the test cases I figured out that a deletion from an Ar archive is not as successful as it looks at first glance. For example: my bla.ar file contains test1.xml and test2.xml. I delete test2.xml. The "getNextEntry" method just delivers test1.xml. Looks correct. But checking the result file at the command line brings the following:
$> ar -t /tmp/dir26673/bla.ar
test1.xml
test2.xml
vi shows me that there is still the test2.xml entry in the archive, even when getNextEntry returns null. Deleting test2.xml and adding test.txt afterward brings the following:
$> ar -t /tmp/dir24825/bla.ar
test.txt
ar: /tmp/dir24825/bla.ar: Inappropriate file type or format

Add a configuration constant to turn on/off the logging of missing properties by OGNL
Right now OGNLValueStack logs an insane amount of useless information when properties are missing (see below). A new config constant will be added, "struts.ognl.logMissingProperties", which will be false by default. Like:
2009-03-30 12:03:11,043 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.apache.catalina.jsp_file]
2009-03-30 12:03:11,122 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,122 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,137 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,137 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,137 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,137 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,153 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.mortbay.jetty.included]
2009-03-30 12:03:11,200 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,215 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,231 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,231 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,231 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,231 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.mortbay.jetty.included]
2009-03-30 12:03:11,231 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,231 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,247 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,247 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,247 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,247 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.mortbay.jetty.included]
2009-03-30 12:03:11,262 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,262 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,247 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,262 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.mortbay.jetty.included]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [__sitemesh__filterapplied]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [struts.actionMapping]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [templateDir]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [templateDir]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [#attr.templateDir]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.mortbay.jetty.included]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [org.mortbay.jetty.included]
2009-03-30 12:03:11,278 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [theme]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [theme]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [#attr.theme]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [templateDir]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [templateDir]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [#attr.templateDir]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [theme]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [theme]
2009-03-30 12:03:11,293 WARN (com.opensymphony.xwork2.ognl.OgnlValueStack:45) - Could not find property [#attr.theme]

Namenode permits directory destruction on overwrite
The FSNamesystem's startFileInternal allows overwriting of directories. That is, if you have a directory named /foo/bar and you try to write a file named /foo/bar, the file is written and the directory disappears. This is most apparent for folks using libhdfs directly, as overwriting is always turned on. Therefore, if libhdfs applications do not check the existence of a directory first, then they will permit new files to destroy directories.

Docstrings for interface methods are improperly indented
Docstrings are not properly indented for interface methods, breaking Python's syntax, and thus services can't be imported.

Closing a consumer does not unblock receive call
Calling the close method of a consumer while it has a pending receive call blocked does not unblock the receive by returning null.
sample code (C#):

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using System.Threading;

namespace simpleConsumer
{
    class Program
    {
        private static bool _exit = false;
        private static IMessageConsumer _consumer;

        static void Main(string[] args)
        {
            Apache.NMS.ActiveMQ.ConnectionFactory connectionFactory = new ConnectionFactory("tcp://172.18.141.102:61616");
            Apache.NMS.IConnection connection = connectionFactory.CreateConnection();
            connection.Start();
            Apache.NMS.ISession session = connection.CreateSession();
            Apache.NMS.ActiveMQ.Commands.ActiveMQTopic inputTopic = new Apache.NMS.ActiveMQ.Commands.ActiveMQTopic("test.topic");
            _consumer = session.CreateConsumer(inputTopic, "2>1");
            Thread _receiveThread = new Thread(_receiveLoop);
            _receiveThread.Start();
            while (!_exit)
            {
                String command = Console.ReadLine();
                if (command == "exit")
                {
                    _exit = true;
                }
            }
            _consumer.Close();
            _receiveThread.Join();
        }

        private static void _receiveLoop()
        {
            while (!_exit)
            {
                Apache.NMS.ActiveMQ.Commands.ActiveMQTextMessage message = (Apache.NMS.ActiveMQ.Commands.ActiveMQTextMessage)_consumer.Receive();
                Console.WriteLine(message.Content.ToString() + " [looping...]");
            }
        }
    }
}

Erlang assumes that field types are correct and de-synchronizes if they are not
Patch in a moment.

TBinaryProtocol in Erlang always reads booleans as true
Title says it all. Fix ready: http://gitweb.thrift-rpc.org/?p=thrift.git;a=commitdiff;h=d86b629 The test code depends on the patches for 126 and 127.

set_http_options in thrift_http_transport doesn't work

NPE in AbstractJAXWSMethodInvoker
This issue was reported by our JBossWS-CXF integration user, see http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4219792#4219792 To fix the issue, apply the same proposed fix from that user, i.e.:
< if (sm != null) {
<     Iterator<?> iter = list.iterator();
<     while (iter.hasNext())
<     {
<         sm.getHeaders().add((Header) iter.next());
<     }
---
> Iterator<?> iter = list.iterator();
> while (iter.hasNext())
> {
>     sm.getHeaders().add((Header) iter.next());

DefaultObjectStreamFactory needs Application during deserialization
During session replication, deserialization is likely to happen outside the request thread.

JavaScript error from IE during tab adjustment
Adding and removing tabs under IE can cause JavaScript errors when attempting to update the Tab container "scrollLeft" property. Apparently there are circumstances where gadgets.TabSet.adjustNavigation() can be invoked and the tabsContainer_.scrollWidth property is not writable. The following patch appears to fix the issue:
Index: tabs/tabs.js
===================================================================
--- tabs/tabs.js (revision 709235)
+++ tabs/tabs.js (working copy)
@@ -503,19 +503,51 @@
 this.leftNavContainer_.style.display = 'none';
 this.rightNavContainer_.style.display = 'none';
 if (this.tabsContainer_.scrollWidth <= this.tabsContainer_.offsetWidth) {
- this.tabsContainer_.scrollLeft = 0;
+ if(this.tabsContainer_.scrollLeft) {
+ // to avoid JS error in IE
+ this.tabsContainer_.scrollLeft = 0;
+ }
 return;
 }

FilterImpl from Felix Framework does not support non-standard LDAP operators
Hi, as discussed on the user mailing list (http://www.mail-archive.com/users@felix.apache.org/msg03402.html) the framework doesn't support non-standard LDAP operators (see also the RFC-0112 Bundle Repository). The filter impl throws an exception while parsing a repository file containing filters with this syntax.
Kristian sample stack trace --- 8< --- ERROR: Error parsing repository metadata org.osgi.framework.InvalidSyntaxException: expected ~=|>=|<= at org.apache.felix.framework.FilterImpl.<init>(FilterImpl.java:81) at org.apache.felix.framework.BundleContextImpl.createFilter(BundleContextImpl.java:102) at org.apache.felix.bundlerepository.RequirementImpl.setFilter(RequirementImpl.java:57) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.felix.bundlerepository.metadataparser.XmlCommonHandler.startElement(XmlCommonHandler.java:490) at org.apache.felix.bundlerepository.metadataparser.kxmlsax.KXml2SAXParser.parseXML(KXml2SAXParser.java:67) at org.apache.felix.bundlerepository.RepositoryImpl.parseRepositoryFile(RepositoryImpl.java:256) at org.apache.felix.bundlerepository.RepositoryImpl.access$000(RepositoryImpl.java:44) at org.apache.felix.bundlerepository.RepositoryImpl$1.run(RepositoryImpl.java:75) at java.security.AccessController.doPrivileged(Native Method) at org.apache.felix.bundlerepository.RepositoryImpl.<init>(RepositoryImpl.java:71) at org.apache.felix.bundlerepository.RepositoryImpl.<init>(RepositoryImpl.java:60) at org.apache.felix.bundlerepository.RepositoryAdminImpl.initialize(RepositoryAdminImpl.java:206) at org.apache.felix.bundlerepository.RepositoryAdminImpl.discoverResources(RepositoryAdminImpl.java:126) at org.apache.felix.bundlerepository.ObrCommandImpl.list(ObrCommandImpl.java:210) at org.apache.felix.bundlerepository.ObrCommandImpl.execute(ObrCommandImpl.java:104) at org.apache.felix.shell.impl.Activator$ShellServiceImpl.executeCommand(Activator.java:276) at org.apache.felix.shell.tui.Activator$ShellTuiRunnable.run(Activator.java:167) at java.lang.Thread.run(Thread.java:619) WARNING: RepositoryAdminImpl: Exception creating repository file:/home/kkoehler/repository.xml. Repository is skipped. 
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.felix.bundlerepository.metadataparser.XmlCommonHandler.startElement(XmlCommonHandler.java:490)
at org.apache.felix.bundlerepository.metadataparser.kxmlsax.KXml2SAXParser.parseXML(KXml2SAXParser.java:67)
at org.apache.felix.bundlerepository.RepositoryImpl.parseRepositoryFile(RepositoryImpl.java:256)
at org.apache.felix.bundlerepository.RepositoryImpl.access$000(RepositoryImpl.java:44)
at org.apache.felix.bundlerepository.RepositoryImpl$1.run(RepositoryImpl.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.felix.bundlerepository.RepositoryImpl.<init>(RepositoryImpl.java:71)
at org.apache.felix.bundlerepository.RepositoryImpl.<init>(RepositoryImpl.java:60)
at org.apache.felix.bundlerepository.RepositoryAdminImpl.initialize(RepositoryAdminImpl.java:206)
at org.apache.felix.bundlerepository.RepositoryAdminImpl.discoverResources(RepositoryAdminImpl.java:126)
at org.apache.felix.bundlerepository.ObrCommandImpl.list(ObrCommandImpl.java:210)
at org.apache.felix.bundlerepository.ObrCommandImpl.execute(ObrCommandImpl.java:104)
at org.apache.felix.shell.impl.Activator$ShellServiceImpl.executeCommand(Activator.java:276)
at org.apache.felix.shell.tui.Activator$ShellTuiRunnable.run(Activator.java:167)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.osgi.framework.InvalidSyntaxException: expected ~=|>=|<=
at org.apache.felix.framework.FilterImpl.<init>(FilterImpl.java:81)
at org.apache.felix.framework.BundleContextImpl.createFilter(BundleContextImpl.java:102)
at org.apache.felix.bundlerepository.RequirementImpl.setFilter(RequirementImpl.java:57)
... 18 more

Watchdog is using $CHUKWA_HOME while PidFile.java is using CHUKWA_PID_DIR
PidFile is using this logic to find where to create the pid file:

String chukwaPath = System.getProperty("CHUKWA_HOME");
StringBuffer pidFilesb = new StringBuffer();
String pidDir = System.getenv("CHUKWA_PID_DIR");
if (pidDir == null) {
    pidDir = chukwaPath + File.separator + "var" + File.separator + "run";
}

Watchdog should do something similar instead of using CHUKWA_HOME. Also, Watchdog is difficult to maintain, so it would be good to create one function that does all the necessary steps and to call that function for all processes that Watchdog should watch for.
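A minimal sketch of such a shared helper, following the PidFile logic quoted above (the class and method names are hypothetical):

{code}
import java.io.File;

// Hypothetical shared helper so Watchdog and PidFile resolve the same directory.
public final class PidDirUtil {
    public static String resolvePidDir() {
        // Prefer the explicit CHUKWA_PID_DIR, as PidFile already does.
        String pidDir = System.getenv("CHUKWA_PID_DIR");
        if (pidDir == null) {
            String chukwaHome = System.getProperty("CHUKWA_HOME");
            pidDir = chukwaHome + File.separator + "var" + File.separator + "run";
        }
        return pidDir;
    }
}
{code}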
STFlow doesn't work with servicemix-http/servicemix-cxf-bc
If we configure the smx container to use only the STFlow:
<sm:flows> <sm:stFlow /> </sm:flows>
then several demos that work with the default SedaFlow no longer work, e.g. the wsdl-first and bridge demos. When sending a request from client.xml, the exception on the console is:
java.lang.Exception: HTTP request has timed out for exchange: ID:127.0.0.1-12045e7a176-14:0
at org.apache.servicemix.http.processors.ConsumerProcessor.process(ConsumerProcessor.java:119)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:514)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.done(SimpleEndpoint.java:85)
at org.apache.servicemix.eip.patterns.Pipeline.processAsyncTargetResponse(Pipeline.java:460)
at org.apache.servicemix.eip.patterns.Pipeline.processAsync(Pipeline.java:314)
at org.apache.servicemix.eip.EIPEndpoint.process(EIPEndpoint.java:166)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:514)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.jms.multiplexing.MultiplexingProviderProcessor.process(MultiplexingProviderProcessor.java:134)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at
org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.eip.patterns.Pipeline.processAsyncTransformerResponse(Pipeline.java:434)
at org.apache.servicemix.eip.patterns.Pipeline.processAsync(Pipeline.java:311)
at org.apache.servicemix.eip.EIPEndpoint.process(EIPEndpoint.java:166)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:514)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.common.endpoints.ProviderEndpoint.process(ProviderEndpoint.java:114)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.eip.patterns.Pipeline.processAsyncProvider(Pipeline.java:360)
at org.apache.servicemix.eip.patterns.Pipeline.processAsync(Pipeline.java:308)
at org.apache.servicemix.eip.EIPEndpoint.process(EIPEndpoint.java:166)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.http.processors.ConsumerProcessor.process(ConsumerProcessor.java:164)
at org.apache.servicemix.http.HttpBridgeServlet.doPost(HttpBridgeServlet.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:324)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:741)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:213)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

ERROR - HttpComponent - Error setting exchange status to ERROR
javax.jbi.messaging.MessagingException: illegal call to send / sendSync
at org.apache.servicemix.jbi.messaging.MessageExchangeImpl.handleSend(MessageExchangeImpl.java:614)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:386)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:58)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.done(SimpleEndpoint.java:85)
at org.apache.servicemix.eip.patterns.Pipeline.processAsyncTargetResponse(Pipeline.java:460)
at org.apache.servicemix.eip.patterns.Pipeline.processAsync(Pipeline.java:314)
at org.apache.servicemix.eip.EIPEndpoint.process(EIPEndpoint.java:166)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:514)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.jms.multiplexing.MultiplexingProviderProcessor.process(MultiplexingProviderProcessor.java:134)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.eip.patterns.Pipeline.processAsyncTransformerResponse(Pipeline.java:434)
at org.apache.servicemix.eip.patterns.Pipeline.processAsync(Pipeline.java:311)
at org.apache.servicemix.eip.EIPEndpoint.process(EIPEndpoint.java:166)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:514)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.common.endpoints.ProviderEndpoint.process(ProviderEndpoint.java:114)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.common.endpoints.SimpleEndpoint.send(SimpleEndpoint.java:70)
at org.apache.servicemix.eip.patterns.Pipeline.processAsyncProvider(Pipeline.java:360)
at org.apache.servicemix.eip.patterns.Pipeline.processAsync(Pipeline.java:308)
at org.apache.servicemix.eip.EIPEndpoint.process(EIPEndpoint.java:166)
at org.apache.servicemix.common.AsyncBaseLifeCycle.doProcess(AsyncBaseLifeCycle.java:540)
at org.apache.servicemix.common.AsyncBaseLifeCycle.processExchange(AsyncBaseLifeCycle.java:492)
at org.apache.servicemix.common.BaseLifeCycle.onMessageExchange(BaseLifeCycle.java:46)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.processInBound(DeliveryChannelImpl.java:623)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.doRouting(AbstractFlow.java:172)
at org.apache.servicemix.jbi.nmr.flow.st.STFlow.doSend(STFlow.java:49)
at org.apache.servicemix.jbi.nmr.flow.AbstractFlow.send(AbstractFlow.java:123)
at org.apache.servicemix.jbi.nmr.DefaultBroker.sendExchangePacket(DefaultBroker.java:283)
at org.apache.servicemix.jbi.security.SecuredBroker.sendExchangePacket(SecuredBroker.java:88)
at org.apache.servicemix.jbi.container.JBIContainer.sendExchange(JBIContainer.java:882)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.doSend(DeliveryChannelImpl.java:396)
at org.apache.servicemix.jbi.messaging.DeliveryChannelImpl.send(DeliveryChannelImpl.java:432)
at org.apache.servicemix.common.EndpointDeliveryChannel.send(EndpointDeliveryChannel.java:79)
at org.apache.servicemix.http.processors.ConsumerProcessor.process(ConsumerProcessor.java:164)
at org.apache.servicemix.http.HttpBridgeServlet.doPost(HttpBridgeServlet.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:324)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:741)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:213)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

For the cxf-wsdl-first demo, the impl class inside the cxf se can be invoked, but it just hangs after sending the response to the cxf bc. Turning on debug logging shows the cxfse component waiting for the exchange to be answered:

2009-03-27 10:13:23,245 [btpool0-0 ] DEBUG DeliveryChannelImpl - SendSync ID:127.0.0.1-12045b36baa-4:0 in DeliveryChannel{servicemix-cxfse}
2009-03-27 10:13:23,245 [btpool0-0 ] DEBUG STFlow - Called Flow send
2009-03-27 10:13:23,253 [btpool0-0 ] DEBUG DeliveryChannelImpl - Notifying exchange ID:127.0.0.1-12045b36baa-4:0(fd0a62) in DeliveryChannel{servicemix-cxfbc} from processInboundSynchronousExchange
2009-03-27 10:13:23,253 [btpool0-0 ] DEBUG DeliveryChannelImpl - Waiting for exchange ID:127.0.0.1-12045b36baa-4:0 (b4b0a4) to be answered in DeliveryChannel{servicemix-cxfse} from sendSync
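For context on the "illegal call to send / sendSync" in the trace above: the JBI container rejects a send on an exchange that is no longer ACTIVE, i.e. one that has already been answered or terminated. As a hedged illustration using only the standard JBI API (the SafeSend class is made up for this example and is not the ServiceMix fix), a component can guard against the double send like this:

{code}
// Hedged sketch of the JBI rule behind the MessagingException above: an
// exchange whose status is DONE or ERROR is final, and sending it again
// triggers MessageExchangeImpl.handleSend's "illegal call to send / sendSync".
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.ExchangeStatus;
import javax.jbi.messaging.MessageExchange;
import javax.jbi.messaging.MessagingException;

public final class SafeSend {
    public static void sendIfActive(DeliveryChannel channel, MessageExchange me)
            throws MessagingException {
        // Only an ACTIVE exchange may still be sent or answered.
        if (me.getStatus() == ExchangeStatus.ACTIVE) {
            channel.send(me);
        }
    }
}
{code}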
Thumbprint references are not added correctly for signing
When <sp:RequireThumbprintReference/> is specified in an X509Token, Rampart sends an empty KeyInfo for the Signature token.

Two different xmlsec version jars in servicemix-shared

NPE in Shell.runCommand()
I have seen one of the task failures with the following exception:

java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:441)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
at org.apache.hadoop.util.ProcessTree.isAlive(ProcessTree.java:244)
at org.apache.hadoop.util.ProcessTree.sigKillInCurrentThread(ProcessTree.java:67)
at org.apache.hadoop.util.ProcessTree.sigKill(ProcessTree.java:115)
at org.apache.hadoop.util.ProcessTree.destroyProcessGroup(ProcessTree.java:164)
at org.apache.hadoop.util.ProcessTree.destroy(ProcessTree.java:180)
at org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.kill(JvmManager.java:377)
at org.apache.hadoop.mapred.JvmManager$JvmManagerForType.reapJvm(JvmManager.java:249)
at org.apache.hadoop.mapred.JvmManager$JvmManagerForType.access$000(JvmManager.java:113)
at org.apache.hadoop.mapred.JvmManager.launchJvm(JvmManager.java:76)
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:411)
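One plausible reading of the trace, offered as a hedged sketch rather than the Hadoop patch: ProcessBuilder.start() throws a bare NullPointerException when an element of the command array is null, which matches the ProcessBuilder.start frame above. Validating the argv first turns that into a diagnosable IOException (the ShellGuard class is a hypothetical helper for illustration):

{code}
// Hedged sketch: guard the command array before handing it to ProcessBuilder,
// so a null entry fails with a message instead of a bare NPE deep in the JDK.
import java.io.IOException;
import java.util.Arrays;

public final class ShellGuard {
    public static Process startChecked(String... argv) throws IOException {
        for (String s : argv) {
            if (s == null) {
                throw new IOException("null entry in command: " + Arrays.toString(argv));
            }
        }
        // ProcessBuilder.start() is documented to throw NullPointerException
        // if an element of the command list is null.
        return new ProcessBuilder(argv).start();
    }
}
{code}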
Authorizers not consulted at login
From the jspwiki-dev list, Steve Dahl wrote:

Under JSPWiki 2.6.4, we've replaced WebContainerAuthorizer with an LDAPAuthorizer which implements JSPWiki roles in terms of LDAP groups. When I compile this for JSPWiki 2.8.0 and modify the jspwiki.properties file to use it, our custom LDAPAuthorizer gets initialized and is sent findRole(), but it never seems to get sent isUserInRole(). If it's useful information, LDAPAuthorizer implements Authorizer (not WebAuthorizer), and it implements isUserInRole() with this signature:

public boolean isUserInRole( WikiSession session, Principal role )

Is there anything that has changed in Authorizers between 2.6.4 and 2.8.0 that might explain this? Looking deeper, it seems that in JSPWiki 2.6.x, WikiSession implemented injectRolePrincipals(), which initialized the session with whatever groups and roles the user belongs to. Groups are read from the group database, and roles are read from the Authorizer. In JSPWiki 2.8.x, injectRolePrincipals() has been replaced by injectGroupPrincipals(), which reads groups from the group database but doesn't use the Authorizer. What is the Authorizer used for now?

As a side note, I originally implemented LDAPAuthorizer as LDAPGroupDatabase. I ended up rejecting this approach because GroupManager assumes that the members of a Group can be read once when the wiki is started, and that the Group's membership will only be modified by the wiki. The problem with LDAP is that the group membership can be modified from outside, and the only way to update the wiki would be to manually restart it. The Authorizer was a better solution for our purposes, because if a user is added to the LDAP group, the Authorizer reflects that change as soon as the user logs out and back in; restarting the wiki is not necessary.

Replacing indexed attribute always grows JDBM database
The JDBM database for an indexed attribute grows each time an attribute of that type is replaced. This is a bug in JDBM whose fix has not yet made its way to our JDBM branch. The fix is:
http://jdbm.cvs.sourceforge.net/viewvc/jdbm/jdbm/src/main/jdbm/btree/BTree.java?r1=1.17&r2=1.18

The in-memory storage back end doesn't work on Windows
Bug reported by Knut Magne Solem, see DERBY-646. Using the in-memory storage back end fails on Windows (i.e. running connect 'jdbc:derby:memory:MyDbTest;create=true'; from ij):

ERROR XJ001: Java exception: 'ASSERT FAILED serviceName = memory:C:\Documents and Settings\user\workspace\derby\MyDbTest;storageFactory.getCanonicalName() = C:\Documents and Settings\user\workspace\derby\MyDbTest: org.apache.derby.shared.common.sanity.AssertFailure'.

With an insane build, the error messages will look like this:

ERROR XJ041: Failed to create database 'memory:myDB', see the next exception for details.
ERROR XBM01: Startup failed due to an exception. See next exception for details.
ERROR XSTB2: Cannot log transaction changes, maybe trying to write to a read only database.

The error occurs during boot, which means Windows users are unable to use the in-memory back end at all.

Javadoc jar file does not contain legal files
Donald's fix to use the ianal plugin revealed a problem in the javadoc plugin. The javadoc plugin does not include elements from the <resources> tags in pom.xml. The normal jar plugin and source plugin do include these resources. As a result we have no legal files in the javadoc jar and we're failing the ianal check. Quick browsing of the javadoc plugin mailing lists suggests a few workarounds for this problem, which I'll test during the 1.2.1 release. After the release I'll migrate the changes to the other affected branches.

WorkspaceInfo.dispose() does not deregister SharedItemStateManager from virtual item state providers
Automatic disposal of idle workspaces frees unused workspaces, but the corresponding SharedItemStateManager (and related PersistenceManager) is still kept in memory, referenced by the virtual item state providers. This can lead to memory leaks.

Camel route should avoid returning null out message when receiving InOut exchange
When a Camel provider endpoint receives an InOut exchange, it depends on the Camel route to fill in the out message. Camel itself is far less strict in its MEP handling and doesn't necessarily provide a value there. This results in sending back the InOut exchange without an out NormalizedMessage, potentially confusing other endpoints. If no out message has been set, I think copying the resulting in message would make a sensible default, as sketched below. This is also what Camel itself does by default when an out-capable exchange doesn't have the out message set.
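A minimal sketch of that default, using only the standard JBI API (this illustrates the suggestion, not the actual ServiceMix patch; the class and method names are made up for the example):

{code}
// Hedged sketch: if the route left the out message unset on an InOut
// exchange, answer with a copy of the in message rather than a null out.
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.InOut;
import javax.jbi.messaging.MessagingException;
import javax.jbi.messaging.NormalizedMessage;

public final class InOutDefaults {
    public static void answerWithDefault(InOut me, DeliveryChannel channel)
            throws MessagingException {
        if (me.getOutMessage() == null) {
            NormalizedMessage out = me.createMessage();
            // Fall back to the (possibly transformed) in message, mirroring
            // what Camel itself does for out-capable exchanges.
            out.setContent(me.getInMessage().getContent());
            me.setOutMessage(out);
        }
        channel.send(me);
    }
}
{code}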
Useless META-INF dir in the JEE server assembly
In the JEE server assembly, there is a useless META-INF dir in the root.

Missing jar dependency for activemq-xmpp
There is one missing jar dependency which will prevent users of ActiveMQ following the documentation (http://activemq.apache.org/xmpp.html) from connecting successfully. The bigger problem is that the connector itself does not complain about any missing jar but behaves like a real connection process is underway (debugging with Psi leaves you in a state of "wtf"). The missing jar is wstx-asl-3.0.1.jar (see the pom.xml dependencies in activemq-xmpp). This should at least be mentioned immediately in the documentation, because the feature itself (XMPP in conjunction with the agent topic) is awesome. Greets, Jochen

Tests fail on Hudson
When running "ant test" a number of tests fail, for example testSingeColumn, testManyColumns, testOpen etc. Part of the stack trace:

java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.config.DatabaseDescriptor
[testng] at org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:449)

See: http://hudson.zones.apache.org/hudson/job/Cassandra/4/

HTTP conduit configuration is not loaded when not using Spring
With Spring removed, the default configuration is not loaded anymore with the JAX-WS frontend. Code:

Service service = Service.create(SERVICE);
service.addPort(PORT, SOAPBinding.SOAP11HTTP_BINDING, url);
client = service.getPort(PORT, TheService.class);
HTTPConduit httpConduit = (HTTPConduit) cProxy.getClient().getConduit(); <-- Exception

(cProxy is presumably the CXF ClientProxy wrapping the returned port; its creation is elided in the report.)

java.lang.RuntimeException: Could not find conduit initiator for transport http://schemas.xmlsoap.org/soap/http
at org.apache.cxf.binding.soap.SoapTransportFactory.getConduit(SoapTransportFactory.java:148)
at org.apache.cxf.endpoint.AbstractConduitSelector.getSelectedConduit(AbstractConduitSelector.java:73)
at org.apache.cxf.endpoint.UpfrontConduitSelector.selectConduit(UpfrontConduitSelector.java:71)
at org.apache.cxf.endpoint.ClientImpl.getConduit(ClientImpl.java:448)

Cloned SegmentReaders fail to share FieldCache entries
I just hit this on LUCENE-1516, which returns cloned readOnly readers from IndexWriter. The problem is, when cloning, we create a new [thin] cloned SegmentReader for each segment. FieldCache keys directly off this object, so if you clone the reader and do a search that requires the FieldCache (e.g., sorting), that first search is always very slow because every single segment is reloading the FieldCache. This is of course a complete showstopper for LUCENE-1516. With LUCENE-831 we'll switch to a new FieldCache API; we should ensure this bug is not present there. We should also fix the bug in the current FieldCache API since, for 2.9, users may hit this.

LocalJobRunner does not run jobs using new MapReduce API
{noformat}
java.lang.ClassCastException: org.apache.hadoop.mapred.FileSplit cannot be cast to org.apache.hadoop.mapreduce.lib.input.FileSplit
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:55)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:412)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:510)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:303)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:140)
{noformat}

You can't invoke a table function which is stored in a jar file inside the database
You get a ClassNotFoundException when you try to invoke a table function which lives in a jar file stored inside the database. This is because FromVTI.implementsDerbyStyleVTICosting() looks up the class using Class.forName() rather than using the session classloader. A similar bug is in FromVTI.getVTICosting(). This bug was reported by Krzysztof N in the following user list thread: http://www.nabble.com/Uinable-to-use-Table-function-due-to-java.lang.ClassNotFoundException--while-class-is-clearly-reachable..-td22478383.html#a22699492
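A hedged sketch of the fix direction (this is not Derby's actual code; the sessionLoader parameter stands in for whatever loader Derby's class factory exposes for in-database jars): the lookup needs to go through a loader that can see jars stored in the database, rather than the plain classpath.

{code}
// Hedged sketch of the general pattern: resolve the class through a
// session-aware loader first, falling back to the old classpath-only lookup.
public final class VtiClassLookup {
    public static Class<?> resolve(String javaClassName, ClassLoader sessionLoader)
            throws ClassNotFoundException {
        try {
            // initialize=false: only the Class object is needed for costing checks
            return Class.forName(javaClassName, false, sessionLoader);
        } catch (ClassNotFoundException cnfe) {
            // Old behavior: Class.forName(name) consults only the JVM classpath,
            // so classes in database-stored jars are invisible to it.
            return Class.forName(javaClassName);
        }
    }
}
{code}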
AMQNET-93 Calling MessageProducer.Send without destination should throw more specific exception
When sending a message on a MessageProducer whose destination has not been set, a somewhat obscure exception is thrown; see the stack trace below. Following the JMS spec (see the javadocs), an exception of one of these types would be much better:

UnsupportedOperationException - if a client uses this method with a MessageProducer that did not specify a destination at creation time.
InvalidDestinationException - if a client uses this method with a MessageProducer with an invalid destination.

I am using a build taken from svn on 1/14/2009.

Apache.NMS.ActiveMQ.BrokerException: java.lang.NullPointerException :
at Apache.NMS.ActiveMQ.Transport.ResponseCorrelator.Request(Command command, TimeSpan timeout) in d:\Hudson\jobs\Apache.NMS.ActiveMQ Trunk\workspace\src\main\csharp\Transport\ResponseCorrelator.cs:line 105
at Apache.NMS.ActiveMQ.Connection.SyncRequest(Command command, TimeSpan requestTimeout) in d:\Hudson\jobs\Apache.NMS.ActiveMQ Trunk\workspace\src\main\csharp\Connection.cs:line 333
at Apache.NMS.ActiveMQ.Session.DoSend(Command message, TimeSpan requestTimeout) in d:\Hudson\jobs\Apache.NMS.ActiveMQ Trunk\workspace\src\main\csharp\Session.cs:line 478
at Apache.NMS.ActiveMQ.MessageProducer.Send(IDestination destination, IMessage message, Boolean persistent, Byte priority, TimeSpan timeToLive, Boolean specifiedTimeToLive) in d:\Hudson\jobs\Apache.NMS.ActiveMQ Trunk\workspace\src\main\csharp\MessageProducer.cs:line 172

A first chance exception of type 'System.IO.EndOfStreamException' occurred in mscorlib.dll
The thread 0x1e54 has exited with code 0 (0x0).
The thread 0x5d4 has exited with code 0 (0x0).
The thread 0x1208 has exited with code 0 (0x0).
A first chance exception of type 'System.IO.EndOfStreamException' occurred in mscorlib.dll
The thread 0x1978 has exited with code 0 (0x0).
The thread 0x1ed4 has exited with code 0 (0x0).
The thread 0x1ba0 has exited with code 0 (0x0).
A first chance exception of type 'System.IO.EndOfStreamException' occurred in mscorlib.dll
The thread 0x1758 has exited with code 0 (0x0).

at Apache.NMS.ActiveMQ.MessageProducer.Send(IMessage message) in d:\Hudson\jobs\Apache.NMS.ActiveMQ Trunk\workspace\src\main\csharp\MessageProducer.cs:line 120
at Spring.NmsQuickStart.Client.Gateways.RequestReplyNmsTemplate.<>c__DisplayClass2.<ConvertAndSendRequestReply>b__0(ISession session, IMessageProducer producer) in L:\projects\spring-net\trunk\examples\Spring\Spring.NmsQuickStart\src\Spring\Spring.NmsQuickStart.Client\Gateways\RequestReplyNmsTemplate.cs:line 17

DFSClient does not treat write timeout of 0 properly
{{dfs.datanode.socket.write.timeout}} is used for sockets to and from datanodes. It is 8 minutes by default. Some users set this to 0, effectively disabling the write timeout (for specific reasons). When it is set to 0, DFSClient mistakenly sets the timeout to 5 seconds while writing to DataNodes. This is exactly the opposite of the real intention of setting it to 0, since 5 seconds is too short.
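A hedged sketch of the intended semantics (the configuration key and 8-minute default are from the report; the class, method, and per-node extension only approximate HDFS internals and are assumptions here):

{code}
// Hedged sketch: a configured value of 0 must propagate as 0 ("no timeout"),
// not be replaced by a small constant.
import org.apache.hadoop.conf.Configuration;

public final class WriteTimeout {
    static final int DEFAULT_WRITE_TIMEOUT = 8 * 60 * 1000; // 8 minutes

    public static int forPipeline(Configuration conf, int numNodes, int extensionPerNode) {
        int base = conf.getInt("dfs.datanode.socket.write.timeout", DEFAULT_WRITE_TIMEOUT);
        // 0 disables the timeout entirely; only positive values are extended
        // for the extra hops in the write pipeline.
        return base > 0 ? base + extensionPerNode * numNodes : 0;
    }
}
{code}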
ClassCastException on RPC-Literal Byte Array
JAX-WS unmarshalling of a byte array fails with a class cast exception; below is the stack trace for the issue:

java.lang.ClassCastException: [Ljava.lang.Byte; incompatible with [B
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at $Proxy50.retBase64Binary(Unknown Source)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at org.tempuri.CustomBinding_IBaseDataTypesRpcLitProxy.retBase64Binary(CustomBinding_IBaseDataTypesRpcLitProxy.java:97)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at com.ibm.sampleClient.SampleClient.buildRetByteArray(SampleClient.java:409)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at com.ibm.sampleClient.SampleClient.testRetByteArray(SampleClient.java:1021)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at com.ibm.sampleClient.SampleClient.CallService(SampleClient.java:166)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at com.ibm.sampleClient.SampleClient.CallFromServlet(SampleClient.java:114)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at com.ibm.wstest.bp.ClientServlet.processRequest(ClientServlet.java:88)
[12/15/08 7:46:20:749 EST] 00000024 SystemErr R at com.ibm.wstest.bp.ClientServlet.doGet(ClientServlet.java:36)

I will provide a fix for this issue.
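For illustration of the failure (this shows the nature of the bug, not the JAX-WS runtime fix): arrays of boxed and primitive types are unrelated in Java, so a Byte[] ([Ljava.lang.Byte;) can never be cast to byte[] ([B); it has to be unboxed element by element. The ByteArrays class below is a hypothetical helper:

{code}
// Hedged illustration: the unmarshaller produced Byte[] where the SEI
// declares byte[], so the proxy's cast fails exactly as in the trace.
public final class ByteArrays {
    public static byte[] unbox(Byte[] boxed) {
        byte[] primitive = new byte[boxed.length];
        for (int i = 0; i < boxed.length; i++) {
            primitive[i] = boxed[i]; // auto-unboxing; throws NPE on null elements
        }
        return primitive;
    }

    public static void main(String[] args) {
        Object value = new Byte[] {1, 2, 3};
        // byte[] raw = (byte[]) value;     // ClassCastException, as in the trace
        byte[] raw = unbox((Byte[]) value); // works
        System.out.println(raw.length);
    }
}
{code}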
Remove deprecated NetUtils.getServerAddress
This deprecated method was introduced in release 0.16 via HADOOP-2404. It should be removed in 0.17.

Features JavaScript has lots and lots of jslint errors
I ran jslint (using just browser, eqeqeq, undef, sub) on the features tree (as I did routinely with our internal JavaScript for 0.7) and I was amazed to see literally hundreds of errors. This patch tries to fix the biggest problems (usage of undefined fields, != vs. !==, == vs. ===, lots of missing semicolons). It probably needs some discussion.

Set soTimeout on the ChukwaAgentController in order to prevent deadlock on the remote application

BinaryProtocolAccelerated does not behave properly when strict reads are turned off
There's an error in BinaryProtocolAccelerated that causes read_message_begin to return the wrong message header when strict read = false.

TerminatorThread is logging as FileAdaptor
TerminatorThread was created from FileAdaptor copy-and-paste code. The logger should change to TerminatorThread.

Currently the jobComplete method on the job instrumentation class is called only for jobs that succeed; we need similar functionality for failed jobs. cf: HADOOP-5565

Pipelined data not loaded for proxied renders for "default" view
The following gadget should work:

<?xml version="1.0" encoding="UTF-8" ?>
<Module>
  <ModulePrefs title="Pipeline Demo"/>
  <Content authz="signed" href="..." xmlns:os="http://ns.opensocial.org/2008/markup">
    <os:PeopleRequest key="vf" userId="@viewer" groupId="@friends" />
  </Content>
</Module>

... but it doesn't if you pass "view=canvas" on the URL. The workaround is adding an explicit @view attribute on <Content>.